Learning to Program with Personal Robots: Influences on Student Motivation

MONICA M. MCGILL, Bradley University, Peoria, IL, USA

One of the goals of using robots in introductory programming courses is to increase motivation among learners. Several types of robots have been used extensively in the classroom to teach a variety of computer science concepts. A more recently introduced robot designed to teach programming to novice students is the Institute for Personal Robots in Education (IPRE) robot. The author chose to use this robot and to study its motivational effects on non-computer science students in a CS0 course. The purpose of this study was to determine whether using the IPRE robots motivates students to learn programming in a CS0 course. After considering various motivational theories and instruments designed to measure motivation, the author used Keller's Instructional Materials Motivation Survey to measure four components of motivation: attention, relevance, confidence, and satisfaction. Additional items were added to the survey, including a set of open-ended questions. The results of this study indicate that the use of these robots had a positive influence on participants' attitudes towards learning to program in a CS0 course, but little or no effect on relevance, confidence, or satisfaction. Results also indicate that although gender and students' interests may affect individual components of motivation, gender, technical self-perception, and interest in software development have no bearing on the overall motivational levels of students.

Categories and Subject Descriptors: K.3.2 [Computer and Information Science Education]: Computer Science Education and Curriculum

General Terms: Experimentation, Human Factors, Languages

Additional Key Words and Phrases: Motivation, education, curriculum, personal robots, Myro, CS0, CS1, programming

ACM Reference Format: McGill, M. M. 2012. Learning to program with personal robots: Influences on student motivation. ACM Trans. Comput. Educ. 12, 1, Article 4 (March 2012), 32 pages.
DOI = 10.1145/2133797.2133801 http://doi.acm.org/10.1145/2133797.2133801

1. INTRODUCTION

Over the years, computing faculty have experimented with and used learning environments in an effort to improve student motivation, particularly among novices. These learning environments vary: they include environments that contextualize learning through multimedia, novice-friendly programming environments, and programmable devices such as robots [Kölling and Rosenberg 2001; Lego 2010; Rich et al. 2004]. Several types of robots have been used in courses to teach a variety of computer science concepts. Lego Mindstorms, for example, is one of the most popular, giving novice students the task of first building a device from Lego blocks, then programming the device to behave in whichever fashion they wish [Cliburn 2006; Lego 2010]. Others include CSbots, Qwerkbots, iRobot Create, and IntelliBrain-Bot Deluxe for Java [Irobot Create Forum 2011; Lauwers et al. 2009; Mawhorter et al. 2009; Weiss and Overcast 2008].

It is important to investigate empirically whether or not learning environments actually have an effect on student motivation, since many of these systems were built for that specific purpose. Also of interest is the effect of differences in learning preferences among various groups, as has been noted in studies of other learning environments. It has been shown, for example, that females prefer real-world assignments to games when learning how to program, while males prefer assignments involving games [Wilson 2006]. Previous studies have also suggested that material in introductory courses can be irrelevant to non-majors [AAUW 2000; Margolis and Fisher 2002].

At the same time, an increasing number of courses teach programming to non-CS majors, including students majoring in information systems, multimedia, interactive media, and more [Barr et al. 2010]. Courses have been created specifically to address differences among various groups studying computing, particularly underrepresented groups such as women [Rich et al. 2004]. Likewise, it is important to know whether the use of robots in the classroom has a positive or negative effect on motivation for groups with different characteristics, such as gender, technical self-perception, or interest in development, where development is defined as interactive projects, coding/programming, and software development. This is especially important for faculty who choose to teach introductory programming courses with robots specifically to increase motivation, and also for those with limited time and resources to adapt and use robots in a classroom [Weiss and Overcast 2008].

Author's address: M. M. McGill, Department of Interactive Media, Bradley University, Peoria, Illinois; email: [email protected]. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies show this notice on the first page or initial screen of a display along with the full citation. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, to redistribute to lists, or to use any component of this work in other works requires prior specific permission and/or a fee. Permission may be requested from Publications Dept., ACM, Inc., 2 Penn Plaza, Suite 701, New York, NY 10121-0701, USA, fax +1 (212) 869-0481, or [email protected]. © 2012 ACM 1946-6226/2012/03-ART4 $10.00 DOI 10.1145/2133797.2133801 http://doi.acm.org/10.1145/2133797.2133801

ACM Transactions on Computing Education, Vol. 12, No. 1, Article 4, Publication date: March 2012.
For a newly introduced course in an Interactive Media program, the author searched for a tool that might enhance student motivation. After investigating several tools and environments, the author chose a personal robot developed for educational use by the Institute for Personal Robots in Education [IPRE 2011]. Since this was an important introductory course for students, the author wanted to know whether the robot was effective in motivating students, and particularly this group of non-CS majors, to learn programming. Therefore, a formal study was designed and implemented to investigate this. The purpose of this study was to determine whether using the IPRE robots motivates students to learn programming in a CS0 course. The overarching questions for this research study were the following.

— How does the use of robots throughout the course affect student motivation?
— Is there a relationship between motivation levels of students learning programming with the robots and gender?
— Is there a relationship between motivation levels of students learning programming with the robots and their technical self-perception?
— Is there a relationship between motivation levels of students learning programming with the robots and their interest in development?

As robots become more prevalent in society, they are also moving into computing courses, both as a learning tool and as a tool to be learned. Previous studies have examined the impact of robots in coursework [Adams 2010; Cliburn 2006; Lauwers et al. 2009], some of which have focused on the impact of robots in CS0 courses. There have been only a handful of articles on the use of IPRE robots in courses, and only one independent study of their effect on students [Martins et al. 2010; Summet et al. 2009].


In these articles, the authors used methodologies that neither defined motivation nor established that the survey instruments used were valid or reliable. The study reported in this article employs a methodology designed to address this gap. This work may be useful for instructors considering the adoption of an environment to increase motivation among students, particularly one that uses the IPRE robots. It may also be useful to researchers considering the motivational effects of robots on men versus women. Instructors in the rapidly growing fields at the periphery of computer science, where teaching programming is important but software development is not the primary goal, may also benefit from knowing whether these students are motivated by the use of robots. This article also introduces computer science educators to a formal technique for defining motivation through quantitative data collection via a standardized survey instrument that can be reused when evaluating the motivational effects of other robots and learning environments.

2. TEACHING WITH ROBOTS

This review contextualizes previously published experiences and research in two areas: robots that were developed and used prior to the introduction of the IPRE robots, and the IPRE robots themselves.

2.1. Robots in the Classroom

Robots have been used to teach computer science concepts in various courses, including secondary school outreach courses, CS0, CS1, CS2, and advanced courses [Adams 2010; Cliburn 2006; Lauwers et al. 2009; McNally 2006; Mosley and Kline 2006]. The most popular choice is Lego Mindstorms, which has good documentation, an active development community, a solid platform, and flexibility [Weiss and Overcast 2008]. Instructors of CS0 courses have used Lego Mindstorms for their ability to reinforce the mental model of concepts visually [Adams 2010]. Lego Mindstorms require students to first build their robot from Lego and electronic pieces [Lego 2010]. They have a "show and tell" appeal that engages students and enables them to observe the work of other students [Adams 2010; Cliburn 2006]. The platform also allows instructors to create open-ended assignments that give students the capability to explore. The focus is primarily on design, not coding, and instructors have also integrated competitive elements into courses using Lego Mindstorms.

CS1 and CS2 courses can also benefit from the Lego Mindstorms environment. It has been observed that using the robots stimulates intrinsic motivation, creativity, and problem solving among students [Apiola et al. 2010]. It has also been observed that the robots are fun for students to use and that students are enthusiastic about using them [Imberman and Klibaner 2005]. The robots lend themselves to constructionist approaches to teaching, and finding solutions to technical problems is a core component of the learning process.

Other robots that have been used in the classroom are CSBots, Qwerkbots, iRobot Create, RoboCode, and Karel the Robot [Becker 2001; Lauwers et al. 2009; Mawhorter et al. 2009; Mosley and Kline 2006; O'Kelly and Gibson 2006; Weiss and Overcast 2008].
Some statistical data were gathered in research on using CSBots in CS1, with the intention of addressing motivation and retention through increased relevance of course work. Students overall received better grades, were more engaged throughout the course, and completed more assignments. However, one of the primary goals, increasing the student retention rate, was not achieved [Lauwers et al. 2009].

iRobot Create, which has good documentation for its Windows environment but limited community support, has also been used to teach advanced CS concepts involving algorithms [Mawhorter et al. 2009; Weiss and Overcast 2008]. A service project designed for middle school students used Lego Mindstorms for the same reason, with positive feedback from participants [Mosley and Kline 2006].

A virtual robot, Karel the Robot, has also been used extensively in CS1 courses [Becker 2001]. The environment emphasizes the visual aspect of programming with robots and capitalizes on the "robots are fun" mantra. Another virtual robot environment, RoboCode, has been used as a tool to teach CS1 and AI to increase student engagement in a problem-based learning course [O'Kelly and Gibson 2006]. Both Karel the Robot and RoboCode are software tools, and it is not known whether the same motivational effects exist for these compared to physical robots.

2.2. IPRE Robots

Kumar et al. [2008] describe the reasons for the creation of the IPRE Scribbler/Myro learning environment and how it differs from other robots used for entry-level programming. The developers note that the robot was created as a personal robot that enables creative productions in a social, accessible, and engaging environment. While other environments encourage robotic competition, this robot would focus less on competition and more on robots in storytelling performances. According to the developers, the needs of the curriculum (primarily CS1) drove the design of the robot, and the design focused on creating tools that were easy to use. The desire to have students bring creative concepts and storytelling into their work via open-ended assignments, thereby increasing motivation, was also a consideration [Balch et al. 2008]. Additionally, the robot and its curriculum were designed to expand preconceived notions of computing [Summet et al. 2009]. However, no data are provided in these articles that investigate whether these goals have been achieved.

Though there is at present little formal literature evaluating the use of the IPRE robots in the classroom, there are articles claiming that there may be value in their use. The IPRE developers, including Blank et al., present the majority of these results. As developers, it is imperative for their studies to show sufficient methodological rigor to overcome developer biases; however, this has not been done. Xu et al. [2008] provide no formal research methodology and no quantitative or qualitative results in their article, but claim, based on surveys, that using robots engaged students at a hands-on level and that the robot was tangible, exciting for students to use, and relevant to students. Summet et al. [2009] claim that early results show success in a CS1 course and that the personal robot is relevant and fun for students.
Again, developer bias is not addressed, and no formal quantitative or qualitative results are provided. In the only study of the robots by researchers who were not part of the IPRE team, Markham and King [2010] conducted an experimental study of the effectiveness of their use in a CS1 course. They attempted to measure attitude, expectations, and experiences using their own survey instrument, which had not been shown to be valid. The pre-course survey collected data about student attitudes and expectations, while the post-course survey collected data about student attitudes and experiences. The authors noted that several of the questions on the survey turned out to be ambiguous and in need of rewording, rendering the data inconclusive. Even so, the researchers claim that students using the robots spent extra time on class-related work that was not required. Beyond this, no significant differences were found between the control group and the treatment group that used the robots in class, indicating that there is little value in using the robots. They mention that the robots are "experimental hardware" and that there will be "obstacles and inevitable challenges" as a result (p. 207). Some of the technical problems that frustrated students include excessive battery usage, connection problems, and the fact that the software was not tied to the most recent version of Python. The impact of these frustrations on novice programmers is not addressed.

Other researchers have used the IPRE robot to teach AI algorithms, though no results are presented [Mawhorter et al. 2009]. Research performed by Weiss and Overcast [2008] acknowledged that the environment is not ideal, and that choosing this robot over Lego Mindstorms or iRobot Create depends on curriculum needs and budget. The robot has several positive features, including good documentation, a variety of features, and the ability to develop the software on a variety of platforms, but they note that it does not have nearly as much hobbyist support as the other two robots.

2.3. Summary

Both physical and virtual robots have been used to teach a wide variety of computer science concepts, with the most popular being Lego Mindstorms. However, many other robots have been developed that are designed to be ready "out of the box," so students can immediately start the process of programming and learning related concepts, bypassing the robot-building and engineering processes required by Lego Mindstorms. One of these robots, the IPRE robot, has been designed specifically for novice programmers. Despite its use since 2007, the educational research stops short of a formal study that examines its effectiveness and whether or not the motivational effects claimed in several articles actually exist.

3. STUDENT MOTIVATION

Defining motivation is complex. Measuring motivation based on these definitions is even more so. Over the last several decades, psychologists have worked to define and refine what comprises motivation. Initially, research on motivation as a science centered on broad perspectives embedded in efforts to improve the function of government organizations and workplace productivity. Motivational theories from the late nineteenth century emerged from the scientific management philosophy, which took an approach heavily rooted in extrinsic motivation [Taylor 1916]. Scientific management held that workers would choose to work efficiently when the payment system (i.e., reward) was tied directly to the amount of work completed in the least amount of time. As this theory evolved through workplace implementation, psychologists like Maslow, Skinner, and others proposed their own theories, looking at both intrinsic and extrinsic motivation [Maslow 1943; Skinner 1954]. These theories were soon tested in various settings, including how they might apply to instruction. Since then, the effects of student motivation on learning have been studied extensively.

To make informed decisions regarding the design of a study measuring motivational effects, it is important to consider some of the critical literature and the various theories on motivation and learning, particularly theories that have been evaluated in post-secondary education research settings. This section presents an overview of more recent literature on motivation and learning, methodologies for measuring the relationship of motivation and learning, and how these methodologies have previously been used to gauge the impact of using robots to teach students introductory programming.

3.1. Motivation and Learning

This section provides an overview of several singular theories and an overview of a combined theory as applied to engineering.

3.1.1. Singular Theories. Ralph Tyler [1971] noted four major parts of instructional design that attracted and held the learner's interest. The first was to attract and excite the learner. Next, the instructor needs to create tasks that the learner can successfully complete and be satisfied with the outcome. This is followed by a reward and feedback system. Finally, the instructor needs to create opportunities to reinforce the learning of the task. Though Tyler was an early learning theorist, he recognized that each of these components was intertwined with motivation.

Tyler's third step, the reward and feedback system, is related to Skinner's operant conditioning theory [1954]. Operant conditioning is a behavior theory that emphasizes removing learning activities from authentic situations so that basic skills or tasks can be mastered. Tasks are broken down into small units, and students are expected to learn those tasks. Learning is passive, and reinforcement of the learning is encouraged through rewards and instructor feedback. These tasks are heavily instructor-oriented, rather than student-oriented, and the motivation to complete the tasks is highly extrinsic.

Tyler's first step, attracting and exciting the learner, is related to relatedness: instructional content to which students can relate. Theorists have made the case that culturally responsive teaching leads to higher student achievement [Bransford et al. 1999; Ladson-Billings 1995; Wlodkowski and Ginsberg 2003], leading Wlodkowski and Ginsberg to state that "motivationally effective teaching has to be culturally responsive teaching" (p. 24). Uguroglu and Walberg [1979] found that as students become older, the relationship between motivation and learning strengthens, with the highest correlations found in the later school years. Levin and Long [1981] found that people spend more time on task, work more intensely, are more cooperative, and are more open to experiences when they are motivated.

Self-determination theory relates to an individual's motivation [Deci and Ryan 1985; Ryan and Deci 2000].
The underlying belief of these theories is that humans have a psychological need for autonomy, competence, and relatedness, and that designing relevant instruction that enables learner autonomy and competence is inherently motivating. The theory has been used in both research and curriculum design in higher education [Gregg 2009; Levesque-Bristol and Stanek 2009; Turner et al. 2009]. It consists of five major sub-theories: cognitive evaluation theory, organismic integration theory, causality orientations theory, basic psychological needs theory, and goal contents theory, with each sub-theory exploring aspects of intrinsic motivation (engagement in a task because it is inherently interesting), extrinsic motivation (engagement in a task for external rewards or outcomes), or both.

Heider's [1958] theory of attribution was developed as a general theory stating that people's personal attributes lead to their behavior; Weiner [1974] later applied that theory to education. Weiner states that the most important attributes for student achievement are ability, effort, task difficulty, and luck. He further defines internal and external factors, and controllable and uncontrollable factors, as also impacting motivation, and others have applied his work to research in higher education [Latta 1974; Schonwetter et al. 1994; Wambach 1993]. For example, ability (an internal, uncontrollable factor), effort (an internal, controllable factor), and the difficulty level of the task (an external, uncontrollable factor) all have an impact on learner motivation.

Related to this is social cognitive theory, which states that behavioral patterns are set and maintained by individuals based on their environment, other people, and their behavior [Bandura 1997].
Intervention strategies can be used to change these behavioral patterns; therefore, designing instruction that changes the environment and the people around the learner can impact learner behavior, and research in higher education has explored this model [Goto and Martin 2009; Mills et al. 2007]. Concepts associated with this theory include the individual's environment, situation, behavioral capability, expectations, expectancies, self-control, observational learning, reinforcements, self-efficacy, emotional coping responses, and reciprocal determinism [Glanz et al. 2002].

Another theory linked to intrinsic motivation is flow theory, which has four primary components: control, attention, curiosity, and intrinsic interest [Csikszentmihalyi 1975; Chan and Ahern 1999; Yonghiu 2010]. A key part of instructional design is identifying the trigger that stimulates an individual's natural curiosity so that he or she becomes highly engaged in the task. Activity content becomes central to establishing and maintaining flow [Egbert 2003; Huang et al. 2010b].

The expectancy/value theory, proposed by Fishbein and Ajzen [1972], states that individual behavior is motivated by (1) the expectations that an individual has about whether or not he or she will be successful at the task, coupled with (2) the perceived value of the task. To motivate learners, choosing tasks with the largest combination of perceived success and value will have the most impact.

Vollmeyer and Rheinberg [2006] developed the cognitive-motivational process model, consisting of four factors of initial motivation: probability of success at the task, anxiety (fear of failure), interest, and challenge. Challenge is defined as whether the learner wants to succeed at the task and is derived from the expectancy/value model. The initial motivational factors are then used in conjunction with the functional state, which includes levels of concentration and engagement. This is closely related to flow theory.

Two other popular theories, academic goal orientation and achievement goal theory, are both rooted in goals, and both have been researched and studied in higher education settings [Elliott and Dweck 1988; Friedman and Mandel 2010; Roebken 2007].
Academic goal orientation treats goals as motives, whereby "all actions are given meaning, direction, and purpose by the goals that individuals seek out, and that the quality and intensity of behavior will change as these goals change" [Covington 2000, p. 174]. Achievement goal theory is directly related and takes into consideration the belief that instructors can change student motivation (either positively or negatively) by emphasizing some goals over others [Covington 2000].

Similar to many of the theories above, though categorized differently, Keller proposed a framework for increasing student motivation based on four primary components of motivation: attention, relevance, confidence, and satisfaction [Keller 1987a; Keller 1987b; Small 1997]. These components are primarily intrinsic aspects of motivation, but often as they relate to external stimuli, such as a tool introduced in an educational setting for the sole purpose of motivating students.

3.1.2. Combined Theories. Different theories of motivation in learning can also be blended. In the Guidebook on Conceptual Frameworks for Engineering Education, Svinicki [2010] proposes a combined theory of student motivation, classifying it into three categories (p. 19) and carefully mapping those categories to different motivational theories.

The first category states that student motivation is affected by the value of the targeted learning task. This value is influenced by six components, including how interesting and challenging the task is, whether or not the task is valued by the learner's peers, the relationship of the task to the long-range goals of the learner, the task's immediate usefulness or payoff, and how influential others see the task. This category maps to five theories: expectancy/value, self-determination, behavior, social cognitive, and achievement goal orientation.
The second category concerns the learner's beliefs about who controls the outcome of the targeted task. This includes whether or not the outcome is under the control of the learner, whether or not the outcome of the task is likely to change given different circumstances, and whether that change can be controlled. Svinicki maps this category to attribution and self-determination theories.


The third category comprises the learner's self-efficacy (belief in one's ability to succeed) for the task. This is influenced by past success at the same or similar tasks, having most if not all of the skills required by the task, being persuaded by someone else that success is possible, seeing someone like oneself succeed, and how difficult the task is perceived to be. These are mapped to expectancy/value, self-determination, social cognitive, and achievement goal orientation theories.

3.1.3. Summary. This brief overview of motivational theories highlights many key indicators of motivation. Some indicators appear in several different theories, including attention, the value the learner places on the task, and the perceived relevancy of the task. It would be difficult to argue that any one of the indicators presented in the singular theories does not influence motivation, particularly those mentioned in multiple theories. In applying motivational theories to engineering education, Svinicki carefully combines many of these key indicators from various theories to create a guide for educators to incorporate motivational tasks into curriculum.

3.2. Measuring Motivation

Putting these theories into practice and then measuring their impact on student motivation is challenging for singular theories, and even more so for combined ones. A good instrument for measuring motivation will be up-to-date, widely cited by other authors, peer reviewed, a good fit for the questions being researched, and contain accepted scales of measurement [Creswell 2008]. It will also have been shown to produce both valid and reliable scores. Many instruments meeting these requirements currently exist for measuring different types of motivation (Figure 1), each supporting various theories and some more general than others. The purpose of these instruments is to gain some measure of understanding of the state of mind of individuals and how this impacts their performance on assigned tasks. Several of the validated tools shown in Figure 1 are highlighted below.

The Motives for Physical Activity Measure, for example, is a specific tool designed to measure people's motives for engaging in physical activity [Ryan et al. 1997]. The Intrinsic Motivation Inventory measures participants' perceptions of their interest in and enjoyment of a task or activity, as well as their competence, effort, the task's usefulness, the pressure they feel while performing it, and the choices available to them [Ryan 1982]. The Achievement Motivation Inventory (AMI) looks at both intrinsic and extrinsic motivation and is based on the theory that many personality components drive performance [Schuler et al. 2004]. This tool measures 17 dimensions of motivation (consisting of 170 questions and measures) and was designed for the workplace.

Though those tools measure motivation generally, other tools are specific to education. The Problems in School Questionnaire, for example, was developed to determine the ways in which teachers motivate students [Deci et al. 1981].
The Motivated Strategies for Learning Questionnaire (MSLQ) assesses both the motivation and the learning strategies of students in a course [Pintrich 1999; Pintrich and De Groot 1990]. Its motivation section assesses students' goals and value beliefs, while the learning strategies section assesses students' use of cognitive strategies and management of resources. The questionnaire measures both intrinsic and extrinsic components of motivation, as well as test anxiety, critical thinking, and time and study environments. The Student Motivation Scale includes items adapted from Pintrich and De Groot [Landry 2003]. This scale was designed to measure the strength of undergraduate students' motivation and their persistence in completing their degree in the face of obstacles and barriers, and it has been used to measure school students' motivation on both a large and small scale [Martin 2003].

Learning to Program with Personal Robots


Fig. 1. Validated instruments for measuring motivation.

The Academic Motivation Scale [Vallerand et al. 1992] consists of 28 items designed to measure three types of intrinsic and extrinsic motivation, as well as amotivation. Amotivation is the term used to describe students who lack any motivation [Fry et al. 2003]. These students feel they have little control over their lives, feel they are incompetent, and do not even know why they are studying at the university level. A more recent tool, the Questionnaire of Current Motivation (QCM), is designed to measure the initial motivational state of learners before the learning process begins and focuses on the four components of the cognitive-motivational process model: interest, challenge, anxiety, and probability of success [Rheinberg et al. 2001; Vollmeyer and Rheinberg 2006]. The Flow Short Scale (FKS) is used to explore the learner's functional state and can be used as a follow-up to the QCM to measure ongoing motivational factors. A tool provided by Keller [1983, 1987a] investigates the effects of adding a material or tool to instruction with the specific intent of increasing motivation. As part of this work, Keller [1987b] developed a survey instrument, the Instructional Materials Motivation Survey (IMMS), that has been used extensively to evaluate the relationship between motivation and specific instructional materials. This instrument isolates the motivational effect of using a specific tool in the classroom [Keller 1987b; Rodgers and Withrow-Thorton 2005]. The tool can be adapted to the course environment so that the impact of a specific tool or material is measured. After its widespread use for more than two decades, the survey was empirically tested for validity in 2006 [Huang et al. 2006]. The researchers discovered that only 20 of the original 36 Likert scale items were valid in a college-level setting. They concluded that the IMMS's situational feature, which adapts the survey to the learner environment, is important for ensuring usable results. Though none of these tools is specific to computer science, several can be and have been used in computing education research, including the IMMS. Pittenger and Doering [2010], for example, used the IMMS to study the effects of motivational design on students taking an online self-study course. Zaini and Ahmad [2010] used the instrument to measure students' motivation in learning mathematics using multimedia software. Huang et al. [2010a], researchers who validated the instrument, measured motivational processing and outcome processing of students in a digital game-based learning environment. Many tools exist for measuring motivation, but not all meet the criteria for being valid and reliable survey instruments.
Researchers can develop their own tools, but this requires knowledge about survey construction, scale development, length, and format [Creswell 2008]. Turning a tool into a validated, reliable instrument can take extensive skill and resources. With many validated survey instruments available, particularly for measuring motivation, researchers can adapt these tools to capture data that they can be confident measures what they set out to measure and that can be generalized to the larger population [Creswell 2008].

3.3. Measuring Motivation in Introductory Programming Courses

In several introductory programming studies, motivation is discussed and claims are made, but motivation is not defined and the claims cannot be supported. In a study using games as a tool for learning programming, for example, Jiau et al. [2009] claimed that their instructional framework enhanced student motivation because 60% of students "had fun learning programming" and preferred their method over other assignments. In another study, students were asked about their motivation levels in a single-item Likert scale response [Kinnunen and Malmi 2008]. Barnes et al. [2008] explored the concept of improving the motivation of CS1 students through digital game-based learning techniques; however, motivation is undefined in this study and no empirical evidence is provided to support its claims. Bierre et al. [2006] reported on motivating object-oriented programming in an introductory programming course, but motivation is not defined or supported in the results. Other studies, however, have defined and examined aspects of motivation and learning in introductory programming courses more rigorously. The Computer Programming Self-Efficacy Scale is a validated instrument used to measure factors related to student success in introductory computer science courses by measuring students' confidence levels as they complete tasks and subtasks required of programmers, along with their persistence, self-organization, and regulation [Wiedenbeck 2005]. Self-efficacy in introductory programming courses for non-majors was shown to be important, and interventions can be successfully performed to increase self-efficacy during instruction. Twelve factors that might predict student success were examined by Wilson and Schrock [2001], including Weiner's attributions for success/failure, self-efficacy, encouragement, and comfort level. The researchers created their own questionnaire for testing, also used the Computer Programming Self-Efficacy Scale, and found that comfort level, which encompassed anxiety levels, perceived course difficulty, difficulty completing assignments, and ease of asking and answering questions in the classroom, was the largest predictor of success. Kinnunen and Simon [2010] used grounded research in a qualitative study to characterize the emotional toll that CS1 programming assignments take on novices. Students with positive self-efficacy avoided negative emotional experiences, and they felt more confident and reassured by feedback from instructors emphasizing that tasks were learning experiences. Simon et al. [2006] acknowledged in their study that approaches to learning depend upon various factors, including motivation, personal perception of task demands, classroom climate, and other environmental factors outside the control of the learner. Part of this study uses the Biggs Study Process Questionnaire, which emphasizes that students' perceptions and the assigned tasks/activities are central to learning, both of which relate to components of motivational theories. Roundtree et al. [2004] found that student expectations are positively correlated with passing the course, possibly relating to academic goal orientation theory and/or expectancy/value theory.
A phenomenographic study found that instructors' designs and methods of computer science instruction can be attributed to their views of student success, including their views of student behaviors and attitudes (hard work and persistence, interest level, positive attitude, and grade expectations) and intrinsic student attributes (gender and culture) [Kinnunen et al. 2007]. Another major study used four dimensions of motivation (control, curiosity, challenge, and confidence) as a theoretical framework [Boyer et al. 2009]. Achievement goal orientation, autonomy (control), corrective feedback, and confidence as it relates to the subject domain of computer science were all evaluated in an effort to understand the learning process of students. The researchers concluded that instructional design may have a positive effect on learning and on motivational outcomes such as self-confidence. Martins et al. [2010] used four instruments to define strategies that instructors can adapt to maximize learning and retention: the Inventory of Attitudes and Study Behaviours, the Course Interest Survey (based on Keller's ARCS model), the Student Motivation Problem Solving Questionnaire, and a self-efficacy test. Though this preliminary work was performed with only 15 participants, graduate students in their second year of study, the model for the study encompasses several components of motivation from a variety of theories. Related to achievement goal theory and Dweck's theories that instructors can manipulate the mindset of students to improve performance in introductory programming courses, Cutts et al. [2010] used two questionnaires to measure mindset, self-efficacy, and positive and negative affect. This work was based on Dweck's general mindset measure, which indicates whether a student holds a fixed or growth mindset. Several interventions were found to be effective, with the most effective being raising students' awareness of their own mindsets through mindset training initiatives.
Contextualizing computer science instruction can increase the relevance of the material to students, thereby increasing retention [Cooper and Cunningham 2010; Guzdial 2008, 2010]. Other research on relatedness and relevance includes developing assignments that are engaging to the diverse student groups in a CS1 course for non-majors [Hundley and Pritt 2009], developing real-world programming assignments that are challenging and interesting to students [Stevenson and Wagner 2006], and making assignments with practical or socially relevant content, which may be particularly appealing to women and minorities [Layman et al. 2007]. Two surveys have been created for measuring the attitude levels of students learning computer science [Hoegh and Moskal 2009; Weibe et al. 2003]. Though both are referred to as "attitude surveys," they cover components of motivation described in many of the above theories. Both use the same five constructs as the basis of the survey: confidence, interest, gender, usefulness, and professional. Confidence-related questions were designed to measure students' confidence in their ability to learn computer science skills. Interest measured the student's level of interest in computer science. Gender questions uncovered a student's perceptions of computing as a field for men. Usefulness measured the student's beliefs about how useful computer science is. Finally, a student's beliefs about computer science professionals were also measured. Gender has been an aspect of student success in many previous studies, primarily due to the disparity between the numbers of men and women studying computer science [Barker et al. 2009; Byrne and Lyons 2001; Rich et al. 2004]. The impact of gender on success can be attributed to many indicators of motivation, including the classroom climate, social peer interactions, relevant assignments, and students' confidence in their ability to complete assignments and coursework. Besana and Dettori [2004] examined the use of peer and mentor support groups to strengthen confidence and decrease isolation among women studying computer science; the initial study of this pilot program indicated some success. These rigorous studies have resulted in a deeper understanding of the different indicators of motivation that impact the learning process of novice programmers.
Rather than build their own survey instruments, the majority of these studies used instruments that were previously validated and have been shown to reliably measure indicators of motivation. Computing education researchers can benefit from further use of such instruments in future studies, with confidence that, when used properly, the results can provide meaningful data that can be generalized to the larger population.

3.4. Measuring Motivation in Early Courses Using Robots

Though many studies provided in Section 2 explore the use of robots in early programming courses, few specifically measure the motivational effects of using robots in these courses. One formal experimental study explored both quantitative and qualitative data to determine the effectiveness of robots in teaching computer science [Fagin and Merkle 2003]. Results indicated that overall scores dropped; however, the study did not measure motivation. Another study looked at motivation, retention, and learning using the CSBots in a pilot CS1 course [Lauwers et al. 2009]. Motivation is never defined in this study; however, the measurement techniques employed included data from surveys, retention and assignment completion statistics, and grades. The study also collected students' sources of frustration with the robots, since the pilot was designed to improve the robots for later use. No significant difference was found in retention, but students completed more assignments than in semesters without the robots. Frustration was mostly related to programming concepts rather than to the robots themselves. Students' grades were significantly higher than the previous five-year average, though they were nearly the same as in one of the years in which students did not use robots, indicating that something other than the robots can also increase student grades. Another experimental study measured the motivational effects of using Lego Mindstorms in the classroom [McWhorter and O'Connor 2009]. This study used the Motivated Strategies for Learning Questionnaire (MSLQ) to measure value, expectancy, and test anxiety. Categories measured included intrinsic and extrinsic goal orientation, task value, control of learning beliefs, self-efficacy, and test anxiety. The quantitative data did not demonstrate any motivational effects from using robots in the classroom. Apiola et al. [2010] included three components of self-determination theory in their study of a course designed to teach concepts to intermediate computer science students using Lego Mindstorms coupled with assignments designed to tap into students' creativity. The three components measured were competence, autonomy, and relatedness, and data was gathered through observation techniques and interviews with students. They found that the robots initially piqued the curiosity and motivation of students and that students struggled with technical problems related to the environment.

3.5. Summary

Motivation is a complex notion. Many theories attempt to define it, and survey instruments have been created and validated in an effort to capture aspects of motivation, particularly as they pertain to learners in classroom settings. Some of these have been applied to introductory computer science and programming courses, including CS0 and CS1 courses. Additionally, many previous studies claim that the use of robots motivates students, but much of the data is gathered through informal observation, anecdotes, and unqualified student feedback. The evidence is weak in many cases, and no convincing arguments are made that the instruments used isolate the effect of the robots on student motivation, nor do the studies explore the various indicators of motivation such as relevance, expectations, confidence, and anxiety. There is significant room for additional studies that examine the motivation of students in introductory programming courses that use robots as a teaching tool.

4. METHODOLOGY

The research questions for this study were as follows.

— How does the use of robots throughout the course affect student motivation?
— Is there a relationship between motivation levels of students learning programming with the robots and gender?
— Is there a relationship between motivation levels of students learning programming with the robots and their technical self-perception?
— Is there a relationship between motivation levels of students learning programming with the robots and their interest in development?

This section details the learning environment and then addresses the methodology in the context of the research questions.

4.1. Learning Environment

Bradley University's Department of Interactive Media offers three concentrations for majors: Animation and Visual Effects, Game Design, and Web and Application Design. Students have a variety of interests, including graphic design, sound design, time-based arts, and development. During their first year of studies, students in all concentrations are required to take the Introduction to Interactive Media Development course, which teaches introductory programming in the context of interactive media. The objectives of the course are threefold: provide an immersive experience in introductory programming that includes elements of design, introduce the development process used in creating interactive media projects, and promote an understanding of the intertwined roles of design and development. After researching several different environments that would support these goals, the author chose the IPRE robot as a tool for teaching programming. The robot was created by the IPRE as a hands-on tool for introducing students to programming using the Python scripting language and the Myro package. It consists of the commercially available Scribbler robot and a custom add-on board called the "Fluke." The Fluke adds functionality to the Scribbler, including a low-resolution camera and additional sensors. The robot is controlled via a Bluetooth connection to a computer and can move, take pictures, sense light and obstacles, and play basic sounds. Students were able to check out the university-purchased robots from a service desk that was open 75 hours per week.

The course was taught as a one-hour lecture and a two-hour lab for 15 weeks. The instructor was available during labs to answer questions and to assist with any technical difficulties with the robots. In addition to quizzes and exams, the course consisted of 10 labs completed during the lab session, 10 postlab assignments completed by students outside of class, and two major projects. Six of the labs and six of the postlabs, as well as the two major projects, required students to program the robots. Almost every assignment and both projects required the robot to react to or interact with environmental stimuli or with the user.

The first lab and postlab introduced students to classroom procedures and basic robot control, including turning right, turning left, moving forward, and moving backward. The second lab introduced slightly more complex combinations of movements, requiring students to program their robots to draw triangles, squares, and circles. The second postlab required students to create and post a video of their robot dancing to music. The third lab introduced students to repetitive movements using loops, and the accompanying postlab required students to move their robot in a pattern provided to them. The remaining labs and postlabs required students to program their robots to take pictures when the robot sensed an object and there was sufficient ambient light.
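The loop-based movement exercises above can be sketched in Python. The movement calls below mirror Myro's naming conventions (`forward`, `turnLeft`), but the pose-tracking stand-in class and its turn calibration are hypothetical illustrations, not IPRE's actual API or hardware behavior:

```python
import math

# Minimal stand-in for a Scribbler-style robot: it tracks a simulated pose
# instead of driving hardware. The method names echo Myro's movement API,
# but the simulation itself is illustrative only.
class SimScribbler:
    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.heading = 0.0  # degrees, counterclockwise from the +x axis

    def forward(self, speed, seconds):
        dist = speed * seconds  # arbitrary distance units
        self.x += dist * math.cos(math.radians(self.heading))
        self.y += dist * math.sin(math.radians(self.heading))

    def turnLeft(self, speed, seconds):
        # Assume speed 1 for 1 second yields a 90-degree turn (a made-up
        # calibration; real Scribblers need per-robot tuning).
        self.heading = (self.heading + 90 * speed * seconds) % 360

def draw_square(robot, side):
    # The second lab's exercise: repeat "move, then turn" four times.
    for _ in range(4):
        robot.forward(1, side)
        robot.turnLeft(1, 1)

robot = SimScribbler()
draw_square(robot, 5)
# After four sides and four 90-degree turns the robot is back at the start.
print(round(robot.x, 6), round(robot.y, 6), robot.heading)
```

The same move-and-turn loop, with three iterations and 120-degree turns, produces the triangle exercise.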
The midterm project was a major assignment designed to incorporate commands and design elements discussed in the first half of the semester. Students created an interactive, creative production that moved a robot around a 10-foot by 10-foot boxed-in area and captured the attention of the audience. The production could be themed on a story (like the Three Little Pigs) or a game. The program had to use a minimum of five functions and had to make use of a random number generator, if statements, loops, variables, and expressions. The robot's sensors had to be used, as did the robot's ability to take a picture. The final project was designed to be more complex and to incorporate additional sensor commands and a graphical user interface. It required students to use two or more robots to create another interactive, creative production. This production had to use sounds and props, functions with parameters and return statements, loops, and sensors. Sensors, in fact, were a particular emphasis of this project. The robots were required to interact with the user or with each other. Students were required to demonstrate their project in class and to videotape it and post the video to YouTube.

4.2. Research Study

The research questions were explored using descriptive research methods in a pre-experimental design [Creswell 2008; Stanley and Campbell 1963]. Pre- and post-course surveys were disseminated to determine participants' motivation using robots in an introductory CS0 programming course taught to non-computer science majors. The basis for the survey was the Instructional Materials Motivation Survey [Keller 1987b]. This instrument was chosen over others since it was designed to measure whether a media device or tool introduced during instruction with the specific intent of motivating students actually does so. The adaptation of this survey is grounded in the work of Huang et al. [2006] to improve the reliability and validity of the results. The survey was structured to measure four components of motivation: Attention, Relevance, Confidence, and Satisfaction (ARCS). Several modifications were made to the questions in each component. For Attention, three of the 11 questions posed in the original validated instrument did not apply in the context of the robots. For Confidence, one of the five questions posed in the validated instrument did not apply. In total, 16 questions were posed that were directly adapted from the validated IMMS model. Additionally, eight questions about relatedness, confidence, and satisfaction were added to the survey. All of the questions on the survey related to ARCS are referred to as dependent variables throughout the remainder of this article.

Participants in the study were students enrolled in the Introduction to Interactive Media Development course in the Fall and Spring semesters of the 2009-10 academic year. The participants were Interactive Media majors or minors with an interest in the visual design aspects of interactive media. Demographic data on the participants was collected, including gender, whether the participant viewed themselves as "technical" or not, and their areas of interest (design, development, audio, etc.) in interactive media. These data are referred to as the independent variables throughout the remainder of this article.

The pre-course survey, given to participants on the first day of class, consisted of demographic data, ten Likert scale questions to measure motivation, and two open-ended questions. Seven questions measured three of the ARCS components (attention, confidence, and relevance), as grouped in the study by Huang et al. [2006]. Curiosity was one of the eight items measuring the Attention component. Satisfaction was not measured, as those questions related only to how satisfied participants were after using the robots. Three questions were created and added to measure attitudes and technical perception toward programming and robots.
The post-course survey, given to participants on the last day of class, consisted of 26 Likert scale questions and four open-ended questions. Twenty-four questions were designed to measure one of the four ARCS components with respect to the robots. One question measured the participant's technical self-perception (demographic data), and one question was a duplicate used as a measure of reliability. Confidence levels were measured for programming. Likert scale questions used a 1 to 5 scale (5 = Strongly Agree, 4 = Agree, 3 = Neutral, 2 = Disagree, and 1 = Strongly Disagree). To address the first research question, descriptive statistics were calculated for each of the Likert scale responses. For paired questions on the pre- and post-course surveys, a one-sample t-test was performed. The responses to several of the open-ended questions were coded as positive, neutral, or negative. Some responses contained multiple statements, and each statement was categorized individually. Two of the questions were prompts to see what other thoughts participants might have about the robots. To address the remaining research questions, a one-way Analysis of Variance (ANOVA) was conducted to assess whether the means of each individual dependent variable were significantly different among the groups. For significant ANOVA values, follow-up tests were conducted, consisting of an independent-samples t-test for the independent variables with two levels (gender, most interested in development, and least interested in development) and pairwise comparisons using Dunnett's C for the one with three levels (technical self-perception) to compare group means. A Multivariate Analysis of Variance (MANOVA) was conducted to assess whether the means of the set of dependent variables that comprise each ARCS component vary across the independent variables.
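For the paired questions, the one-sample t-test compares a sample mean against a fixed comparison value. A minimal sketch of that statistic follows, using hypothetical Likert responses rather than the study's data; computing a p-value would additionally require a t-distribution CDF (e.g., from SciPy), so only the statistic is shown:

```python
import math
import statistics

def one_sample_t(scores, mu0):
    """One-sample t statistic: does the mean of `scores` differ from mu0?

    Returns (t, df). A p-value would require a t-distribution CDF, which
    is omitted here to keep the sketch dependency-free.
    """
    n = len(scores)
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)  # sample standard deviation (n - 1 denominator)
    t = (mean - mu0) / (sd / math.sqrt(n))
    return t, n - 1

# Hypothetical 1-5 Likert responses to a survey item; NOT the study's data.
responses = [5, 4, 4, 5, 3, 4, 4, 5, 4, 3]
# Compare against a paired item's mean, treated as a constant.
t, df = one_sample_t(responses, mu0=3.50)
print(f"t({df}) = {t:.3f}")
```

The resulting t is then checked against the critical value for the given degrees of freedom at the chosen significance level.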
Another MANOVA was then conducted to assess whether the means of the entire set of dependent variables that comprise motivation vary across the independent variables.

The researcher's institutional review board approved the research study. Since the instructor for this course was also the researcher of the study, the instructor asked other willing researchers to disseminate the survey to the participants. This was done to reduce any bias in the participants' answers that may occur if they believed their answers would affect their course grade and to help maintain confidentiality. All data was entered into and analyzed using SPSS.

Table I. Technical Self-Perception Frequency

                                                 Pre-Course    Post-Course
Strongly agree to view of self as technical           3              8
Agree to view of self as technical                   18             21
Do not view self as technical                        14              3

Table II. Participants' Interactive Media Interests

                                      Pre-Course Interest    Post-Course Interest
                                            (n = 35)               (n = 32)
                                        Most      Least        Most      Least
Design (art, graphic design,
  typography, visual appeal)             30         1           27         1
Development (interactive projects,
  coding/programming, and software
  development)                            8        15            9        13
Time-based arts (videography and/or
  cinema, animation)                     21         8           17         7
Production                               10         7            9         6
Sound/audio design                       11         6           10         3

5. RESULTS

Descriptive statistics and an independent samples t-test were used to analyze the participant data and the responses to the Likert scale questions. ANOVA and MANOVA were used to evaluate relationships between the dependent and independent variables. The remainder of this section describes the results.

5.1. Participant Demographic Data

The courses had an overall retention rate of 91%. Thirty-five participants completed the pre-course survey (13 female and 22 male), while 32 participants completed the post-course survey (13 female and 19 male). The Likert question related to self-perception stated, "I view myself as a technical person." The word "technical" was undefined and was open to interpretation by the respondent. Three groups were formed from the data: respondents who strongly agreed, those who agreed, and those who were either neutral or disagreed. Table I shows the responses to this question. Prior to the course, 21 participants (60%) agreed or strongly agreed that they viewed themselves as technical. At the end of the course, 29 participants (90%) did.

Participants were asked to select all areas of interactive media in which they were most interested (Table II). The interests of the participants varied significantly. The majority of participants (30, 86%) were most interested in design aspects. Twenty-one (60%) were interested in time-based arts. Eight (23%) were interested in development, while 10 (29%) were interested in production and 11 (31%) in sound/audio design. Participants were also asked to select all of the areas of interactive media in which they were least interested. Fifteen (43%) indicated that they were least interested in development, 8 (23%) in time-based arts, 7 (20%) in production, and 6 (18%) in sound/audio design. Only 1 participant (3%) indicated least interest in design. After the course ended, the majority of the participants' interests changed only slightly. The most change occurred in the number of students most interested in time-based arts, which decreased from 21 (60%) to 17 (54%).

Table III. Attitudes and Perception

                                                              Mean    SD
I am looking forward to learning how to program.              3.50   0.96
I am looking forward to learning how to program with robots.  4.14   0.77
Learning to program with robots seems very technical to me.   3.60   0.88
I view myself as a technical person.                          3.63   0.73

5.2. Motivation Responses

This section discusses the results from the motivational responses, including participant motivation and relationships between the dependent and independent variables.

5.2.1. Attitude and Perception. Several statements relating to attitudes and perceptions were posed in the pre-course survey to determine how participants felt about programming, robots, and themselves. The results are shown in Table III. Participants agreed much more strongly that they looked forward to learning how to program with robots than to just learning how to program. To test this statistically, a single-sample t-test was performed comparing the mean for looking forward to programming with robots against the mean (constant) for looking forward to learning how to program. Results were significant (t(34) = 4.924, p = .00).

5.2.2. Motivation Measures. Descriptive statistics were used to answer the first research question, "How does the use of robots throughout the course affect student motivation?" The mean and standard deviation for each of the ARCS-related statements (dependent variables) are presented in Table IV. Questions are grouped according to the modified IMMS categorization for the four IMMS components that collectively measure motivation. Table V provides a list of additional questions in the survey.

Pre-Course Survey

The open-ended questions reflected a variety of responses. The question "What are your thoughts and/or feelings about learning how to program?" resulted in 23 positive comments, 6 neutral, and 14 negative. Positive responses included statements such as "It seems interesting to me," "Excited to learn programming," and "Open-minded." Negative responses included statements such as "Nervous about programming" and "It's rather intimidating." The question "What are your thoughts and/or feelings about using robots in this course?" resulted in an overwhelming 31 positive comments, 1 neutral, and 5 negative. Positive responses included "Wow, that sounds awesome, let's do it," "Curious to see how it all works," and "Once I heard we were using robots, I thought the course may be a lot more interesting than my initial feelings towards programming." Negative responses included "I think it is intimidating" and "I am a little nervous."

Post-Course Survey

The open-ended question “What are your thoughts and/or feelings about learning how to program?” resulted in 20 positive comments, 4 neutral, and 15 negative. Positive comments included “Even though it is intimidating at first, it made me excited to do more coding in the future.” Negative comments included “It’s not really my area.” The open-ended question “What are your thoughts and/or feelings about using robots in this course?” resulted in 23 positive comments, 1 neutral, and 14 negative. ACM Transactions on Computing Education, Vol. 12, No. 1, Article 4, Publication date: March 2012.

4:18

Table IV. ARCS Survey Results

                                                                  Pre           Post      t-test
Statement                                                       M     SD      M     SD      p
Attention
There is something interesting that got my attention about
  learning how to program with the robot.                      3.83  0.74    4.03  0.60   0.23
The way the robot was used to teach programming helped keep
  my attention.                                                  -     -     3.66  0.79    -
Learning how to program with the robot stimulated my
  curiosity.                                                     -     -     3.59  0.71    -
The work that we did learning how to program with the robot
  held my attention.                                             -     -     3.59  0.91    -
The variety of uses of the robot helped keep my attention.       -     -     3.56  0.98    -
Using the robot to learn how to program made me feel
  rewarded for my effort.                                        -     -     3.50  0.84    -
It was a pleasure to work with the robot to learn how to
  program.                                                       -     -     3.19  0.90    -
I enjoyed using the robot to learn how to program so much
  that I would like to know more about it.                       -     -     2.84  0.81    -
Relevance
Learning how to program with the robot will be (was) useful
  for me to learn how to program effectively.                  3.83  0.66    3.47  0.84   0.06
It is clear to me how using robots is related to learning
  how to program.                                              3.60  0.88    3.97  0.82   0.08
There are sufficient stories, materials, and examples that
  showed me how the robot could be important to some people
  who are learning how to program.                               -     -     3.22  0.94    -
Confidence
I could understand quite a bit of the material involved with
  learning how to program with the robot.                        -     -     3.84  0.72    -
The programming exercises with the robot were too difficult.     -     -     2.66  0.87    -
Learning how to program with the robot was easy and it was
  easy to remember the important points about using it.          -     -     2.31  0.93    -
Learning how to program with the robot was so abstract that
  it was hard to keep my attention on it.                        -     -     2.25  0.84    -
Satisfaction
The amount of repetition using the robot to learn how to
  program caused me to be bored.                                 -     -     2.47  0.76    -

Note: dashes indicate items that appeared only on the post-course survey.

Positive comments included “They really helped me learn,” “The robots were a great way to learn basic programming! It was my first time and it was a lot of fun,” and “It made the course fun and interesting.” Negative comments included “They hindered our learning experience,” and “I felt it was unnecessary.” The questions “What are your thoughts and/or feelings about the content of this course?” and “What are your thoughts and/or feelings about how this course can be improved?” resulted in a variety of responses from individual participants. These two questions were designed as generic prompts to elicit comments about using robots to learn how to program. Feedback ranged from “I want a robot that can shoot lasers, teach us that!” to “The content was great, I just wish we had robots that worked more efficiently,” and “More reliable robots.”

5.2.3. Relationships. The remaining research question asked whether there is a relationship between gender, technical self-perception, or interest in development and the participant responses. Since there was no statistically significant difference in the

Table V. Additional Survey Results

                                                                  Pre           Post      t-test
Statement                                                       M     SD      M     SD      p
Relevance
I see the usefulness of using robots to learn how to
  program.                                                     3.74  0.61    3.84  0.88   0.59
I could relate learning how to program with the robot to
  things I have seen, done, or thought about in my own life.     -     -     3.44  0.91    -
Confidence
Given that this course will teach me how to program with
  robots, I am confident that I will complete this course
  successfully.                                                3.28  1.15      -     -      -
Learning how to program with the robot was more difficult
  to understand than I would like for it to be.                  -     -     2.84  1.05    -
Using the robots to learn how to program is (was)
  intimidating to me.                                          2.84  1.05    2.77  1.03   0.78
Satisfaction
I enjoyed learning how to program with the robot.                -     -     3.33  1.08    -
Future Course Work
I am looking forward to additional courses that require
  programming.                                                   -     -     3.12  1.24    -
I am intimidated by the thought of having to learn (more)
  programming.                                                 2.60  1.01    2.78  1.24   0.51
I am confident that I will succeed in additional courses
  that require programming.                                      -     -     3.41  1.13    -

Note: dashes indicate values not collected on that administration of the survey.

results of the pre- and post-course surveys (p < .05), the data from the post-course survey were used for these calculations. Internal consistency was satisfactory for both measures: Cronbach’s alpha was 0.74 for the 16 IMMS-related items and 0.77 for the entire set of 23 post-survey questions. ANOVA and MANOVA were used to determine whether the independent variables had a statistically significant impact on each of the dependent variables (including the eight additional questions), on each of the individual components of ARCS, and on the entire set of variables that compose motivation. The significance level for all tests was .05, unless otherwise specified. To test whether there was a relationship between the independent variables (gender, technical self-perception, and most and least interested in development) and the results of the survey (dependent variables), a one-way ANOVA was run for each of the 24 dependent variables against each of the four independent variables, for a total of 96 tests. Only significant results are reported in this section.

Gender
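Cronbach's alpha, the internal-consistency measure used above, can be computed directly from a respondents-by-items score matrix. A minimal sketch of the formula follows; the matrix shown is synthetic, not the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items matrix of scores."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic example: three perfectly consistent items give alpha of 1.0.
responses = np.array([[1, 1, 1],
                      [2, 2, 3 - 1],
                      [3, 3, 3],
                      [4, 4, 4],
                      [5, 5, 5]])
print(cronbach_alpha(responses))
```

Values of 0.74 and 0.77, as reported above, are conventionally read as satisfactory (at or above the common 0.70 threshold).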

Three significant relationships between gender and the dependent variables were found through ANOVA. For the first, “Learning how to program with the robot was useful for me to learn how to program effectively,” the ANOVA was F(1,30) = 4.96, p = .03. The strength of the relationship between gender and this statement, as assessed by η2, was moderate, with gender accounting for 14% of the variance of the dependent variable. An independent samples t-test was conducted as a follow-up test and was found to be significant, t(30) = −2.23, p = .03. The female respondents believed more strongly that using the robots to learn how to program helped them (M = 3.85, SD = .69), while the male respondents believed less strongly (M = 3.21, SD = .86).
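For a one-way ANOVA, η2 can be recovered from the F statistic and its degrees of freedom via η2 = (F · df_between)/(F · df_between + df_within). As an arithmetic check (not part of the original analysis), the gender ANOVAs reported in this section (F(1,30) = 4.96, 6.91, and 8.68) reproduce the stated percentages of variance explained:

```python
def eta_squared(f_stat: float, df_between: int, df_within: int) -> float:
    """Effect size eta-squared recovered from a one-way ANOVA F statistic."""
    return (f_stat * df_between) / (f_stat * df_between + df_within)

# The three gender ANOVAs all have df = (1, 30); the recovered values
# match the reported 14%, 19%, and 22% of variance explained.
for f in (4.96, 6.91, 8.68):
    print(f"F(1,30) = {f:.2f} -> eta^2 = {eta_squared(f, 1, 30):.2f}")
```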


The second dependent variable, “Using the robots to learn how to program was intimidating to me,” had an ANOVA of F(1,30) = 6.91, p = .01. The strength of the relationship between gender and the responses to this statement was moderate, with gender accounting for 19% of the variance of the dependent variable. Female respondents felt slightly intimidated about using the robots to learn how to program (M = 3.38, SD = .96), while male respondents did not (M = 2.47, SD = .96). The t-test resulted in t(30) = −2.63, p = .01.

The third dependent variable, “It was a pleasure to work with the robot to learn how to program,” had an ANOVA of F(1,30) = 8.68, p = .01. The strength of the relationship between gender and the responses to this statement was moderate, with gender accounting for 22% of the variance of the dependent variable. Female respondents generally agreed with this statement (M = 3.69, SD = .63), while male respondents did not (M = 2.84, SD = .90). The t-test resulted in t(30) = −2.95, p = .01.

Technical Self-perception

No significant relationships were found between any of the dependent variables and the respondents’ technical self-perception.

Most Interested in Development

Only one significant relationship exists between the respondents who were most interested in development and the dependent variables. For the dependent variable, “I am looking forward to additional courses that require programming,” the ANOVA was significant, F(1,30) = 7.59, p = .01. The strength of the relationship between these two variables, as assessed by η2, was moderate, with most interested accounting for 20% of the variance of the dependent variable. The independent samples t-test resulted in t(30) = −2.75, p = .01. This indicates that respondents who were most interested in development looked forward to additional courses that require programming (M = 4.0, SD = 1.23), while those who did not indicate that they were most interested in development did not (M = 2.78, SD = 1.09).

Least Interested in Development

Significant relationships exist between the respondents who were least interested in development and six of the dependent variables. For the first dependent variable, “The work that we did learning how to program with the robot held my attention,” the ANOVA was significant, F(1,30) = 5.17, p = .03. The strength of the relationship, as assessed by η2, was moderate, with least interested accounting for 15% of the variance of the dependent variable. Those who were least interested in development had a lower response to this statement (M = 3.25, SD = .86) than other respondents (M = 3.94, SD = .86), t(30) = 2.27, p = .03.

The ANOVA for “I enjoyed learning how to program with the robot” was F(1,30) = 5.00, p = .03. The strength of the relationship was weak, with least interested accounting for 14% of the variance of the dependent variable. Those least interested in development were less likely to enjoy their experience with the robot (M = 3.00, SD = .89) than those who were not (M = 3.75, SD = 1.00), t(30) = 2.24, p = .03.

The ANOVA for “Learning to program with the robot stimulated my curiosity” was F(1,30) = 5.76, p = .02. The strength of the relationship was moderate, with least interested accounting for 16% of the variance of the dependent variable. Those least interested in development were less likely to agree that the robot stimulated their curiosity (M = 3.31, SD = .79) than those who were not (M = 3.88, SD = .50), t(30) = 2.40, p = .02.
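Each significant ANOVA above is paired with an independent-samples t-test; with only two groups these are the same test, since F = t² (e.g., 2.27² ≈ 5.15, matching F = 5.17 within rounding of the reported statistics). A sketch with synthetic group data, assuming SciPy is available:

```python
import numpy as np
from scipy import stats

# Synthetic 5-point Likert responses for two groups (illustrative only):
# e.g., "least interested in development" vs. everyone else.
least_interested = np.array([3, 3, 4, 2, 3, 4, 3, 2])
others = np.array([4, 4, 5, 3, 4, 5, 4, 4])

f_stat, f_p = stats.f_oneway(least_interested, others)
t_stat, t_p = stats.ttest_ind(least_interested, others)  # pooled variance

# With two groups, the one-way ANOVA F equals the squared t statistic,
# and the two p-values coincide.
print(f"F = {f_stat:.3f}, t^2 = {t_stat**2:.3f}, p = {f_p:.3f}")
```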


The ANOVA for “The variety of uses of the robot helped keep my attention” was F(1,30) = 5.32, p = .03. The strength of the relationship was moderate, with least interested accounting for 15% of the variance of the dependent variable. Those least interested in development were neutral towards this statement (M = 3.19, SD = .91), while others agreed that the robots helped keep their attention (M = 3.94, SD = .93), t(30) = 2.31, p = .03.

The ANOVA for “I am looking forward to additional courses that require programming” was F(1,30) = 6.08, p = .02. The strength of the relationship was moderate, with those least interested in development accounting for 17% of the variance of the dependent variable. Respondents who were least interested in development were not looking forward to additional programming courses (M = 2.62, SD = .96), while others were (M = 3.62, SD = 1.31), t(30) = 2.47, p = .02.

The ANOVA for “I could understand quite a bit of the material involved with learning how to program with the robot” was F(1,30) = 5.55, p = .03. The strength of the relationship was moderate, with least interested accounting for 16% of the variance of the dependent variable. Those least interested in development did not agree as strongly with this statement (M = 3.56, SD = .81) as those who were not (M = 4.12, SD = .50), t(30) = 2.36, p = .03.

ARCS Components

To test whether the independent variables influenced any of the components of ARCS or the entire motivational scale, a one-way multivariate analysis of variance (MANOVA) was run individually for each of the independent variables (gender, technical self-perception, and most and least interested in development) against each of the ARCS components: Attention, Relevance, Confidence, and Satisfaction. Since the revised IMMS model has only one question related to Satisfaction, the MANOVA could not be performed for this component.

The MANOVA results (Table VI) indicate only one significant relationship (p < 0.05) between the independent variables (gender, technical self-perception, and interest in development) and the four components of the ARCS data, as well as with the entire set of ARCS components that comprise motivation: a significant relationship (p = 0.04) between Gender and the entire set of ARCS components. A traditional Bonferroni procedure was used to control Type I error for the follow-up ANOVAs. Each was tested at a .003 significance level (.05 divided by the number of ANOVAs, 16). When the significance values for each ANOVA were evaluated, all were above .003, indicating no significant relationship between Gender and the entire set of ARCS components.

6. ANALYSIS
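The Bonferroni control used in the Gender follow-up above is simple arithmetic: the family-wise .05 level is divided across the 16 follow-up ANOVAs, and each ANOVA p-value is compared against the resulting per-test threshold. A sketch of that procedure; the p-values below are placeholders, not the study's:

```python
family_alpha = 0.05
n_tests = 16
per_test_alpha = family_alpha / n_tests  # 0.003125, reported as .003

# Hypothetical follow-up ANOVA p-values (placeholders only).
p_values = [0.03, 0.01, 0.04, 0.20, 0.12]

# Only p-values below the corrected threshold remain significant.
significant = [p for p in p_values if p < per_test_alpha]
print(f"threshold = {per_test_alpha}, significant after correction: {significant}")
```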

The data is analyzed in the context of the research questions. The first section addresses how the use of robots throughout the course affected student motivation. The second section addresses the relationships between this motivation and gender, technical self-perception, and interest in development.

6.1. ARCS Components

This section summarizes the findings based on the data grouped in each of the ARCS components.

6.1.1. Attention. As a component of motivation, the data indicates that the robots had a positive influence on piquing and maintaining the attention of the participants. The majority of the values for the Attention component means ranged from 3.5 to 4.03

Table VI. MANOVA Results

                     Attention         Relevance        Confidence       Satisfaction   All
Gender
  Wilks λ            0.57              0.80             0.89             n/a            0.27
  F                  F(8,23) = 2.15    F(3,28) = 2.40   F(4,27) = 0.86   n/a            F(16,15) = 2.51
  p                  0.07              0.09             0.50             n/a            0.04
Technical
  Wilks λ            0.65              0.81             0.59             n/a            0.24
  F                  F(16,44) = 0.66   F(6,54) = 1.02   F(8,52) = 1.98   n/a            F(32,28) = 0.92
  p                  0.44              0.42             0.07             n/a            0.60
Most Interested
  Wilks λ            0.74              0.94             0.76             n/a            0.43
  F                  F(8,23) = 1.03    F(3,28) = 0.61   F(4,27) = 2.09   n/a            F(15,16) = 1.26
  p                  0.44              0.62             0.11             n/a            0.33
Least Interested
  Wilks λ            0.73              0.92             0.73             n/a            0.38
  F                  F(8,23) = 1.07    F(3,28) = 0.78   F(4,27) = 2.56   n/a            F(15,16) = 1.55
  p                  0.42              0.52             0.06             n/a            0.20

for this component. Participants looked forward to learning how to program with the robots at the beginning of the course (M = 4.14, SD = 0.77), which was significantly higher (p < .001) than the counterpart question asking whether they were looking forward to learning how to program (M = 3.5, SD = 0.96). Open-ended comments support this as well: there were 23 positive, 6 neutral, and 14 negative comments about learning to program, but 31 positive, 1 neutral, and 5 negative comments about using robots in the course. This indicates that the use of robots could be an effective tool when recruiting students into introductory programming courses.

6.1.2. Relevance. Overall, the data indicates that participants perceived that the robots were only marginally relevant to them. The response means for the three questions in this component ranged from 3.22 to 3.97, and the question with the lowest mean had more to do with instructional design (“There are sufficient stories, materials, and examples that showed me how the robot could be important to some people who are learning how to program.”). Two additional relevance questions added to the modified IMMS, “I see the usefulness of using robots to learn how to program” (M = 3.84, SD = 0.88) and “I could relate learning how to program with the robots to things I have seen, done, or thought about in my own life” (M = 3.44, SD = 0.91), show only a slightly positive result.
Supporting statements included “I completely understand why [the robots] were useful” in the course, “I need to know it for my future job,” and the “interactiveness of the robots fit in with the major,” while two other participants noted that they thought the addition of the robots was “unnecessary.” This latter statement supports a nearly significant finding (p = 0.06) that students agreed less strongly post-course with the statement “Learning how to program with the robot will be (was) useful for me to learn how to program effectively” (M = 3.47, SD = 0.84) than pre-course (M = 3.83, SD = 0.66).

6.1.3. Confidence. Overall, confidence responses indicated that participants had only slightly positive confidence levels. They agreed that they could understand quite a bit of the material involved with learning how to program the robot (M = 3.84, SD = 0.72). They did not think that the programming exercises with the robot were difficult (M = 2.66, SD = 0.87). They did not think that learning how to program with the robot was easy, and it was not always easy to remember the important points about how to use it (M = 2.31, SD = 0.93). They disagreed with the statement “Learning how to program with the robot was so abstract that it was hard to keep my attention” (M =
2.25, SD = 0.84), indicating that respondents may have viewed the robots as a concrete tool for programming.

The six additional questions added to gain a better understanding of confidence indicated that participants were only slightly confident that they would succeed in additional programming courses (M = 3.42, SD = 1.13). Participants slightly disagreed that they were intimidated by the thought of learning more programming (M = 2.78, SD = 1.24), and they were neutral about looking forward to additional courses that require programming (M = 3.12, SD = 1.24). Despite this neutrality, far more participants viewed themselves as technical by the end of the course (90%) than at the start (60%).

6.1.4. Satisfaction. Satisfaction levels were mostly neutral overall. After the completion of the course, participants were primarily neutral in their enjoyment of learning how to program with the robot (M = 3.33, SD = 1.08). However, the open-ended questions indicated that a majority of participants enjoyed using the robots. In fact, the negative comments reflected dissatisfaction only with specific issues that arose while using the robots with a Mac-based development environment (“I just wish we had robots that worked more efficiently.”), not with the use of robots in the course. One respondent stated that “the robots were a great way to learn basic programming! It was my first time and it was a lot of fun.” Others thought that the robots “hindered our learning experience.” Another stated that “Robots = Excellent; Scribbler robots + myro = terrible; the course could be greatly improved with a better robot/API combo” and “The robots are a good idea. . . the only issue is that the robots sensors and myro didn’t work effectively.”

6.2. Relationships

The MANOVA results reveal no significant relationship (p < .05) between the four ARCS components, individually or collectively, and the independent variables. However, the ANOVA tests found ten significant relationships between three of the independent variables and individual dependent variables.

Six relationships were found for respondents who indicated that they were least interested in development. Those who were least interested in development were less likely to have their attention held by the use of the robots, enjoyed working with the robots less, and were less likely to state that the robot stimulated their curiosity. They did not look forward to additional courses that require programming, and they were not as likely as others to state that they understood the material.

The ANOVA tests found relationships between gender and three dependent variables. Female respondents stated that they were slightly intimidated by using the robots to learn how to program, more so than the male respondents. Even though the female respondents were more intimidated, however, they also stated more strongly that using the robots helped them to learn how to program and that it was a pleasure to work with the robot.

The final relationship was between the respondents most interested in development and one dependent variable. Those most interested in development were looking forward to additional courses that require programming, while those who were not most interested in development were not.

7. DISCUSSION

This section discusses the significant findings of this study, along with other results that may be worthy of further investigation. It also frames these findings in the context of previously published research on motivation.

7.1. Significant Findings

Three significant results can be concluded from this study, related to participants who were least interested in development, gender, and technical issues.

7.1.1. Attention, Relevance, Confidence, and Satisfaction. Overall, the results of this study are similar to those found in studies by McWhorter and O’Connor [2009] as well as Fagin and Merkle [2003]. The quantitative data in both of these previous studies was inconclusive, but the qualitative data indicated that robots in the classroom motivate students by sparking their interest. It is clear that the idea of programming with the robots captured the attention of the participants much more than programming alone. The comment from one participant captures the results of the items on the survey: “Once I heard we were using robots, I thought the course may be a lot more interesting than my initial feelings towards programming.”

As discussed in Section 3, attention is a key point in many motivational theories [Csikszentmihalyi 1975; Keller 1987b; Rheinberg et al. 2001; Tyler 1971]. Tyler notes that the first step of instructional design should be to attract and excite the learner. Flow theory also has two primary components related to this, attention and curiosity. Svinicki’s [2010] first category of student motivation relates to the value of the task, and within the value category she includes the interest level of the task. The results of the Attention component of this study, including the additional questions related to attention, are consistent with the findings of an intrinsic motivation study of the Lego Mindstorms for intermediate CS students: Apiola et al. [2010] concluded that “... robots seemed to work as a powerful trigger of the initial curiosity and motivation of students” (p. 202). With attention being higher pre-course, robots may be an effective strategy for recruiting students.
Though the Attention component is a positive result of this study, the impact of robots on the three other areas of motivation (relevance, confidence, and satisfaction) was either only slightly positive or neutral. Relevance, or relatedness, is mentioned in several other motivational theories, even more so than attention, and has been the focus of much recent discussion and study in introductory programming courses [Bransford et al. 1999; Cooper and Cunningham 2010; Guzdial 2010; Stevenson and Wagner 2006; Wlodkowski and Ginsberg 2003]. Self-determination theory, expectancy/value theory, and academic goal orientation all contain some aspect of relatedness. In this study, however, participants could not see the relevance of learning programming using the robots (both pre-course and potentially even more so post-course) and could not relate this experience to their everyday lives.

Confidence levels of participants were largely neutral. Confidence is found in expectancy/value theory, theory of attribution, self-efficacy, and self-determination theory. Svinicki’s [2010] blended model devotes an entire category to the learner’s self-efficacy, due to its important role in motivation. It is a major part of motivation, yet the addition of robots in this course did not instill great amounts of confidence in the students.

Keller makes the case that satisfied learners are more likely to be motivated to continue learning, since they see value in what they are doing [1987]. Satisfaction, then, maps back to expectancy/value theory, self-determination theory, social cognitive theory, and academic goal orientation and achievement goal theory. However, results indicated that students were only slightly satisfied using the robots. This study indicates that, other than increasing attention initially, there is no clear positive effect on motivation when using robots in an introductory programming course designed for non-majors.
Overall, previous studies claiming significant gains in motivation from using robots in introductory courses should be viewed somewhat skeptically. The motivational effect appears to be smaller overall than many instructors hoped.

7.1.2. Least Interested in Development. The results indicate that the participants least interested in development were not as interested in the robots and did not feel that the robots kept their attention or stimulated their curiosity as strongly as others did. They were neutral about their enjoyment in using the robots. They were not looking forward to additional programming courses, even though they felt that they were able to understand some of the material using the robots. This indicates that the use of the IPRE robots mostly benefits students already interested in the subject and that robots are not successful in changing the minds of students who enter the course with a low level of interest.

7.1.3. Gender. Though gender did not affect overall motivation or the individual components of the ARCS model, female participants were more intimidated by the use of robots than males were, yet they expressed that they enjoyed using the IPRE robots more. They also expressed a feeling that the robots helped them learn how to program. Female participants were able to overcome their intimidation and may have felt some accomplishment in overcoming their fears. This may be worthy of further investigation, as using the robots may give a sense of empowerment to women and girls and may give them more confidence in learning additional programming.

7.1.4. Other Issues. While robots in general were viewed positively, issues with the IPRE robots in particular contributed to participants’ frustrations, which may reduce the robots’ ability to serve as a motivating tool. The technical issues participants noted include difficulties with sensors, low battery life, problems with the robot and the Tkinter package, problems with the IDLE editor on Mac OS, and issues related to Bluetooth connections.
Additionally, participants noted that the gamepads would not work with the lab’s Mac OS computers, and versioning issues with the robot software on Mac OS prevented participants from successfully installing and using the software on their own Mac laptops. This was a significant issue in the class, since all participants owned Mac OS laptops. To reduce costs for participants, the University provided the robots, and they were kept in a secure lab area that was open only certain hours during the week. Participants noted that they would have liked to take the robots home with them so they could spend more time completing their assignments. The open-ended questions seem to indicate that technical problems with the IPRE robots may have neutralized participant satisfaction levels.

The technical problems encountered in this course with this particular type of robot are not unique; similar problems are common with other robot platforms as well. Technical problems with the IPRE robots were mentioned in a study by Markham and King, who classified the robots as “experimental hardware” [2010, p. 207]. Apiola et al. also mention problems that students using the Lego Mindstorms encountered while working in an “unstable environment” [2010, p. 202].

Part of successfully motivating learners in flow theory is to design instruction so that students are intrinsically immersed in their learning task. However, the constant issues prevented students from maintaining this flow. Svinicki [2010] devotes an entire category of motivation to the learner’s beliefs about controlling the outcome of the task, and the technical problems may be related to the somewhat neutral satisfaction levels. With theories proposing that motivation increases when the learner has more control over the task, it is important to minimize these technical difficulties.
The robots may have had a greater motivational effect on participants, including those who were least interested in development, had these problems not arisen. For instructors considering the use of robots in their teaching, this means that potential technical problems should be considered seriously, since they can easily negate any positive motivational effect.

7.2. Threats to Validity

Though care was taken to minimize them, this study is subject to both internal and external threats to validity. Additional threats stem from the design of the study, specifically the lack of a control group, and from the possible impact of the technical problems in the class. Internal and external validity threats are mapped to the definitions provided by Stanley and Campbell [1963].

7.2.1. Internal and External Validity Threats. The internal validity threats include maturation, instrumentation, and experimental mortality. External validity threats include the reactive/interaction effect of testing. This was a 15-week course, and students were given a survey at the start of the course and one at the end. Students may have matured in various ways throughout the 15 weeks in ways unrelated to the robots. Additionally, though the structure and delivery of the course intentionally remained the same across the two semesters, the instructor was slightly more experienced during the second semester with the technical issues that could arise.

Part of the nature of the IMMS is that researchers must contextualize the instrument to target the particular intervention being evaluated for motivation. Although the wording of the items was carefully crafted, there is the possibility that items on the survey may not measure what they were designed to measure.

The retention rate for the course was 91%. Students who dropped the course may have been sufficiently discouraged or bored with the robot. Once these students dropped, they were no longer participants in this study and did not complete the final survey. Finally, the pre-course survey may have biased participants’ views of the course; it may have planted the idea that robots are interesting and even make students more curious.

7.2.2. Other Threats. The nature of the IMMS makes it a survey that is given at the end of the course.
Many questions are reflective in nature, such as “It was a pleasure to work with the robot to learn how to program.” For this reason, a pre-course survey was created with 12 questions to capture the state of motivation of students at the start of the course.

Due to the limited number of students enrolled in the course, there was no control group. A control group that used the same software environment, but no robots or other hardware, could have provided an additional point of reference.

Typical educational research variables may affect the outcomes as well, including student and instructor variables. These range from teacher experience to student attitudes and beliefs, and even environmental variables such as the condition of the classroom. In this instance, the researcher also served as the instructor. Care was taken not to let biases interfere with instruction, since the researcher was motivated to learn whether or not it was worthwhile to continue investing time and resources in sustaining the robots for use in the course. Additionally, the instructor asked other willing researchers to come to class to administer the survey to students in this course and then hold those surveys in a secure place until final grades were assigned. Participants were informed of this arrangement. This was done to help reduce any bias in students’ answers that might occur if they believed their answers would affect their course grade, and also to help maintain confidentiality.

However, the largest threat to this study is the nature and number of technical problems that students encountered throughout the course. In the best case, these
technical issues did not interfere with the study. However, the results may have been different had the technical problems been more minimal. 7.3. Future Work

In brief, the following studies may further enhance and explain the results of this study.

— To move toward more meaningful data, the study could be conducted using other tools to measure motivation, and the data from the studies could then be compared.
— To compare and contrast with other robots, a study could use the IMMS on three interventions in parallel: for example, one course using the IPRE robots, one using Lego Mindstorms, and one using the Finch.
— To compare and contrast with other intervention techniques that are included in instructional design to increase motivation, studies could use the IMMS on software and other hardware devices, including those that do not have the hardware difficulties that led to student frustrations here.

Each of these studies could aid in establishing baseline data for measuring various treatments that are used to increase student motivation.

8. CONCLUSION

This research study indicates that using IPRE robots in an introductory CS0 course can motivate non-computer science students to learn programming, specifically by capturing their attention. Those with a neutral or strong interest in development felt the most significant effects, while those least interested in development remained uninspired.

The designers and developers of the IPRE robot had an initial goal of addressing a wider audience and incorporating elements of creativity to motivate students who do not fit the traditional computer science mold [Balch et al. 2008; Kumar et al. 2008; Summet et al. 2009]. In this research setting, that goal was not met. Students who were least interested in development and more interested in design remained less interested in and less curious about the robots. Perhaps there is a limit to how attentive students can be when they simply are not interested in development, or perhaps another learning environment would be more motivating for these students. Regardless, each of the initial design criteria should be independently investigated to determine whether the design of the robots is meeting its goals.

Relevance is also an issue that should be explored further. Presently, robots are not found in people’s everyday lives, whereas tools like mobile phones and handheld devices are. When adding instructional materials in an effort to increase motivation, instructors who want highly relevant materials should carefully consider a tool that is more relevant to students.

Whether the use of the IPRE robots with non-computer science students is worth the technical frustrations they endure should also be investigated further. Using robots in a classroom is still a greater challenge than using software-only systems, partly because of the added hardware component and partly because many robot software systems (libraries and development environments) have not reached the same degree of maturity as other educational software. If these problems cannot be solved, there is a danger that the resulting frustrations will negate the potential motivational effect. Paving the path for a learning experience that is free from these distractions should be a primary goal for those developing robots for novice students, particularly for those
students who are already inclined by their natural interests to be minimally affected by the course.

REFERENCES

AAUW. 2000. Tech-Savvy: Educating Girls in the New Computer Age. American Association of University Women Education Foundation, New York.
Adams, D. B. 2010. Explore-create-present: A project series for CS. In Proceedings of the ASEE North Central Sectional Conference (ASEE’10).
Apiola, M., Lattu, M., and Pasanen, T. 2010. Creativity and intrinsic motivation in computer science education: Experimenting with robots. In Proceedings of the 15th Annual Conference on Innovation and Technology in Computer Science Education (ITiCSE’10). 199–203.
Balch, T., Summet, J., Blank, D., Kumar, D., Guzdial, M., O’Hara, K., Walker, D., Sweat, M., Gupta, G., Tansley, S., Jackson, J., Gupta, M., Muhammad, M. N., Prashad, S., Eilbert, N., and Gavin, A. 2008. Designing personal robots for education: Hardware, software, and curriculum. IEEE Pervas. Comput. 7, 2, 5–9.
Bandura, A. 1997. Self-Efficacy: The Exercise of Control. Freeman, New York.
Barker, L. J., McDowell, C., and Kalahar, K. 2009. Exploring factors that influence computer science introductory course students to persist in the major. In Proceedings of the 40th SIGCSE Technical Symposium on Computer Science Education (SIGCSE’09).
Barnes, T., Powell, E., Chaffin, A., and Lipford, H. 2008. Game2Learn: Improving the motivation of CS1 students. In Proceedings of Game Development in Computer Science Education (GDCSE’08).
Barr, J., Cooper, S., Goldweber, M., and Walker, H. 2010. What everyone needs to know about computation. In Proceedings of the SIGCSE Technical Symposium on Computer Science Education (SIGCSE’10).
Becker, B. 2001. Teaching CS1 with Karel the robot in Java. In Proceedings of the 32nd SIGCSE Technical Symposium on Computer Science Education (SIGCSE’01). 50–54.
Besana, G. and Dettori, L. 2004. Together is better: Strengthening the confidence of women in computer science via a learning community. In Proceedings of the Consortium for Computing Sciences in Colleges, Northeastern Conference (CSCC’04). 130–139.
Bierre, K., Ventura, P., Phelps, A., and Eggert, C. 2006. In Proceedings of the 37th SIGCSE Technical Symposium on Computer Science Education (SIGCSE’06). 354–358.
Biggs, J. B. 1987a. The Study Process Questionnaire (SPQ): Manual. Australian Council for Educational Research, Hawthorn, Vic.
Biggs, J. B. 1987b. The Learning Process Questionnaire (LPQ): Manual. Australian Council for Educational Research, Hawthorn, Vic.
Boyer, K. E., Phillips, R., Wallis, M. D., Vouk, M. A., and Lester, J. C. 2009. Investigating the role of student motivation in computer science education through one-on-one tutoring. Comput. Sci. Ed. 19, 2, 111–135.
Bransford, J., Brown, A., and Cocking, R., Eds. 1999. How People Learn: Brain, Mind, Experience, and School. National Academy Press, Washington, D.C.
Byrne, P. and Lyons, G. 2001. The effect of student attributes on success in programming. In Proceedings of the 6th Annual Conference on Innovation and Technology in Computer Science Education (ITiCSE’01). 49–52.
Chan, T. S. and Ahern, T. C. 1999. Targeting motivation – Adapting flow theory to instructional design. J. Ed. Comput. Res. 21, 2, 152–163.
Cliburn, D. 2006. A CS0 course for the liberal arts. In Proceedings of the 37th ACM Technical Symposium on Computer Science Education (SIGCSE’06). 77–81.
Cooper, S. and Cunningham, S. 2010. Teaching computer science in context. Inroads 1, 1, 5–8.
Covington, M. V. 2000. Goal theory, motivation and school achievement: An integrative review. Ann. Rev. Psych. 51, 171–200.
Creswell, J. W. 2008. Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research. Pearson Education, Upper Saddle River, NJ.
Csikszentmihalyi, M. 1975. Beyond Boredom and Anxiety. Jossey-Bass, San Francisco, CA.
Cutts, Q., Cutts, E., Draper, S., O’Donnell, P., and Saffrey, P. 2010. Manipulating mindset to positively influence introductory programming performance. In Proceedings of the SIGCSE Technical Symposium on Computer Science Education (SIGCSE’10). 431–435.


Deci, E. L. and Ryan, R. M. 1985. Intrinsic Motivation and Self-Determination in Human Behavior. Plenum, New York.
Deci, E. L., Schwartz, A., Sheinman, L., and Ryan, R. M. 1981. An instrument to assess adults’ orientations toward control versus autonomy with children: Reflections on intrinsic motivation and perceived competence. J. Ed. Psych. 73, 642–650.
Egbert, J. 2003. A study of flow theory in the foreign language classroom. Mod. Lang. J. 87, 4, 499–518.
Elliott, E. S. and Dweck, C. S. 1988. Goals: An approach to motivation and achievement. J. Person. Soc. Psych. 54, 5–12.
Entwistle, N. J. and Ramsden, P. 1983. Understanding Student Learning. Croom Helm, London.
Fagin, B. and Merkle, L. 2003. Measuring the effectiveness of robots in teaching computer science. SIGCSE Bull. 35, 1, 307–311.
Fishbein, M. and Ajzen, I. 1972. Beliefs, Attitudes, Intentions and Behavior: An Introduction to Theory and Research. Addison-Wesley, Reading, MA.
Friedman, B. A. and Mandel, R. G. 2010. The prediction of college student academic performance and retention: Application of expectancy and goal setting theories. J. Coll. Stud. Retent. 11, 2, 227–246.
Fry, H., Ketteridge, S., and Marshall, S. 2003. A Handbook for Teaching & Learning in Higher Education. RoutledgeFalmer, New York, NY.
Glanz, K., Rimer, B. K., and Lewis, F. M. 2002. Health Behavior and Health Education: Theory, Research and Practice. Wiley & Sons, New York.
Goto, S. T. and Martin, C. 2009. Psychology of success: Overcoming barriers to pursuing further education. J. Contin. High. Ed. 57, 1, 10–21.
Gregg, C. M. 2009. Self-Determination, Culture, and School Administration: A Phenomenological Study on Student Success. ProQuest LLC.
Guzdial, M. 2008. Teaching computing to everyone. Comm. ACM 52, 5, 31–33.
Guzdial, M. 2010. Does contextualized computing education help? Inroads 1, 4, 4–6.
Heider, F. 1958. The Psychology of Interpersonal Relations. Wiley, New York.
Hoegh, A. and Moskal, B. M. 2009. Examining science and engineering students’ attitudes toward computer science. In Proceedings of the 39th ASEE/IEEE Frontiers in Education Conference (FIE’09).
Huang, W., Huang, W., Diefes-Dux, H., and Imbrie, P. K. 2006. A preliminary validation of attention, relevance, confidence and satisfaction model-based instructional material motivational survey in a computer-based tutorial setting. Brit. J. Ed. Technol. 37, 16.
Huang, W.-H., Huang, W.-Y., and Tschopp, J. 2010a. Sustaining iterative game playing processes in DGBL: The relationship between motivational processing and outcome processing. Comput. Ed. 55, 2, 789–797.
Huang, Y., Backman, S. J., and Backman, K. F. 2010b. Student attitude toward virtual learning in Second Life: A flow theory approach. J. Teach. Travel & Tour. 10, 4, 312–334.
Hundley, J. and Pritt, W. 2009. Engaging students in software development course projects. In Proceedings of the Richard Tapia Celebration of Diversity in Computing Conference (TAPIA’09). 87–92.
Imberman, S. and Klibaner, R. 2005. A robotics lab for CS1. J. Comput. Sci. Coll. 21, 2, 131–137.
Institute for Personal Robots in Education. 2011. http://wiki.roboteducation.org.
iRobot Create Forum. 2011. http://createforums.irobot.com/irobotcreate/.
Jiau, H. C., Chen, J. C., and Ssu, K. 2009. Enhancing self-motivation in learning programming using game-based simulation and metrics. IEEE Trans. Ed. 52, 4, 555–562.
Keller, J. M. 1983. Motivational design of instruction. In Instructional Design Theories and Models: An Overview of Their Current Status, C. M. Reigeluth, Ed. Lawrence Erlbaum Associates, Hillsdale, NJ, 386–434.
Keller, J. M. 1987a. The systematic process of motivational design. Perform. Instruc. 26, 9/10, 1–8.
Keller, J. M. 1987b. IMMS: Instructional Materials Motivation Survey. Florida State University.
Keller, J. M. and Subhiyah, R. G. 1987a. Course Effort Survey. Florida State University.
Keller, J. M. and Subhiyah, R. G. 1987b. Course Interest Survey. Florida State University.
Kinnunen, P. and Malmi, L. 2008. CS minors in a CS1 course. In Proceedings of the Conference on International Computing Education Research (ICER’08). 79–90.
Kinnunen, P. and Simon, B. 2010. Experiencing programming assignments in CS1: The emotional toll. In Proceedings of the Conference on International Computing Education Research (ICER’10). 77–85.


Kinnunen, P., McCartney, R., Murphy, L., and Thomas, L. 2007. Through the eyes of instructors: A phenomenographic investigation of student success. In Proceedings of the Conference on International Computing Education Research (ICER’07). 61–72.
Kölling, M. and Rosenberg, J. 2001. Guidelines for teaching object orientation with Java. SIGCSE Bull. 33, 3, 33–36.
Kumar, D. et al. 2008. Engaging computing students with AI and robotics: Using AI to motivate greater participation in computer science. Tech. rep. SS-08-08, AAAI Press.
Ladson-Billings, G. 1995. Toward a theory of culturally relevant pedagogy. In Curriculum: Problems, Politics, and Possibilities, Beyer and Apple, Eds.
Landry, C. L. 2003. Self-efficacy, motivation, and outcome expectation correlates of college students’ intention certainty. Doctoral dissertation, Louisiana State University.
Latta, M. R. 1974. Relation of causal attribution and success to performance. ED102474, ERIC.
Lauwers, T., Nourbakhsh, I., and Hamner, E. 2009. CSbots: Design and deployment of a robot designed for the CS1 classroom. In Proceedings of the 40th SIGCSE Technical Symposium on Computer Science Education (SIGCSE’09). 428–432.
Layman, L., Williams, L., and Slaten, K. 2007. Note to self: Make assignments meaningful. In Proceedings of the 38th SIGCSE Technical Symposium on Computer Science Education (SIGCSE’07). 459–463.
LEGO. 2010. LEGO Mindstorms for Education. http://mindstorms.lego.com/.
Levesque-Bristol, C. and Stanek, L. 2009. Examining self-determination in a service learning course. Teach. Psych. 36, 4, 262–266.
Levin, T. and Long, R. 1981. Effective Instruction. Association for Supervision and Curriculum Development, Alexandria, VA.
Margolis, H. 2009. Student motivation: A problem solving focus. http://www.reading2008.com/MotivationProblemSolvingQuestionnaire-HowardMargolis-2009Jan1c.pdf.
Margolis, J. and Fisher, A. 2002. Unlocking the Clubhouse: Women in Computing. MIT Press, Cambridge, MA.
Markham, S. A. and King, K. N. 2010. Using personal robots in CS1: Experiences, outcomes, and attitudinal influences. In Proceedings of the 15th Annual Conference on Innovation and Technology in Computer Science Education (ITiCSE’10). 204–208.
Martin, A. J. 2003. The Student Motivation Scale: Further testing of an instrument that measures school students’ motivation. Austral. J. Ed. 47, 1, 88–106.
Martins, S. W., Mendes, A. J., and Figueiredo, A. D. 2010. Diversifying activities to improve student performance in programming courses. In Proceedings of CompSysTech (CompSysTech’10). 540–545.
Maslow, A. H. 1943. A theory of human motivation. Psych. Rev. 50, 370–396.
Mawhorter, P., Shaver, E., Koziol, Z., and Dodds, Z. 2009. A tale of two platforms: Low-cost robotics in the CS curriculum. J. Comput. Sci. Coll., 180–188.
McNally, M. F. 2006. Walking the grid: Robotics in CS 2. In Proceedings of the 8th Australian Conference on Computing Education (ACE’06), D. Tolhurst and S. Mann, Eds. Vol. 52, Australian Computer Society, 151–155.
McWhorter, W. and O’Connor, B. 2009. Do LEGO Mindstorms motivate students in CS1? In Proceedings of the 40th SIGCSE Technical Symposium on Computer Science Education (SIGCSE’09). 438–442.
Mills, N., Pajares, F., and Herron, C. 2007. Self-efficacy of college intermediate French students: Relation to achievement and motivation. Lang. Learn. 57, 3, 417–442.
Mosley, P. and Kline, R. 2006. Engaging students: A framework using LEGO robotics to teach problem solving. Inf. Technol. Learn. Perform. J. 24, 1, 39–45.
Mueller, R. J. 1984. Building an Instrument to Measure Study Behaviors and Attitudes: A Factor Analysis of 46 Items. Northern Illinois University, De Kalb, IL.
O’Kelly, J. and Gibson, J. P. 2006. RoboCode & problem-based learning: A non-prescriptive approach to teaching programming. In Proceedings of the 11th Annual ACM Conference on Innovation and Technology in Computer Science Education (ITiCSE’06). 217–221.
Pintrich, P. 1999. A Manual for the Use of the Motivated Strategies for Learning Questionnaire (MSLQ). National Center for Research to Improve Postsecondary Teaching and Learning, Ann Arbor, MI.
Pintrich, P. R. and De Groot, E. V. 1990. Motivational and self-regulated learning components of classroom academic performance. J. Ed. Psych. 82, 33–40.


Pittenger, A. and Doering, A. 2010. Influence of motivational design on completion rates in online self-study pharmacy-content courses. Dist. Ed. 31, 3, 275–293.
Ramalingam, V. and Wiedenbeck, S. 1998. Development and validation of scores on a computer programming self-efficacy scale and group analyses of novice programmer self-efficacy. J. Ed. Comput. Res. 19, 4, 365–379.
Rheinberg, F., Vollmeyer, R., and Burns, B. 2001. FAM: A questionnaire on motivation in learning and performance situations. Diagnostica 2, 57–66.
Rich, L., Perry, H., and Guzdial, M. 2004. A CS1 course designed to address interests of women. In Proceedings of the 35th SIGCSE Technical Symposium on Computer Science Education (SIGCSE’04). 190–195.
Rodgers, D. and Withrow-Thorton, B. 2005. The effect of instructional media on learner motivation. Int. J. Instruc. Media 21, 4, 333–342.
Roebken, H. 2007. The influence of goal orientation on student satisfaction, academic engagement and achievement. Electron. J. Res. Ed. Psychol. 5, 3, 679–704.
Rountree, N., Rountree, J., Robins, A., and Hannah, R. 2004. Interacting factors that predict success and failure in a CS1 course. SIGCSE Bull. 35, 4, 101–104.
Ryan, R. M. 1982. Control and information in the intrapersonal sphere: An extension of cognitive evaluation theory. J. Person. Soc. Psychol. 43, 450–461.
Ryan, R. M. and Deci, E. L. 2000. Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. Amer. Psychol. 55, 68–78.
Ryan, R. M., Frederick, C. M., Lepes, D., Rubio, N., and Sheldon, K. M. 1997. Intrinsic motivation and exercise adherence. Int. J. Sport Psychol. 28, 335–354.
Schonwetter, D. J. et al. 1994. Implications for higher education in the linkages of student differences and effective teaching. Annual Meeting of the American Educational Research Association.
Schuler, H., Thornton, G., Frintrup, A., and Mueller-Hanson, R. 2004. AMI Achievement Motivation Inventory: Technical and User’s Manual. Hogrefe & Huber.
Simon, S., Robins, A., Sutton, K., Baker, B., Box, I., de Raadt, M., Hamer, J., Hamilton, M., Lister, R., Petre, M., Tolhurst, D., and Tutty, J. 2006. Predictors of success in a first programming course. In Proceedings of the 8th Australian Conference on Computing Education (ACE’06), D. Tolhurst and S. Mann, Eds. Vol. 52, 181–188.
Skinner, B. F. 1954. The science of learning and the art of teaching. Harvard Ed. Rev. 24, 2, 86–97.
Small, R. V. 1997. Motivation in instructional design. ERIC Digest, ERIC Clearinghouse on Information and Technology, Syracuse, NY.
Campbell, D. T. and Stanley, J. C. 1963. Experimental and Quasi-Experimental Designs for Research. Houghton Mifflin Company.
Stevenson, D. E. and Wagner, P. J. 2006. Developing real-world programming assignments for CS1. In Proceedings of the 11th Annual Conference on Innovation and Technology in Computer Science Education (ITiCSE’06). 158–162.
Summet, J., Kumar, D., O’Hara, K., Walker, D., Ni, L. L., Blank, D. L., and Balch, T. 2009. Personalizing CS1 with robots. In Proceedings of the 40th SIGCSE Technical Symposium on Computer Science Education (SIGCSE’09). 433–437.
Svinicki, M. A. 2010. Guidebook on Conceptual Frameworks for Research in Engineering Education. www.ce.umn.edu/~Smith/docs/RREE-Research-FrameworksSvinicki.pdf.
Taylor, F. W. 1916. The Principles of Scientific Management. Bulletin of the Taylor Society.
Turner, E. A., Chandler, M., and Heffer, R. W. 2009. The influence of parenting styles, achievement motivation, and self-efficacy on academic performance in college students. J. Coll. Stud. Dev. 50, 3, 337–346.
Tyler, R. 1971. Theory and practice: Bridging the gap. Grade Teach. May/June, 46–65.
Uguroglu, M. and Walberg, H. J. 1979. Motivation and achievement: A quantitative synthesis. Amer. Ed. Res. J. 16, 375–389.
Vallerand, R. J. et al. 1992. The Academic Motivation Scale: A measure of intrinsic, extrinsic and amotivation in education. Ed. Psychol. Measure. 52, 1003–1017.
Vollmeyer, R. and Rheinberg, F. 2006. Motivational effects on self-regulated learning with different tasks. Ed. Psychol. Rev. 18, 239–253.
Wambach, C. A. 1993. Motivational themes and academic success of at-risk freshmen. J. Dev. Ed. 16, 3, 8–10, 12, 37.


Weiner, B. 1974. Achievement Motivation and Attribution Theory. General Learning Press, Morristown, NJ.
Weiss, R. and Overcast, I. 2008. Finding your bot-mate: Criteria for evaluating robot kits for use in undergraduate computer science education. J. Comput. Sci. Coll., 43–49.
Wiebe, E., Williams, L. A., Yang, K., and Miller, C. 2003. Computer Science Attitude Survey. Tech. rep. CSC TR-2003-01, North Carolina State University, Raleigh, NC.
Wiedenbeck, S. 2005. Factors affecting the success of non-majors in learning to program. In Proceedings of the Conference on International Computing Education Research (ICER’05). 13–24.
Wilson, B. C. 2006. Gender differences in types of assignments preferred: Implications for computer science instruction. J. Ed. Comput. Res. 34, 3, 245–255.
Wilson, B. C. and Schrock, S. 2001. Contributing to success in an introductory computer science course: A study of twelve factors. ACM SIGCSE Bull. 33, 1.
Wlodkowski, R. and Ginsberg, M. 2003. Diversity and Motivation: Culturally Responsive Teaching. Jossey-Bass.
Xu, D., Blank, D., and Kumar, D. 2008. Games, robots, and robot games: Complementary contexts for introductory computing education. In Proceedings of Game Development in Computer Science Education (GDCSE’08).
Yonghiu, C. 2010. Study of flow theory and experiential learning. In Proceedings of the 2nd International Conference on Multimedia and Information Technology (MMIT’10). 2, 334–337.
Zaini, Z. and Ahmad, W. 2010. A study on students’ motivation in learning mathematics using multimedia courseware. In Proceedings of the 2010 International Symposium in Information Technology (ITSim’10). 1–3.

Received January 2011; revised April 2011, August 2011; accepted November 2011

ACM Transactions on Computing Education, Vol. 12, No. 1, Article 4, Publication date: March 2012.