Session T3A

COURSE-BASED ASSESSMENT: ENGAGING FACULTY IN REFLECTIVE PRACTICE

Judith E. Sims-Knight 1, Emily Fowler 2, Nixon Pendergrass 3, and Richard L. Upchurch 4

Abstract--The College of Engineering (COE) at the University of Massachusetts Dartmouth has begun to implement course-based assessment as part of our curricular continuous improvement program. The targeted faculty are those who are developing innovative courses supported by the Foundation Coalition (FC), a collaborative project funded by the National Science Foundation. We began in Spring 1999 and have since tried five different strategies: taking faculty to a two-day assessment workshop, a general lecture on embedding an assessment-based continuous improvement loop into courses, a written set of guidelines, individual meetings with faculty, and an interactive half-day workshop. We discovered that faculty accept and implement assessment-based continuous improvement in their classes once they understand that (a) it is in their control, (b) it can be done in ways that are cost-effective in terms of time, and (c) it can reduce frustration in teaching because it makes visible aspects of courses that can be improved.

Index Terms--ABET a-k criteria, Continuous Improvement, Course-based assessment, Student outcomes assessment.

INTRODUCTION

Assessment of student outcomes and the incorporation of those results into a continuous improvement loop have been mandated by the Accreditation Board for Engineering and Technology (ABET) as a key criterion in its Engineering Criteria 2000. Furthermore, these outcomes are deliberately geared toward cognitive skills or processes: teaming and communication skills, using modern engineering tools to solve engineering problems, and recognition of the need for and ability to engage in life-long learning. Not only must these outcomes be assessed, they must form the basis of plans to improve programs. Consider, for example, a program that assesses students' design skills solely in a capstone course. The students' design projects are assessed either by an independent reading of the final reports or by grades in the course. The faculty must then use the outcome of that assessment to improve their program. The assessment by grade or by independent rubric gives only limited clues as to what knowledge or processes might be most effectively improved. Furthermore, it does not tell where in the program improvement is needed. Course-based process assessment embedded into a continuous improvement loop is needed to identify where and how to improve.

Adoption of the continuous improvement model based on class assessment permits the power of the quality paradigm to be applied to students' development within courses. The quality paradigm became popular in industry in the United States during the 1980s in response to the continued erosion of industrial viability. The six tenets of the quality paradigm are: emphasis on quality, process perspective, dependence on data, customer focus, defect elimination, and managing for quality [1]. In its adaptation to courses, collecting data from students permits faculty to move from reliance on anecdotal incidents and feedback from the extremes (either extremely happy or extremely unhappy students) to data from all students collected in a systematic fashion. Furthermore, by focusing on the processes by which students learn rather than on the products of learning (such as correct answers on exams), faculty have at their disposal, for the first time, the means to make learning visible, by which it becomes understandable, accessible, and eventually controllable. Continuous improvement models have been found to be effective in science [2], writing [3], and software engineering [4-8] courses.

Faculty who have not adopted a continuous improvement model typically hold one of three models of course assessment. One is the traditional instructional model, informed by traditional course assessments: graded student products (exams, reports, artifacts, etc.) and end-of-course faculty evaluations, often mandated as part of the personnel process. Both of these are essentially quantitative and judgmental. Instructors feel successful if enough of the student products merit A's and B's. Likewise, they feel successful if their average ratings on survey items such as "Please rate the overall effectiveness of the instructor" are sufficiently above average (or above their peers'). If students do poorly or the instructor gets low ratings, the instructor feels bad, but can only guess at ways to remedy the low scores. A second model is the experimental model, in which the goal of the assessment is to determine whether the students who have been through this course do better (often on a common exam) than students in classes with a different approach. Again the focus is on a quantitative judgment of superiority, and failure to attain

1 Judith E. Sims-Knight, Department of Psychology, University of Massachusetts Dartmouth, 285 Old Westport Road, No. Dartmouth, MA 02747-2300, [email protected].
2 Emily Fowler, Department of Institutional Research, University of Massachusetts Dartmouth, [email protected].
3 Nixon Pendergrass, Department of Electrical and Computer Engineering, University of Massachusetts Dartmouth, [email protected].
4 Richard L. Upchurch, Department of Computer and Information Science, University of Massachusetts Dartmouth, [email protected].


superiority leads to abandonment of the innovation rather than to improvement. A third model is the pre-post model, in which comparable assessments are given at the beginning and end of the course. This is often called a value-added model because its purpose is to show that students' performance at the end of the course was due to the course. As with the other two models, the purpose of the assessment is to provide a quantitative judgment of success or failure with no guide for change.

In all of these models, the goal of systematic assessment (of both student products and faculty evaluations) is to answer summative questions. The only formative feedback typically comes haphazardly from students who are either very impressed or very dissatisfied by the course or by some of its aspects. These comments sometimes lead to general class discussions about the problems. Such feedback can be very useful, but it is of course sporadic, unreliable, and not representative.

The goal of assessment in a continuous improvement model is quite different. It is to help faculty and students understand the processes by which the students learn. These processes are rarely visible to faculty because their expertise has made the processes so automatic that they are no longer aware of them [3]. These processes are invisible to the students because most college students have not developed sufficient metacognitive awareness to recognize cognitive processes in themselves [3].

To adopt an assessment-based continuous improvement model, instructors have to change in three substantial ways. First, they have to create assessments that reveal students' processes rather than focus on evaluating students' products. Second, they have to evaluate those assessments not in terms of success or failure (theirs or the students') but rather as identification of opportunities for improvement. Third, they have to adopt an incremental attitude; that is, they must accept that improvement takes place over a number of iterations of the process.

The task of institutionalizing the continuous improvement model, then, requires helping faculty to understand and to see the value of adopting this model. This paper documents our first attempts at such an institutionalization process in the College of Engineering (COE) at the University of Massachusetts Dartmouth. The process is supported by the Foundation Coalition (FC), a collaborative project funded by the National Science Foundation, which supports an assessment team composed of two faculty, an assessment specialist, and an administrator. We have been applying the continuous improvement model to the institutionalization process itself and are in our third iteration.
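To make the loop concrete, the following sketch (in Python) shows one minimal way a course-level continuous improvement record might be kept: collect systematic data from all students, evaluate it as opportunities rather than as success or failure, and record a planned change for the next iteration. The survey items, the rating scale, the 3.0 threshold, and all data are illustrative assumptions, not part of our workshop materials.

    # Minimal sketch of a course-based continuous improvement loop.
    # Survey items, threshold, and data are hypothetical.
    from statistics import mean

    # Step 1: systematic data from all students (1-5 agreement ratings).
    survey = {
        "I understood what each design step was asking me to do": [4, 3, 2, 3, 2, 4],
        "The computer tools helped me solve the assigned problems": [5, 4, 4, 5, 3, 4],
        "I could explain my team's solution to someone outside the team": [2, 3, 2, 2, 3, 2],
    }

    THRESHOLD = 3.0  # assumed cutoff for flagging an improvement opportunity

    # Step 2: treat low-rated items as opportunities, not as failures.
    opportunities = {item: mean(scores) for item, scores in survey.items()
                     if mean(scores) < THRESHOLD}

    # Step 3: record planned changes for the next offering of the course.
    for item, avg in opportunities.items():
        print(f"Revisit '{item}' (mean {avg:.1f}) in the next offering")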

PRELIMINARY FORAY (SPRING, 1999)

We began by inviting four faculty members to attend a two-day workshop with members of the assessment team. Although the workshop did not occur until April, one of the faculty members was able to use a project postmortem survey that semester. The survey provided him with useful information, and he was able to plan two substantive course improvements based on it. We met with the same faculty member again to help him develop an additional assessment for his summer course.

PILOT (FALL, 1999)

Our first attempt to help faculty develop assessment plans at the beginning of a course and then carry them through took place in the Summer and Fall 1999 semesters. We targeted four engineering faculty and two mathematicians who were developing innovative courses supported by the FC. We invited them to a general lecture on course-based assessment in August; three of the six attended. We then created a set of guidelines and a question-and-answer form and distributed them with a request to create a plan by a date in September. We received two plans, both from faculty who had been involved in prior outcome assessment efforts. We then held individual meetings with each of the six.

At the end of the semester, faculty were asked to complete a web-based question-and-answer form about their evaluation of the results of their assessments and their reflections on how to change their course and the assessments for the next time they teach the course. We received five of the six reports. Each of the five had tried at least one systematic feedback mechanism (two used exam questions, four used a survey, and one had asked each team to email suggestions to improve the course). Two had completed the feedback loop on at least one assessment and had decided on concrete plans to improve their courses. We asked the faculty to evaluate the assessment process and the help from the assessment team, and we used their feedback to design the subsequent workshop.

INSTITUTIONALIZATION (SPRING, 2000)

During the midsemester break we invited the entire College of Engineering faculty to a workshop. The 25 faculty who attended were promised a $200 stipend for attendance and creation of an assessment plan. During the workshop we gave two introductory mini-lectures, each followed by small-group work. The task for Part 1 was to take an extant assessment (e.g., exams, project reports, papers) and figure out how to embed it into a continuous improvement loop. The mini-lecture described in general why and how to do course-based assessment and presented one example of a complete continuous improvement loop. In Part 2 the task was to select either an ABET student learning outcome (e.g., writing, lifelong learning, teaming) or a particular pedagogical innovation (e.g., small groups, use of computer tools) and to think through an assessment tool. Examples of outcomes, pedagogical innovations, and tools were described. The faculty had access to a website created specifically for the workshop (http://www.fcae.umassd.edu/fcassessment/cbawksp/cbamenu.html) that included the slides from the mini-lectures, examples of assessment tools, reports of assessment-driven continuous improvement loops, and forms to guide them in developing their own plans. The 25 participants were asked to submit an assessment plan either during or after the workshop. To facilitate this process we


provided web-based forms that structured the plans by asking specific questions.

THE FACULTY PLANS

Of the 25 who participated in the workshop, 20 completed Part 1 of the assessment plan (incorporating an extant assessment into a continuous improvement loop) and 12 completed Part 2 (creating an assessment for an ABET a-k outcome or for a pedagogical innovation). We categorized the outcomes the participants chose according to the ABET a-k outcomes (for both parts):

Table 1
Number of Participants Who Submitted Plans as a Function of Outcome

No. Plans*  Student Learning Outcome
11          an ability to apply knowledge of mathematics, science, and engineering
1           an ability to design and conduct experiments as well as to analyze and interpret data
4           an ability to design a system, component, or process to meet desired needs
1           an ability to identify, formulate, and solve engineering problems
6           an ability to use the techniques, skills, and modern engineering tools necessary for engineering practice
3           a recognition of the need for and an ability to engage in lifelong learning
3           an ability to function on multidisciplinary teams
5           an ability to communicate effectively

* The number of plans categorized here is greater than 32 because two plans assessed two outcomes at once.

The 17 responses in the first four categories in Table 1 included all but 3 of the Part 1 plans (the 3 exceptions all chose to assess communication). Of the 17 plans, 13 used quizzes, exams, or standardized tests as the medium for assessment; one used class assignments, one used small working groups, and one used oral presentations. Seven faculty focused their assessment on concepts. They specified one or more of the following strategies for evaluating concepts: 3 planned to differentiate between conceptual understanding and the ability to apply mathematics, 4 planned to chart improvement across the semester (at designing, or at solving problems of a certain type), and 6 planned to identify concepts students failed to master. The other 8 faculty focused on the processes by which students solve problems. Six of these had students do the problems in a series of steps (see Appendix A for an example), and two plans had students first think through possible strategies for solutions (e.g., list the potential design strategies), then choose one solution and justify it.
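As an illustration of the third strategy, identifying concepts students failed to master, the sketch below tags each exam item with the concept it exercises and flags concepts with low success rates. The item names, concept tags, scores, and the 60% cutoff are hypothetical assumptions, not drawn from any participant's plan.

    # Hypothetical sketch: identify concepts students failed to master
    # by aggregating item-level exam results per concept.
    from collections import defaultdict

    item_concepts = {"q1": "limits", "q2": "limits", "q3": "chain rule", "q4": "related rates"}
    item_scores = {               # 1 = correct, 0 = incorrect, one entry per student
        "q1": [1, 1, 0, 1, 1],
        "q2": [1, 0, 1, 1, 0],
        "q3": [0, 0, 1, 0, 0],
        "q4": [0, 1, 0, 0, 1],
    }

    # Aggregate correctness by concept.
    totals = defaultdict(lambda: [0, 0])  # concept -> [correct, attempted]
    for item, scores in item_scores.items():
        concept = item_concepts[item]
        totals[concept][0] += sum(scores)
        totals[concept][1] += len(scores)

    # Concepts below an assumed 60% mastery rate become targets for course changes.
    for concept, (correct, attempted) in totals.items():
        rate = correct / attempted
        if rate < 0.60:
            print(f"{concept}: {rate:.0%} mastery -- candidate for re-teaching or new examples")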

Faculty found it difficult to anticipate how they might change their courses on the basis of their evaluation. In fact, the only faculty who answered that question were the 6 faculty who focused on which concepts students learned and 1 faculty member who focused on the types of justifications used in problem solving. The other 9 plans had no answer.

We analyzed the other 17 plans by outcome. Of the six plans that assessed the ability to use modern engineering tools, two planned to assess students' competence at using the tools to solve engineering problems and four planned to use surveys or interviews to assess students' reactions. The questions these faculty wanted to answer were:
• Were the students able to learn to use the tools, and what problems got in their way?
• Did the students appreciate how the tools could enhance their problem solving?
• Could the students use the tools effectively to solve problems?
An example of one participant's thoughtful analysis of these questions is: "Did they understand how to select an appropriate tool to use? Did they use the tool as it was designed and to the necessary level of complexity? Was time spent more in tool learning or use? Did all members of experimental team use and understand tools?"

The five faculty who chose to assess communication skills chose a variety of assignments: two chose project reports, two chose laboratory reports, and one chose a critique of a technical article. They chose varying frequencies of assessment: two chose to assess a single paper, two chose weekly reports, and one chose two reports. Four of the five included an oral presentation in their assessment, although generally they chose to do only one oral presentation. They all presented rubrics for assessment, and these varied in complexity. For the written papers the rubrics ranged from a single global grade to a six-component, weighted scale (see Appendix B).

Three participants chose to assess teaming. Two decided to use or adapt the team process checks of Arizona State University, which were provided on our website. The third did not specify.

Three faculty chose to assess life-long learning; they used very different strategies. One planned to use a variant of a metacognitive survey that was developed locally and provided on our website. Another planned a global evaluation of whether students will have succeeded at a semester-long project. The third planned to assess the extent to which students performed differently on an open-book compared to a closed-book portion of the same exams, to evaluate whether students were able to use resources (the text) effectively.
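As a sketch of how that open-book/closed-book comparison might be summarized, the following example computes each student's gain when the text is available. The scores and the 5-point cutoff are invented for illustration; this is not the instructor's actual analysis.

    # Hypothetical sketch: compare open-book and closed-book portions of the same exam.
    closed_book = [62, 70, 55, 80, 48, 66]   # % correct, closed-book portion
    open_book   = [75, 72, 78, 85, 70, 68]   # % correct, open-book portion (text allowed)

    # How much does each student gain when the text is available?
    gains = [o - c for o, c in zip(open_book, closed_book)]
    mean_gain = sum(gains) / len(gains)

    print(f"Mean gain with text available: {mean_gain:.1f} points")
    print(f"Students gaining little (<5 points): {sum(1 for g in gains if g < 5)} of {len(gains)}")
    # Small or negative gains might suggest students are not yet using the text
    # effectively as a resource -- an opportunity for explicit instruction on resource use.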

POSTMORTEM OF WORKSHOP

At the end of the workshop we held a group discussion about its usefulness. Many participants found the website very useful. They felt that it organized the whole workshop, and they liked not having to take notes. The only problem we encountered


with the website was that the slide presentation was part of the website and linked to other parts of it. Although this was good in that the faculty could easily access the relevant part of the website during the presentation, it gave them no practice navigating the website from its menu, and this led to confusion. There were several requests to add the following functions to the website: (a) the capacity to revise sample assessment tools on line, (b) the capacity to download files, and (c) the capacity for students to take surveys via the web.

Participants also liked the small-group work. They felt, however, that it would have been more productive had it been clear that they would not be working on specific courses in the groups, but on generic issues. They had no difficulty talking across courses, but they wasted time figuring out that that was what they should do. They also would have liked to have been advised to bring materials to the workshop.

The biggest criticism of the workshop was our asking them to do two different assessments, one based on their traditional graded assignments and one based on an ABET a-k outcome. They found it confusing, and they disliked changing their thought patterns. They thought it would be much more effective to work on different aspects of the same project in the two parts. They also felt that changing projects from Part 1 to Part 2 contributed to their having too little time.

The faculty found the forms we provided to help them create their plans somewhat confusing. Part of the confusion was clearly due to the two parts, which did not make sense to them. A second source of confusion was the lack of a common language. For example, the word "evaluation" traditionally suggests assigning grades or rating instructors, but we wanted them to evaluate students' cognitive processes. One participant suggested that our course-based assessment interventions needed to be coordinated with the rest of the campus.

CONCLUSIONS

Our initial attempts to help faculty move to an assessment-driven, course-based continuous improvement model convinced us that the central problem was that they all had preexisting assessment models from their past experience teaching and/or from reading or doing educational research. We had to show them that course-based assessment was not about evaluating the adequacy of either students or instructors, that any data they collected could be kept to themselves (although we asked them to write reports), and that they needed to think through how to make assessments that would inform decision making about their courses. Both the individual meetings and the second workshop were successful in getting faculty to think in terms of course-based assessment, but the workshop had the added advantages of systematic presentation, the website, and sharing with colleagues.

The Faculty Plans

For Part 1 outcomes faculty used basically one of two strategies, both of which were meaningful in the continuous improvement

model. They either chose to assess specific concepts or specific subprocesses, such as demonstrating understanding of what the problem asks for, selecting among alternative designs, or justifying solutions. Either strategy will work as a guide to course changes. Faculty also were able to avoid issues of evaluation of individual students, although the small-group work demonstrated that it was not easy for some. The improvement plans (which we asked for only in Part 1) were sketchy or nonexistent, but this seems appropriate. The process improvement model has developed because the secrets of students' processes are difficult to ferret out and will only become apparent through repetitions of the loop.

Far fewer faculty made Part 2 plans (12 of the 20). Those who did all succeeded at the basic goal: to create an assessment that they could use to evaluate whether students were meeting that goal. Part 2 plans were differentially effective in providing guides for improvement, but it would be unrealistic to expect faculty both to create an assessment strategy and to think through its consequences in one hour. The feedback reports we are requesting at the end of the semester should clarify for the faculty the kinds of assessment needed to foster improvement. As one faculty member in the pilot project pointed out in his feedback report, "Having to fill out the report forces one to give it serious thought."

The Workshop

The main workshop was successful in its primary goal of getting faculty started on thinking of assessment as a tool to help them improve their courses. The strengths of the workshop were:
• The website that organized it.
• The structure of the workshop into a short lecture and demonstration followed by work in small groups on common issues.
• The requirement that faculty make written assessment plans during or soon after the workshop and submit a feedback report after the semester is completed.
Clear improvements would be:
• Make the workshop a full day rather than a half-day.
• Have faculty think through in advance what they want to work on so that they can bring course materials.
• Provide web support for faculty revising or developing surveys, etc., during the workshop.
• Provide an overview of the ingredients of the plan as a checklist for the small groups to use.
• Change the parts. The two parts confused the faculty. We think that was because we couched Part 1 in terms of changing an assessment they already do into one that can be used to improve the course, which made it difficult to map onto the ABET a-k outcomes. It would have been better to have them choose two outcomes they cared about. For faculty who have not been thinking in terms of outcome assessment, it might be more useful to choose some concept or skill


that they know their students have difficulty with, so they can focus on figuring out where students go wrong.

The Next Step

The assessment plans serve a second purpose: identifying groups of faculty who are struggling with common assessment problems. We plan to form two groups, one on communication and one on using modern engineering tools, to share their experiences, to develop a coordinated effort among their members, and eventually to create a college-wide strategy for implementing outcome assessment in those areas.

References

[1] Fox, C., & Frakes, W., "The Quality Approach: Is It Delivering?" Communications of the ACM, 40, 1997, pp. 25-29.
[2] Miller-Whitehead, M., "Tennessee TCAP Science Scale Scores: Implications for Continuous Improvement and Educational Reform or Is It Possible to Beat the Odds?" Annual Meeting of the American Educational Research Association, Montreal, Quebec, Canada, April 19-23, 1999.
[3] Zhao, J. J., et al., "The Effect of Continuous Process Improvement on the Quality of Students' Business Writing," Delta Pi Epsilon Journal, 37(3), pp. 142-158.
[4] Upchurch, R., & Sims-Knight, J. E., "Reflective Essays in Software Engineering Education," Frontiers in Education Conference, San Juan, Puerto Rico, November 1999.
[5] Upchurch, R., & Sims-Knight, J. E., "The Acquisition of Expertise in Software Engineering Education," Frontiers in Education Conference, Tempe, AZ, November 4-7, 1998.
[6] Upchurch, R., & Sims-Knight, J. E., "Integrating Software Process in Computer Science Curriculum," Frontiers in Education Conference, Pittsburgh, PA, November 5-8, 1997.
[7] Upchurch, R., & Sims-Knight, J. E., "Designing Process-Based Software Curriculum," Proceedings of the Tenth Conference on Software Education and Training, Virginia Beach, VA, April 13-16, 1997, Los Alamitos: IEEE Computer Society Press, pp. 28-38.
[8] Upchurch, R., & Sims-Knight, J. E., "In Support of Student Process Improvement," Proceedings of CSEE&T '98, Atlanta, GA, February 22-25, 1998.

Appendix A
Sample Assessment Plan

This plan is for a calculus course that is part of the integrated freshman year for engineers. For the last two years this instructor has administered 18 common exam questions to all calculus sections as part of the assessment of the integrated freshman year. In his evaluation/feedback report he identified applied problems as the students' weakest area. In his assessment plan for the next semester (Spring 2000) he proposed the following:

• Design three special quizzes, each consisting of one applied/word/integrated math problem. These quizzes will be administered at the beginning, middle, and end of the semester.
• Each problem will be graded on the following criteria:
  - Have the students read and understood the problem? As part of the problem solution, the students will be required to write a paragraph stating their understanding of the problem, the given conditions of the problem, and the required output of the problem.
  - Have the students used the correct mathematical model? The students will be required to specify the mathematical model that they will use to solve the problem. They will also be required to give reasons why they selected that particular model.
  - Have the students solved the problem correctly, that is, have they done the correct calculus, algebra, arithmetic, and visualization? The students will be required to show all the steps (work) in their problem solution.
  - Have the students understood what their results represent? The students will be required to write a paragraph summarizing their results, stating conclusions, and interpreting the meaning of their computed results.
• After each quiz, the students will be asked to write a paragraph describing the difficulties they encountered while trying to solve this problem.
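A minimal sketch of how the results of the three quizzes might be tabulated to chart change across the semester is shown below. Only the four criteria come from the plan above; the pass-rate scoring scheme and all numbers are our own illustrative assumptions.

    # Hypothetical tabulation of per-criterion results across the three quizzes
    # (fraction of students meeting each criterion on the beginning/middle/end quiz).
    criteria = ["understood problem", "correct model", "correct solution", "interpreted results"]

    quiz_results = {
        "understood problem":  [0.55, 0.70, 0.85],
        "correct model":       [0.40, 0.55, 0.70],
        "correct solution":    [0.60, 0.65, 0.75],
        "interpreted results": [0.30, 0.45, 0.65],
    }

    for criterion in criteria:
        start, _, end = quiz_results[criterion]
        print(f"{criterion:20s}  start {start:.0%}  end {end:.0%}  change {end - start:+.0%}")
    # Criteria showing little change across the semester are the natural targets
    # for course changes in the next iteration of the loop.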

Appendix B
Rubric for Evaluation of Design Report

Title of the Project: Water Rocket Design          Team # ___

                                                   ORAL
Student's name: ______________________            (    )
Student's name: ______________________            (    )
Student's name: ______________________            (    )
Student's name: ______________________            (    )

Report Organization and Completeness     2    4    6    8   10   ____
Problem & Objective Definition           2    4    6    8   10   ____
Constraints & Assumptions                2    4    6    8   10   ____
Fin & Rocket Design                      2    4    6    8   10   ____
Originality of Ideas & Concepts          2    4    6    8   10   ____
Analysis of the Results                  2    4    6    8   10   ____
Launching the Rocket                     4    8   12   16   20   ____
Overall Quality of Report                4    8   12   16   20   ____
                                                        TOTAL    ____
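A small sketch of how this rubric might be scored and mined for improvement targets follows. The component maxima are taken from the rubric above, while the sample scores and the 60% cutoff are hypothetical.

    # Hypothetical scoring of the design-report rubric above.
    rubric_max = {
        "Report Organization and Completeness": 10,
        "Problem & Objective Definition": 10,
        "Constraints & Assumptions": 10,
        "Fin & Rocket Design": 10,
        "Originality of Ideas & Concepts": 10,
        "Analysis of the Results": 10,
        "Launching the Rocket": 20,
        "Overall Quality of Report": 20,
    }

    scores = {  # one team's hypothetical ratings
        "Report Organization and Completeness": 8,
        "Problem & Objective Definition": 6,
        "Constraints & Assumptions": 8,
        "Fin & Rocket Design": 10,
        "Originality of Ideas & Concepts": 6,
        "Analysis of the Results": 4,
        "Launching the Rocket": 16,
        "Overall Quality of Report": 16,
    }

    total = sum(scores.values())
    possible = sum(rubric_max.values())
    print(f"TOTAL: {total}/{possible}")

    # For continuous improvement, the low-scoring components matter more than the total.
    weak = [c for c, s in scores.items() if s / rubric_max[c] <= 0.6]
    print("Components to target in the next offering:", weak)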
