Participants’ Perceptions of Fair and Valid Assessment in Tertiary Music Education

Melissa Cain
Queensland Conservatorium Griffith University

Abstract

This chapter reports on the views of a selection of Bachelor of Music students and their teachers at the Queensland Conservatorium Griffith University (QCGU), providing important insights into how current assessment practices influence student learning in the Australian context, with particular reference to the Threshold Learning Outcomes for the Creative and Performing Arts. Themes addressed include the role of teacher feedback, experience with self- and peer assessment, the role of exemplars in standards-based assessment, balancing holistic and criteria-based assessment practices, subjectivity in assessing conceptualisation in creative works, and the role of tacit knowledge in students fully understanding and applying assessment criteria. Results of focus group sessions with students in the Performance, Musical Theatre and Composition streams of the Bachelor of Music degree reveal that participants are enthusiastic about ensuring that assessment practices and teacher feedback enhance their growth as musicians, ultimately enabling them to become self-regulated learners. Their teachers are equally concerned about providing their students with high professional standards as reference points for their musical growth, and about ensuring that summative assessments of musical performances are fair and valid.

Introduction

Most tertiary music students have strong views about why assessment is conducted and what constitutes fair and valid assessment practices. Regardless of the accuracy of their views, student perceptions of assessment are crucial, as they greatly impact their educational experiences and choices, and correlate closely with the approaches they take in tackling academic tasks (Brown & Hirschfeld, 2008; White, 2009; Marton & Säljö, 1997; Struyven, Dochy & Janssens, 2005). In fact, assessment constitutes “probably the single biggest influence on how students approach their learning” (Rust, O’Donovan, & Price, 2005, p. 231).


This paper reports on the results of focus group sessions with a selection of Bachelor of Music students and their teachers at the Queensland Conservatorium Griffith University (QCGU), providing important insights into how current assessment practices influence student learning at this institution. Interview sessions with students in the Performance, Musical Theatre, and Composition streams centred on their understanding and opinions of the Threshold Learning Outcomes for the Creative and Performing Arts (CAPA TLOs) and the extent to which they believe assessment at QCGU aligns with these outcomes and the Griffith University Graduate Attributes. Sub-themes include interpreting teacher feedback, experiences of self- and peer assessment, the place of exemplars in standards-based assessment, balancing holistic and criteria-based assessment practices, assessing conceptualisation in creative works, and the role of tacit knowledge in students fully understanding and applying assessment criteria. Teacher interviews focused on the relevance and application of the CAPA TLOs, recent changes to the assessment of creative works, and ensuring a balance between traditional holistic methods of evaluation and requirements to mark to predetermined criteria.

Review of Relevant Literature

In order to place results from the focus group sessions in context, the following review of the literature provides a general picture of how higher education students view traditional and innovative assessment practices. More specific information on assessment in the creative and performing arts is also addressed.

Student perceptions of assessment

Brown and Hirschfeld (2008) have explored the challenges associated with expressing standards and criteria in ways that students can fully comprehend, as well as the difficulties teachers have in understanding assessment from a student’s point of view.
Their research suggests that, in general, teachers believe that assessment improves both teaching and learning and that it makes students, teachers, and schools accountable for learning. Despite the fact that students view assessment criteria as useful in preparing their work, Brown and Hirschfeld’s (2008) research reveals that students generally view traditional assessment tasks as “arbitrary, irrelevant, inaccurate, and simply a necessary process for accruing marks” (p. 5). Moreover, most students perceive assessment’s primary purpose as summarising their achievement rather than improving learning or motivating further learning. As a result, many students will not consider course material important if it is not assessable. Fautley notes that a common misconception on the part of students is that assessment is often considered as


an activity separate from teaching, “so that there is a linear progression from teaching, to learning, finishing up with assessment” (2007, p. 1). Of some concern, Rawson (2000) states that students view traditional authoritarian models of assessment as something which is done to them, not with them. The literature reveals that in the eyes of students, traditional forms of assessment:

- are frequently unclear, such that students are often expected to guess the nature of assessment requirements (Hodgeman, 1997)
- regularly cover only a part of stated course material and are summative in nature (McLaughlin & Simpson, 2004)
- allow for teacher bias and subjectivity (Brown & Hirschfeld, 2008)
- are generally negative, having a detrimental effect on the learning process (Sambell et al., 1997).

The Griffith University Assessment Policy (2013) states that assessment methods should be selected on the basis of their impact on desired student learning behaviours and outcomes, their validity (whether an assessment actually assesses what it is supposed to assess) and their reliability (an assessment task’s capacity to produce the same result in relation to the same level of performance). Assessment practices within the University are based on the principles of criteria-based assessment, where “desired learning outcomes for a course of study are clearly specified; assessment tasks are designed to indicate progress towards the desired learning outcomes; and the assessment grade is a measure of the extent to which the learning outcomes have been achieved” (2013, p. 2).

Assessment in the Creative and Performing Arts

As with any other discipline, assessment procedures in the creative and performing arts must be reliable and valid; however, due to “the unique and highly individual nature of musical endeavour and achievement” (Cox, 2010, p. 3), the assessment of music students should ideally be individualised and flexible. Cox therefore suggests that assessors need to display a combination of “acute artistic sensitivity, consistency of judgement, and awareness of benchmark levels of student achievement in Higher Education” (2010, p. 3). An important issue specific to assessment in the creative and performing arts concerns the methods teachers use to assess a student’s creativity and to decide whether works are the outcome of genuine creative imagination. As creativity is generally understood to be a


process—the end product does not typically figure into an analysis of creative ability—challenges occur in assessing such abstract entities. Cowdroy and De Graaff (2005) therefore state that “creative ability…is therefore at best inferred, but not assessed” (p. 511). As a way forward, they suggest that despite not being able to directly assess conceptualisation or schematisation, teachers can assess “a student’s understanding of both the concept and its schemata and their place in the theory, philosophy and literature” (p. 515). This approach involves a departure from teacher-derived criteria for examination of a work, to “student-derived criteria for assessment of the student’s understanding of his or her own concept in terms of the philosophical and theoretical frameworks of the relevant field of creativity” (p. 515). Other issues specific to assessment in the creative and performing arts concern the role and place of teacher subjectivity and the balancing of holistic and criteria-based evaluation in the assessment of student performances. Hay and MacDonald (2008) note that students often report concerns about subjectivity in the evaluation of their performances. They suggest that these concerns may indeed be valid, as music teachers—in their capacity as professional musicians—tend to internalise performance criteria and standards over time. Thus, their judgements are often based on a loose reliance on the institutional criteria, which then serve as contexts for the teachers’ own values, beliefs and standards for performance. Such a divergence from prescribed criteria and standards can also occur when teachers take into account irrelevant factors such as student behaviour, attitudes, and motivation, as well as their memory of past performances. With these factors at play, the possibility of a certain degree of invalidity and unreliability remains (Hay & MacDonald, 2008).
In the creative arts, teachers’ professional judgements often include holistic appraisal in addition to criteria-based evaluations of performances and creative works. Sadler (2009) notes, however, that one of the challenges of using explicit grading models is that “experienced assessors often become aware of discrepancies between their global and their analytic appraisals” (p. 7). McPherson and Thompson (1998) and Stanley, Brooker and Gilbert (2002) therefore call for an approach to performance assessment implementing complementary aspects of holistic and criteria-based methods “to ensure reliable and valid evaluations of student achievement” (Blom & Poole, 2004, p. 115). In suggesting a way forward, Sadler—as detailed in chapter two of this book—encourages students to develop facility in making “holistic judgments for which criteria emerge during the process of


appraisal” (2009, p. 18). Indeed, equipping students with “holistic evaluative insights and skills” (2009, p. 21) is essential and reduces the need for teacher-derived feedback. Sadler (2013) suggests that there are strong grounds for being wary of the exclusive use of criteria-based methods and the negative impact they have on the development of students’ skills for ascertaining quality in a global manner, reminding us that “much more than we give credit for, students can recognize, or learn to recognize, both big picture quality and individual features that contribute to or detract from it” (p. 63). Students need to understand what constitutes quality at both global and micro levels and must develop a vocabulary for use in their evaluations. If students can be inducted into the practice of holistic assessment, they will become better able to monitor the quality of their own work while it is under production.

Formative assessment and teacher feedback

Perhaps the most important aspect of assessment in the creative and performing arts concerns students gaining an understanding of how formative assessment can contribute to advancing their learning, and how best to utilise teacher feedback as a tool for improving their work. Bryce (1997) describes learning as “an ongoing process where students are continually receiving information, interpreting it and connecting it to what they already know” (p. 25). Lebler (2008) recommends that ideally, assessment processes should reflect the kinds of evaluations students should be able to employ after graduation, and that students need to develop “the inclination and ability to be their own first marker if they are to continue to learn independently and effectively” (2008, p. 4). The ultimate aim of assessment and feedback, therefore, should be to empower students to become self-regulated learners (Boud, 2000). The important work of D.
Royce Sadler in this area has assisted educators to better understand students’ perceptions of formative assessment tasks and the challenges students face in effectively utilising the feedback they receive. For students to successfully apply and adapt knowledge, Sadler stresses they must be able to “interact with information and skills”, “incorporate them into their existing knowledge bases and structures”, and “‘construct’ or build knowledge that will serve them as adaptable personal capital” (2013, p. 56). He posits that traditional, authoritarian methods of relaying feedback—that is, teachers imparting knowledge for students to learn—are ineffectual for complex or divergent learning tasks such as musical performances or other creative works, which require high levels of aesthetic appreciation.

When teachers transmit feedback, there is often the incorrect assumption that messages are clearly received by students and easily decoded and translated into action, though there is considerable evidence that feedback is “invariably complex and difficult to decipher, and that students require opportunities to construct actively an understanding of them before they can be used to regulate performance” (Nicol & Macfarlane-Dick, 2006, p. 201). In fact, Sadler states that although feedback should play a critical role in improvement, in practice it often “seems to have no or minimal effect” (2013, p. 55). Exposing students to the full set of criteria and the rules for using them allows them to build up a body of evaluative knowledge in order to determine what constitutes quality and to more efficiently interpret teacher feedback. Sadler (2013, 2009) has identified the critical role that tacit knowledge plays in students fully understanding and applying assessment criteria and feedback. For students to have a clear understanding of the acceptable level of quality on a task, they must be able to assess their work in the same manner their teachers do. This requires familiarity not only with the set criteria, but also with the more salient criteria which influence a teacher’s qualitative and holistic assessment of a work (Sadler, 2010). Such knowledge and skills cannot be imparted through explicit teaching alone, and thus students must experience and be inducted into non-overt methods of making judgements about quality. Rust et al. stress the difficulty of developing such tacit knowledge, as this involves years of “observation, imitation, dialogue and practice” (2003, p. 152).
Sadler identifies three basic requirements for students to successfully self-regulate their work:

[T]hey acquire a concept of high quality and can recognize it when they see it; they can with considerable accuracy judge the quality of their works-in-progress and connect this overall appraisal with particular weaknesses and strengths; and they can choose from their own inventories of potential moves those that merit further exploration for improving quality. (2013, p. 54)

This involves being able to attend to two facets of their work: a holistic evaluation of its quality, and smaller separate aspects of quality as gained through both explicit and tacit knowledge of assessment criteria. Ultimately, suggests Sadler, the crucial test of a student’s understanding of abstract criteria is not whether they can define the criteria formally, but whether they can use them to explain judgements about their own work and to make assessments about quality in the work of others. For feedback to be effective, students need a sound working knowledge of three


concepts: task compliance, quality and criteria. These assessment concepts must be understood “not as abstractions but as core concepts that are internalised, operationalised and applied to concrete productions” (2010, p. 548).

Self- and peer assessment

In assisting students to become masters of self-evaluation, the challenge for music educators, suggests Bryce (1997), is to move away from the assessment of outcomes to the assessment of process through the development of authentic or performance-based assessment strategies which replicate real-world situations. Providing students with opportunities to engage in non-traditional forms of assessment, adequately supported by training in such methods, and to develop an understanding of academic, professional and industry standards can deepen their understanding of what constitutes quality outcomes in a specified area (Spiller, 2011). Alternative methods such as portfolios, and self- and peer assessment strategies, have been found to receive more positive support from students than traditional assessment tasks, and are viewed by teachers as encouraging deep-level learning and critical thinking (Sambell et al., 1997). Daniel (2004) notes, however, that despite considerable attention given recently to self- and peer assessment at the tertiary level, there exists the potential for resistance to such methods, as they challenge the authority of teachers as experts in this area. White (2009) suggests that students have strong views about peer assessment methods, such as

awareness of their own deficiencies in subject areas; not being sure of their own objectivity; fairness of the peer assessment process; the influence of such factors as friendship and hostility on their assessment; and the belief that it is not their job but the teachers’ to assess. (p. 397)

Wen and Tsai’s (2006) investigation of university students’ attitudes towards peer assessment also revealed that students generally held positive attitudes towards this type of method, but were cautious of being criticized by their peers and expressed a lack of confidence in their ability to assess their classmates. Stefani (1998), McMahon (1999), Butcher and Stefani (1995), Rainsbury, Hodges, Sutherland and Barrow (1998), Somervell (1993) and Boud (1995) all focus on the impact of teachers’ power in grading student work. As this power is usually considered absolute or sovereign by students, McMahon (1999) argues that students will in fact “seek to please [teachers] rather than demonstrate their learning in assessment” (p. 653). If the student’s self-assessment is accepted, under the notion of sovereign power it will then be considered subservient to the decisions of the teacher, as student self-assessment is primarily judged in terms of the teacher’s assessment. Therefore, summative self-assessment cannot be claimed to empower students if teachers intervene when they believe the marking process or outcomes to be unsatisfactory.

Race (1998) and Bostock (2000) list many important advantages of the peer assessment process, including that it

gives students a sense of belonging to the assessment process; encourages a sense of ownership of the process; makes assessment a part of the learning process; encourages students to analyze each other’s work; helps students recognize assessment criteria; and develops a wide range of transferable skills that can be later transferred to future employment. (Majdoddin, 2010, p. 401)

Spiller (2011) lists features of poor practice, such as when “it is a one-off event without preparation”, “staff retain control of all aspects (sometimes despite appearances otherwise)” and “it is tacked on to an existing subject in isolation from other strategies” (p. 9). In order to successfully implement self-assessment, Spiller (2011) suggests that intensive conversations with students need to occur in order for the teacher to explain the assumptions and principles that underlie this alternative assessment practice. Students need to be coached using examples and models and should be involved in establishing their own criteria. Majdoddin (2010) recommends that peer and self-assessment be considered together as they share many advantages, for when students evaluate their peers’ work they have the opportunity to scrutinise their own as well. However, the literature suggests that self-assessment enhances learning most effectively when it does not involve grading. In Australia, Vu and Dall’Alba (2007) investigated university students’ experiences of peer assessment.
While some concerns were noted about the possibility of unfair and inaccurate marking, Vu and Dall’Alba found that “peer assessment had a positive effect on students’ learning experiences with most students acknowledging learning from both the process and from their peers” (p. 399). Several essential conditions for the successful implementation of peer assessment were noted:

providing adequate and appropriate preparation for the successful implementation; specifying the objectives of the course as well as the purpose of peer assessment; determining the degree of teachers’ assistance given during the peer assessment process; and teachers’ handling of fruitful discussion periods following peer assessment. (p. 399)

The literature suggests that students become better at peer assessment over time, as they gain confidence and become more competent at it. As a precursor to introducing peer assessment, teachers should ideally spend time establishing an environment of trust in the classroom. As with self-assessment, Spiller (2011) believes that introducing grading and marks to peer assessment complicates the process, as students tend to become preoccupied with grades at the expense of the process itself.

Focus Group Sessions

Themes from student focus group sessions

Focus group sessions with Bachelor of Music students from QCGU were conducted as part of data collection for the Assessment in Music project (as detailed in chapter four of this book). The students interviewed were in their second and third years of the Performance, Musical Theatre and Composition streams of the degree. Although conversations centred on the students’ understanding of the Threshold Learning Outcomes for the Creative and Performing Arts (CAPA TLOs), many of the important themes in the literature were addressed when discussing assessment processes in the Bachelor of Music program, and the extent to which such practices aligned with the TLOs. The CAPA TLOs are as follows:

Upon completion of a bachelor degree in Creative and Performing Arts, graduates will be able to

1. Demonstrate skills and knowledge of the practices, languages, forms, materials, technologies and techniques in the Creative and Performing Arts discipline.
2. Develop, research and evaluate ideas, concepts and processes through creative, critical and reflective thinking and practice.
3. Apply relevant skills and knowledge to produce and realise works, artefacts and forms of creative expression.
4. Interpret, communicate and present ideas, problems and arguments in modes suited to a range of audiences.
5. Work independently and collaboratively in the Creative and Performing Arts Discipline in response to project demands.


6. Recognise and reflect on social, cultural and ethical issues, and apply local and international perspectives to practice in the Creative and Performing Arts Discipline. (Holmes & Fountain, 2010)

The sessions were particularly fruitful, as the students were keen to share their experiences and to have a vehicle for expressing their views on current assessment practices. Participants were enthusiastic about ensuring that assessment practices and teacher feedback enhanced their growth as musicians, ultimately enabling them to become self-regulated learners and to assess their performances against an industry standard. Student feedback is highly valued at QCGU. At the end of each semester, each course is evaluated through a centrally administered survey and all students are invited to contribute their feedback on teaching and course content. Course convenors then take this feedback into account when formulating the next iteration of the course, and changes made in response to student feedback are noted on the Electronic Course Profile (ECP). Coding of student focus group transcripts yielded six key issues: 1) perceptions of fair and valid assessment; 2) perceptions of ineffective assessment; 3) experiences of self- and peer assessment; 4) gaining an understanding of academic, professional, and industry standards; 5) subjectivity in assessment practices; and 6) the role of holistic evaluations of quality.

1. Perceptions of fair and valid assessment

There was a close correlation between the expectations for fair and valid assessment in the Griffith Assessment Policy (2013) and the courses viewed by the students as including examples of effective assessment practices. Such courses addressed micro-level determinants through multiple specific marking criteria, and allowed for progressive and holistic evaluations of quality.
Students were able to quickly identify examples of assessment tasks which they felt were clearly expressed, which had transparent criteria, and which closely correlated with what was taught in the course and the stated learning outcomes. Such assessment tasks were considered as assisting students to grow as musicians towards specified standards.

Peter: That class is planned down to the minute. [Name of lecturer] knows exactly what she wants to achieve in every class. It's a lot of work, but it's definitely worthwhile, because you do get a lot of good skills out of it. It's all in the course profile. You know 110 per cent what you're expected to do.


James: Transparent is definitely the word for it. There's no ambiguity as to what you're expected to do. [Name of lecturer] shows [the rubric] to us and explains it and then once she's marked it, we can look at it. So we can [gain feedback] even for each individual task.

2. Perceptions of ineffective assessment

Clearly expressed criteria and teacher feedback were of critical importance to students, both for successfully refining their work (as performers and creators) and for knowing how their work related to academic and industry standards. Conversely, minimal, absent or ambiguous feedback was seen as detrimental to the students' learning.

Terry: The criteria in our course profile for [name of subject] is very vague. It's just so subjective on what the teachers themselves want out of the course. [Students in another stream] have a pretty good idea of what they need to be doing, and the skills they need to learn. We don't have that at all, so we're all sort of going blindly through our course.

Peter: It's not outlined clearly; it's just what we've been able to garner. The feedback we normally get is really minimal—an arbitrary number out of 100—we don't see a breakdown of that. And we have no idea what they liked about our previous [task] as in how to improve on it.

These perceptions correlate with reports in the literature which suggest that students are most frustrated when teachers do not return work in a timely manner or do not engage in discussion about merits and improvement strategies (Brown & Hirschfeld, 2008; Hodgeman, 1997). Thus, the students interviewed considered passing such courses a necessary requirement for accruing marks rather than an opportunity to gain progressive knowledge and skills, as reported by Brown and Hirschfeld (2008).

3. Experiences of progressive, self- and peer assessment

Alternatives to traditional authoritarian forms of assessment were valued by the students (Sambell et al., 1997), who expressed a desire for more experience with such tasks. Progressive assessment journals were one example praised as providing some context to the overall learning process, and were viewed as a fairer method of being graded than one final performance alone.

Marie: I like the fact that we are getting marked on our progress now, and that's 25 per cent of our mark, which I think is extremely reasonable. The performance itself is just a really small snapshot of your whole degree. So you might do something terribly wrong on the day. I


guess that’s the nature of performance, but there’s months of work that goes into it, which is now being assessed for us, but never used to be.

Ashley: Overall [progressive assessment] can be good, when it's working really well. If you're making progress—it's a fairer system. In the sense that it's not all of your eggs in the one basket, where you get to the end of the course and you perform. It trains a level of constant application, which is what's required for the industry.

Students had experienced a peer assessment exercise whose process is being continually refined. They were able to identify teething problems and also to see beyond the initial challenges to the prospective advantages of such strategies. Performance students valued peer feedback in particular. Student comments correlate with the literature suggesting that peer assessment enhances learning most effectively when it does not involve grading.

Ashley: I think peer assessment could be successful if it was either no marks or a very small weighting or with potentially a pass/fail outcome. Marking is hard, full stop. So you can't throw students into a marking situation where they aren't given the preparation on how to mark. Try it with no marks attached. So people are just committed to the learning outcomes, and learning something from it.

Fears about a lack of experience in peer assessment techniques and consequent unfair assessment outcomes were expressed by all students. Acknowledging their own deficiencies in this area and the impact of friendship and subjectivity, the students' comments support the research of Daniel (2004) and White (2009).

Peter: It depends how many peers are assessing you. With two people—that's not enough. Maybe if it was like the whole class, you get an average. That way if you have someone who doesn't like you and someone who likes you—similarly if you're a nice person, you don't want to fail somebody.
Terry: Great care has to be taken with how you do it. You need to at least run through with the students how to mark something. Because without that it's just a pretty useless guessing activity for most people. So if it's done well, yes, I think it's important.

Ashley: I would say peer assessment should be done with such great caution, such a great deal of preparation and thought. It has to be about what do we actually want out of this peer assessment? It needs to be well beyond just the task that is being assessed. It needs to be—which should have just as much importance—how to think critically, how to assess.


Students were particularly vocal about the advantages of self-assessment and how they wanted to develop self-evaluation skills in order to advance their learning. As Bryce (1997) notes, students view learning as an ongoing process in which they cyclically interpret the information they receive and use it to improve their performance. By developing skills in self-assessment, the students recognised the importance of becoming their own "first marker" (Lebler, 2008, p. 4) so as to be in charge of assessing their work in the future as professionals.

Peter: I'm encouraged to do it a lot by my major study teacher. She's a big fan of recordings—recording lessons, recording just when you're practising. So you have that immediate feedback and you can assess yourself while doing that. I found that's been incredibly helpful.

James: In terms of actually sitting down and assessing yourself, you cannot do that simultaneously [while] performing. It just doesn't work. So recordings really, really help, because you can look back later if you want. Your perception [during the performance] is always different to the [recorded] result.

As Sadler (2013) suggests, there are three basic requirements for students to successfully self-regulate their work: an understanding and recognition of high quality work, the ability to judge with accuracy the quality of their own work and to interpret external feedback, and the ability to choose from a range of strategies for closing the gap between the two. For this to occur, students need not only to develop efficient self-assessment techniques but also to interlace these with the teacher evaluations they receive.

4. Gaining an understanding of academic, professional, and industry standards

In interviews the students demonstrated a general understanding of the CAPA TLOs as basic pass level requirements, but as they wished to be working at the top of their field they emphasised that they required specific information on what constitutes a high standard. They were cognisant (and somewhat concerned) that academic and industry standards were different and did not generally correlate.

Ashley: I guess for me, they show a minimum standard of objectives or standard of competency that is desired within the performance discipline, upon the completion of a degree. I honestly don't think these should set the standard. What should set the standard behind these is an ability to find the very highest levels of each of these TLOs. Because you can meet these and still have a very bad course.

Marie: [the CAPA TLOs] look like the minimal basic skills we need to be able to graduate from a degree. Our industry is more focused on the excelling of skills…we should all have these, but in terms of actually succeeding at our career, it's more about going above and beyond this.

The student participants did not look to the CAPA TLOs, program learning outcomes, or specific grading criteria to establish academic and industry standards. It was their major study teachers who provided this frame of reference, occasionally supplemented by comparisons with the standard of other students in their degree.

Peter: Our major study teachers know [the industry standard] through their own experience. I think that's more helpful than a base graduate outcome.

Marie: I think in terms of performance, my major study teacher gives me a very good indication of where I should be.

Sadler (2005) has called for a major shift towards standards-referenced grading as a more tangible way of establishing quality. Exemplars were viewed by students as a good option for clarifying a standard, although quantifying levels of performance was deemed challenging considering the many variations in performance specifics and composition styles. This correlates with Sadler's (2005) observation that students require, in addition to criteria, exemplars of varying standards to fully aid their appreciation of quality.

Peter: [Name of lecturer] brought in one from last year and showed us, this is the standard you need to meet. So it's really helpful if there's an essay or something you have to follow.

Marie: Yeah, for performances, but it's quite hard to actually get an example of [specific standards]. Everyone is so different in how they perform.

5. Subjectivity in assessment practices

Hay and MacDonald (2008) note that students report concerns about subjectivity in the evaluation of creative works, as internalised, personalised criteria—which are not made known to students—often serve as alternatives to specified criteria, leading to confusion over how the creative process is assessed.
The students participating in focus group sessions were indirectly aware of such tacit knowledge in grading and expressed concerns similar to those noted by Hay and MacDonald (2008).

Terry: It's difficult to start to quantify something that's creative—because it can be very subjective and since it's an art, it's not a science.

James: The creative element—it's 80 per cent of the mark. So it's like what they're trying to do there is mark how I was thinking when I wrote it, and that's really difficult to do.

In suggesting that only the creator can fully comprehend and assess the creative process, these comments echo recommendations by Cowdroy and De Graaff (2005) that using student-derived criteria to support student-based assessment of creative conceptualisation may be a more effective method for assessing creativity.

6. The role of holistic evaluations of quality

In the creative arts, teachers' professional judgements often include a holistic appraisal in addition to criteria-based evaluations of performances and creative works. All students acknowledged the necessity for a holistic evaluation of quality in addition to specific criteria.

James: I think that's important. To look at a piece overall and say is it effective? Because you can have a bunch of really cool [individual] elements in it, but does it make a good piece of music?

Students were open to approaches to performance assessment which implement complementary aspects of holistic and criteria-based methods (McPherson & Thompson, 1998; Stanley, Brooker & Gilbert, 2002). Sadler (2009) promotes such a process, arguing that encouraging students to develop holistic evaluative skills reduces the need for teacher-derived feedback. Students were even willing to sacrifice high grades, accepting that individual criteria might not be met, because their growth as musicians and an improvement in overall quality were more important goals.

Peter: So there are some subjects where the end is the means. You get a good mark because you put in the work—just you're meeting what is needed. In major [performance] study, it's a little bit different… the assessment is a requirement, but it's not always the most important thing. It's about the skills that you're learning.

Marie: Let's sacrifice a few marks in this piece of assessment, for the player that I want to be in a few years' time. I am willing to say I won't get a HD [high distinction] for this, but the skills that I'm learning are more important to me than my mark.
Themes from teacher focus group sessions

Results from focus group sessions with the students' teachers provided further insight into the perceived relationship and relevance of the six CAPA TLOs to current assessment practices. Coding of the teacher focus group transcripts yielded nine key issues, four of which are explored below: 1) Relationship of current assessment practices to TLOs, 2) The role of teachers' standards in assessing student performances, 3) Importance of course profiles in assessment, and 4) Recent changes in the assessment of music performance.

1. Relationship of current assessment practices to TLOs

In discussing how their assessment practices measured achievement of the CAPA TLOs, all interviewees agreed that current methods of assessment successfully addressed the CAPA TLOs in an ongoing, developmental manner.

Richard: I think we do well, both in our major study and in some of the supporting studies courses that realise creative product. For us it's all got to do with implementing action research paradigms, which is what is tacitly hinted at here.

Daniel: Sure, we do a lot of those. It's ongoing every semester. You could quantify the extent of the assessment, but I think we assess against this criteria in an ongoing way.

During discussions on this topic teachers displayed some concern about appropriately interpreting the degree to which students could be seen to demonstrate learning outcomes, especially as students entered QCGU with varying levels of skills, knowledge and prior performance experience.

Nikki: [the TLOs] could be done on so many different levels. It actually could be a very shallow level and still qualify for these categories. Or it could be something in great depth. It says demonstrate skills, but it doesn't say to what level.

Daniel: Some come in here more equipped to begin with. There's an entry requirement, but when they eventually turn up, they've got very varied backgrounds.

Acknowledging the various levels of student competency, teachers questioned the extent to which the CAPA TLOs represented a professional industry standard – a theme also explored in the student focus groups.

Richard: So is it a competency post a bachelor's degree? Or is it a competent industry standard criteria?

Zoe: [in the TLO document] there was an expectation of when a student finished a Masters that they would be at a professional level. So the inference of that is if you backtrack two or three years that we should not be expecting them to be at a professional standard at the end of their degree.
While the TLO document suggests that graduates from degrees in the creative and performing arts go on to become practice-led "front-line professionals" (2010, p. 11) and are prepared to "practise as a professional in the field" (2010, p. 13), the TLO outcome statements are not specifically identified as equating to professional standards at either the Bachelor or Masters level. The very notion of "professional standards" is contested in disciplines that lack professional bodies responsible for ensuring that all accredited practitioners in a given field are demonstrably competent to practise, as is the case in professions such as medicine or teaching. Nevertheless, teachers expressed a firm understanding of what constitutes a professional standard, and noted that students also tended to judge themselves against the standard of those working in the industry.

2. The role of teachers' standards in assessing student performances

In addition to course criteria and the CAPA TLOs, teachers were cognisant of the significant role that their own experience as professional musicians played in assessing their students. Responding to the question "To what extent do you rely on the experience of professional performance expectations and previous student performances at the same year level when you go about grading?", the focus group participants offered these comments:

Cheryl: I think we all mark according to our own professional experience—we're all very high level professionals—and what we've done as educators over a number of years.

Nikki: We're provided with a list of criteria but we also bring to it our own intrinsic internal criteria. We're wrestling with the internal and external and trying to make sense of it all.

Teachers appreciated that Griffith University uses criteria-based assessment strategies, and their main task was therefore to assess students' performances against specific predetermined criteria. Comparisons with professional standards, and experience in assessing similar performances in previous years, did play an important role in grading, even though such factors were not explicitly stated in the ECPs or in the CAPA TLO statements. Interviewees did not regard this as a conflict, and reference to consistency in standards is included in the Griffith University Assessment Policy (Griffith University, 2013).

3. Importance of course profiles in assessment

As standards and criteria are embedded in the course profile, teachers were asked to what extent they referred to the course profile prior to awarding marks for students' creative work.

Katie: Not often. I think we've written them in most instances.

Richard: If you're lucky enough to design the course and have written it, then you are very involved in the criteria for assessment. So when you get a situation where the actual writing of courses is done at arm's length from where you're actually delivering it, [that's where] things go wrong, mistakes happen.

As reported earlier, these statements relate to the research of Hay and MacDonald (2008), who note that music teachers—in their capacity as professional musicians—tend to internalise performance criteria and standards over time. Thus, their judgments often rely only loosely on the actual institutional criteria, with the teachers' own values, beliefs and standards for performance serving as alternatives. This cross-referencing of individuals' professional standards with institutional standards is not necessarily problematic and may even serve to ensure the appropriateness of institutional standards in a broader professional context. A more serious problem in an environment of prescribed criteria and standards can occur when teachers take into account factors such as student behaviour, attitudes, and motivation, as well as their memory of past performances, none of which should influence the assessment of a particular performance. A certain loss of validity and reliability is possible when these factors are at play.

Tacit individual criteria and standards were seen to play a central part in the assessment process. While external criteria were very important, teachers often made judgments according to a student's demonstrated ability and according to each student's unique combination of strengths and weaknesses, as is recommended by Sadler's concept of backwards assessment in Chapter 2. In general, given adherence to both external and internal subjective criteria, interviewees granted that gross disagreement in panel assessment almost never occurred, and that when significant differences were noted, consensus was always reached through discussion.

4. Recent changes in the assessment of music performance

In reflecting on the skills and knowledge stated in the CAPA TLOs, teachers felt it was important to highlight the influence of recent changes to assessment processes on the ways they currently assess their students.

Katie: Over the last 10 years I've noticed that the university has a lot more emphasis on assessment and there's a lot more information on different types and approaches to assessment and why we assess and how we assess. I've noticed I move towards more formative assessment. Ten years ago I wouldn't even know what that was, much less why it was relevant.


Simon: I think that that one size fits all approach is actually disadvantaging us because in all our separate areas we are very aware of the needs of our students to develop who they are and what they can do. Equality—that's where one of the issues [lies]: that you couldn't have a singer singing for that length of time, whereas you could have an instrumentalist.

This conversation led to further discussion on issues of balancing more recent criteria-based assessment practices with traditional forms of holistic assessment in the creative and performing arts.

Zoe: I think that the research is showing that the fact that Conservatoriums have had to adapt to university models has meant that it's become much more rigorous in ticking the boxes and doing the criteria, and moving away from that traditional model of discussing and moderation in a panel. I hold on very strongly to the Gestalt idea of the whole being different from the sum of the parts. Really, that's where some of us go 'oh', I think that that's a 'distinction' student.

Richard: If you actually gave them marks for everything—there's so much bleeding between all of the different elements. A singer's intonation could be problematic because their breathing is not right. So you don't give them a mark for breathing and give them a mark for intonation. There's too much that is actually enmeshed. So the idea that we mark with this sense of Gestalt, I think, is a very old principle for performing arts. I feel very comfortable with that idea.

Cheryl: The semantics is a primary issue in interpreting what's in front of you in terms of criteria. Everybody is going to interpret them differently because they're going to understand the semantics differently. So there would have to be a lot of work put into constructing criteria that could come anywhere close to our internal experience of listening and coming to those judgements in the holistic way.
Sadler (2009) notes that one of the challenges of using explicit grading models is that "experienced assessors often become aware of discrepancies between their global and their analytic appraisals" (p. 7). McPherson and Thompson (1998) and Stanley, Brooker and Gilbert (2002) suggest an approach to performance assessment which implements aspects of both holistic and criteria-based methods "to ensure reliable and valid evaluations of student achievement" (Blom & Poole, 2004, p. 115). In Chapter 11, Newsome also advocates a combined holistic/criteria approach, and in Chapter 9, Blom, Stevenson and Encarnacao describe a criteria approach that is accommodated in rubrics.

Conclusions

Results from interviews with Bachelor of Music students at the Queensland Conservatorium reveal that assessment plays a key role in their journey to identify and work towards the highest standards of creative expression. It is evident that students want assessment to enhance their development as musicians and their capacity to evaluate their own work as professionals, and that they have strong views on what constitutes fair and valid assessment practices. They are—as Sadler (2013) reminds us—capable of efficiently recognising quality in a holistic manner and then decomposing their judgments to extract relevant and valid criteria. The data suggest, however, that the students in this study tend to rely heavily on teacher-derived micro criteria as essential evaluation support, which diminishes their experience in developing critical appraisal skills and, ultimately, their role as collaborators in the assessment process.

Students viewed alternative forms of assessment (such as peer, self- and progressive assessment practices) as positive alternatives to traditional authoritarian tasks, and identified them as effective ways to begin developing skills in assessing quality in creative works. They also recognised the importance of being able to interpret teacher feedback and interlace this with their own self-assessment to work towards desired outcomes.

While the teachers felt that current music performance assessment tasks addressed the TLOs effectively, ascertaining what constitutes a professional standard, and the place of professional standards in the assessment process, was something teachers felt it important to address within the context of their institution. This was particularly relevant given that students entered the university with varying levels of skills and performance experience.
Teachers acknowledged that they brought professional standards, as well as their knowledge of previous years' performance standards, into their assessment of student work. As students valued their teachers' experience in the industry and aimed to develop skills to a high industry standard, this was seen as relevant and advantageous by both groups of participants. Recent changes from holistic to criteria-based assessment were addressed by students and teachers, with both groups agreeing that a balance between these two types of assessment is essential for fair and valid assessment of musical performances.

The abundance of pertinent information garnered from tertiary music students and their teachers in this study has implications for the design and application of assessment in the
creative and performing arts. Taking note of how students and teachers perceive and implement traditional and innovative assessment strategies, and the role they play in students' growth as evaluators of quality, is critical to designing more effective, relevant and valid assessment tools.

Reference List

Blom, D. A., & Poole, K. (2004). Peer assessment of tertiary music performance: Opportunities for understanding performance assessment and performing through experience and self-reflection. British Journal of Music Education, 21(1), 111–125. doi: 10.1017/S0265051703005539

Bostock, S. (2000). Student peer assessment. A workshop at Keele University, May 2000.

Boud, D. (1995). Enhancing learning through self-assessment. London: Kogan Page.

Boud, D. (2000). Sustainable assessment: Rethinking assessment for the learning society. Studies in Continuing Education, 22(2), 151–167. doi: 10.1080/713695728

Brown, G., & Hirschfeld, G. (2008). Students' conceptions of assessment: Links to outcomes. Assessment in Education: Principles, Policy & Practice, 15(1), 3–17. doi: 10.1080/09695940701876003

Bryce, J. (1997). Evaluative assessment to enhance student learning. Counterpoint: Australian Council for Educational Research, 25–31.

Butcher, A. C., & Stefani, L. J. (1995). Analysis of peer, self- and staff-assessment in group project work. Assessment in Education, 2(2), 165–186. doi: 10.1080/0969594950020204

Cowdroy, R., & De Graaff, E. (2005). Assessing highly-creative ability. Assessment & Evaluation in Higher Education, 30(5), 507–518. doi: 10.1080/02602930500187113

Cox, J. (2010). Admission and assessment in higher music education. AEC Publications.

Daniel, R. (2004). Peer assessment in musical performance: The development, trial and evaluation of a methodology for the Australian tertiary environment. British Journal of Music Education, 21(1), 89–110. doi: 10.1017/S0265051703005515

Fautley, M. (2007). Assessment for learning in music.
Retrieved from: http://www.teachingmusic.org.uk/


Griffith University Assessment Committee (January, 2013). Assessment Policy. Retrieved from: http://policies.griffith.edu.au/pdf/Assessment Policy.pdf

Hay, P., & Macdonald, D. (2008). (Mis)appropriations of criteria and standards-referenced assessment in a performance-based subject. Assessment in Education: Principles, Policy & Practice, 15(2), 153–168. doi: 10.1080/09695940802164184

Hodgeman, J. (1997). The development of peer and self assessment strategies for a design and project-based curriculum. Retrieved from: http://ultibase.rmit.edu.au/Articles/dec97/hodgml.htm

Holmes, J., & Fountain, W. (2010). Learning and teaching academic standards project: Creative and performing arts learning and teaching academic standards statement. Sydney: Australian Learning and Teaching Council.

Majdoddin, K. (2010). Peer assessment: An alternative to traditional testing. Modern Journal of Applied Linguistics, 2(5), 396–405.

Marton, F., & Säljö, R. (1997). Approaches to learning. In F. Marton, D. Hounsell, & N. Entwistle (Eds.), The experience of learning: Implications for teaching and studying in higher education (pp. 39–59). Edinburgh: Scottish Academic Press.

McLaughlin, P., & Simpson, N. (2004). Peer assessment in first year university: How the students feel. Studies in Educational Evaluation, 30, 135–149.

McMahon, T. (1999). Using negotiation in summative assessment to encourage critical thinking. Teaching in Higher Education, 4(4), 549–554.

McPherson, G., & Thompson, W. (1998). Assessing music performance: Issues and influences. Research Studies in Music Education, 10 (June), 12–24. doi: 10.1177/1321103X9801000102

Nicol, D., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218. doi: 10.1080/03075070600572090

Race, P. (1998). Practical pointers in peer assessment. In S. Brown (Ed.), Peer assessment in practice (pp. 113–122). Birmingham: SEDA.


Rainsbury, E., Hodges, D., Sutherland, J., & Barrow, M. (1998). Academic, employer and student collaborative assessment in a work-based cooperative education course. Assessment and Evaluation in Higher Education, 23(3), 313–325. doi: 10.1080/0260293980230307

Rust, C., O'Donovan, B., & Price, M. (2005). A social constructivist assessment process model: How the research literature shows us this could be best practice. Assessment & Evaluation in Higher Education, 30(3), 231–240. doi: 10.1080/02602930500063819

Sadler, D. R. (2005). Interpretations of criteria-based assessment and grading in higher education. Assessment and Evaluation in Higher Education, 30(2), 175–194. doi: 10.1080/0260293042000264262

Sadler, D. R. (2009). Indeterminacy in the use of preset criteria for assessment and grading. Assessment & Evaluation in Higher Education, 34(2), 159–179. doi: 10.1080/02602930801956059

Sadler, D. R. (2010). Beyond feedback: Developing student capability in complex appraisal. Assessment & Evaluation in Higher Education, 35(5), 535–550. doi: 10.1080/02602930903541015

Sadler, D. R. (2013). Opening up feedback: Teaching learners to see. In S. Merry, M. Price, D. Carless & M. Taras (Eds.), Reconceptualising feedback in higher education: Developing dialogue with students (pp. 54–63). London: Routledge.

Sambell, K., McDowell, L., & Brown, S. (1997). 'But is it fair?': An exploratory study of student perceptions of the consequential validity of assessment. Studies in Educational Evaluation, 23(4), 349–371. doi: 10.1016/S0191-491X(98)00012-1

Somervell, H. (1993). Issues in assessment, enterprise and higher education: The case for self, peer and collaborative assessment. Assessment and Evaluation in Higher Education, 18(3), 221–233. doi: 10.1080/0260293930180306

Spiller, D. (2011). Assessment matters: Self-assessment and peer assessment. Teaching Development Department, University of Waikato, N.Z.

Stanley, M., Brooker, R., & Gilbert, R. (2002).
Examiner perceptions of using criteria in music performance assessment. Research Studies in Music Education, 18, 43–52. doi: 10.1177/1321103X020180010601


Stefani, L. (1998). Assessment in partnership with learners. Assessment and Evaluation in Higher Education, 23(4), 339–350.

Struyven, K., Dochy, F., & Janssens, S. (2005). Students' perceptions about evaluation and assessment in higher education: A review. Assessment & Evaluation in Higher Education, 30(4), 325–341. doi: 10.1080/02602930500099102

Vu, T. T., & Dall'Alba, G. (2007). Students' experience of peer assessment in a professional course. Assessment & Evaluation in Higher Education, 32(5), 541–556. doi: 10.1080/02602930601116896

Wen, M. L., & Tsai, C.-C. (2006). University students' perceptions of and attitudes toward (online) peer assessment. Higher Education, 51(1), 27–44. doi: 10.1007/s10734-004-6375-8

White, E. (2009). Student perspectives of peer assessment for learning in a public speaking course. Asian EFL Journal, 33(1), 1–36.

White, R. (1994). Commentary: Conceptual and conceptional change. Learning and Instruction, 4(1), 117–121. doi: 10.1016/0959-4752(94)90022-1
