Recreational Sports Journal, 2014, 38, 82-95 http://dx.doi.org/10.1123/rsj.2013-0023 © 2014 NIRSA Foundation

Official Journal of the NIRSA Foundation www.RSJ-Journal.com

ORIGINAL RESEARCH

Effects of Assessment on Collegiate Recreation Programs

Dallin George Young, Laura A. Dean, Douglas Franklin, and William Kyle Tschepikow

Young is with the National Resource Center for The First-Year Experience and Students in Transition, University of South Carolina. Dean is with the Dept. of Counseling and Human Development Services, University of Georgia. Franklin is with the Dept. of Campus Recreation, Ohio University. Tschepikow is with the Dept. of Student Affairs Assessment, University of Georgia. Address author correspondence to Dallin Young at [email protected].

Collegiate recreation professionals were surveyed to determine whether, how regularly, and by what means they were conducting assessment. This descriptive study explored current assessment practices; surveyed resources used in assessment, including specific attention to CAS materials; and reported outcomes of assessment activities. Results indicated that more than 90% of respondents were engaged in assessment, regardless of institution type. Professional literature, professional development activities, and assessment teams were the most frequently used resources; approximately 40% reported using CAS materials. Most frequently reported outcomes of assessment were generally related to student staffing. Respondents also indicated that mission statements and professional staffing changed less frequently as a result of assessment efforts. Implications for the practice of assessment in collegiate recreation programs are discussed.

Keywords: outcomes, recreation standards

The call to assess and evaluate programs and services in higher education has been heard with increasing frequency in recent years (Astin & Antonio, 2012; Upcraft & Schuh, 1996). Professionals across campus have been challenged to evaluate their programs and assess the student learning and development outcomes resulting from their work, and those in collegiate recreation programs share in this challenge (Haines & Davis, 2008). The Council for the Advancement of Standards in Higher Education (CAS) first published standards and guidelines for recreational sports programs in 1986 (CAS, 2012), and subsequent revisions have kept the materials current. Through membership in CAS, the National Intramural and Recreational Sports Association (NIRSA) has collaborated with professionals across other functional areas to develop and disseminate standards of practice. NIRSA is the leading and largest professional association of collegiate recreation professionals, with over 4,000 members serving 7.7 million students (National Intramural and Recreational Sports Association, 2013).


The awareness and use of CAS standards for recreational sports are primarily associated with programs located in campus divisions of student affairs (Franklin, 2007), which make up 75% of NIRSA-affiliated programs (Stier, Schneider, Kampf, & Gaskins, 2010). Although CAS standards for recreational sports have been available since 1986, and in spite of the call for assessment, little research has been conducted to examine the outcomes of assessment activities or the effects of using the CAS standards to guide assessment efforts.

The current study was designed to examine the extent to which professionals working in collegiate recreation are assessing their programs, the resources being used, the effects of assessment activities, and the specific effects related to using CAS materials in assessment. For the purposes of this study, the terms collegiate recreation, campus recreation, and recreational sports are used interchangeably. Having a clear understanding of the effects of assessment can assist practitioners in planning effective and productive assessment projects for their programs and in evaluating their local results in relation to a larger context.

Review of the Literature

The first step in planning an assessment project is to determine criteria (Henderson & Bialeschki, 2002). "Evaluation refers to making decisions based on identified questions or criteria and supporting evidence . . . the goal of evaluation is to determine 'what is' compared [to] 'what should be'" (Henderson & Bialeschki, 2002, pp. 2–3). One way of identifying "what should be" is to use standards of practice that describe recommended ways of organizing and managing programs and services. For over 40 functional areas in higher education, the CAS standards and guidelines offer such recommendations (CAS, 2012).

Arminio and Gochenaur (2004) surveyed over 5,000 individual members from 22 CAS member associations; they found that 62.5% of the respondents had heard of CAS. The survey also asked in what ways CAS had positively influenced programs; the responses included assessment, justification, and expansion of current programs; clarification of mission and goals; emphasis on training of student staff; guidance for new programs; and influence on budgets. This survey did not, however, explore the specific effects of using CAS materials for assessment. Creamer (2003) called for additional research on the influence of CAS and its initiatives. He identified a number of areas for further inquiry, such as exploration of the ways that the CAS standards shape professional practice, including assessment and evaluation.

Tschepikow, Cooper, and Dean (2010) surveyed student conduct professionals to explore the effects of assessment and the use of CAS in that functional area. Of 115 respondents, 72% indicated that they were assessing their student conduct programs. Professional development was the most frequently cited resource used to support assessment practice (61%), and just over half (52%) of those who were conducting assessment reported using CAS materials in their processes. Respondents were also asked about the outcomes of their assessment activities. The most frequently reported was a change in existing learning and development outcomes (40%), followed by change in existing policies regarding sanctioning (39%), modification of the student staff training process (39%), and change in the existing code of conduct (38%). When the researchers looked at the relationship between assessment outcomes and use of CAS materials, they discovered that using CAS was significantly related to creating learning and development outcomes for the first time and to establishing a comprehensive assessment plan.


Within campus recreation, professionals have looked to colleagues for guidance about best practices. "The field of campus recreation has a rich history of practitioner-developed knowledge dispersed through an apprenticeship model that facilitated the growth of the profession from the early 20th century" (Franklin, 2008, p. 305). While this tradition continues, Franklin (2008) pointed out that "the complexity of campus recreation and the increased pressure to be more productive and accountable require a thoughtful and systematic approach to its practice" (p. 306).

Cissik (2009) suggested that assessment in campus recreation programs has existed for a long time but has been limited to "participation, satisfaction, and use" (p. 2), while assessment focused on learning outcomes has been difficult for recreational sports professionals. Similarly, Carr and Hardin (2010) offered that while "all recreational sport administrators can think of opportunities for learning based intervention" (p. 139), assessing achievement of those learning outcomes in quantifiable terms has been difficult. Haines and Davis (2008) described a comprehensive approach to campus recreation assessment, suggesting that all components—programs, facilities, management, and operations—must be studied. Such a broad approach creates the foundation for a valid assessment that can demonstrate the contribution that a recreation program makes to an institution and its students.

Although the literature contains a number of sources that support the importance of assessment in collegiate recreation and describe its essential components, as outlined above, little empirical research has been published that describes the practice of assessment in this field or the outcomes of that practice.

Methods

The present descriptive study surveyed recreational sports professionals about the overall effects of assessment and evaluation activities in the programs on their campuses. Specifically, it was designed to answer the following research questions:

1. To what extent are higher education professionals who work in recreational sports assessing their programs?

2. What resources are higher education professionals who work in recreational sports using to assess their programs?

3. What are the outcomes of past assessment activities among higher education professionals who work in recreational sports?

4. Does the use of CAS materials have an effect on outcomes from past assessment activity performed by higher education professionals who work in recreational sports?

Sample

To reach the individual on the campus who would be in the best position to answer questions about assessment activities in recreational sports programs, we worked with NIRSA to identify a sample of directors of campus recreational sports programs.


Directors were selected for two reasons. First, people with the title of director are most likely to be in an organizational position to have knowledge of the extent of assessment activities across all organizational subunits. Second, this approach would solicit one response per institution. An e-mail invitation to participate in this study was sent to 633 directors at NIRSA member schools. Reminder e-mails were sent to encourage further participation over the two-week data collection period. This process yielded 206 responses, a 32.5% response rate.

Questionnaire

The study used a questionnaire designed to explore assessment practices in recreational sports programs. In particular, this questionnaire was designed to gather information about the extent to which recreational sports administrators use professional standards to assess their programs and the outcomes of these assessment efforts. Items on the questionnaire included a) questions about the resources used to guide the assessment, b) the use of materials provided by CAS, c) potential outcomes from past assessment based on the CAS standards, and d) demographics about the institution and the respondent.

Terms used in the questionnaire were defined for respondents. The term "assessment" was explained using Upcraft and Schuh's (1996) definition (any effort to gather, analyze, and interpret evidence that describes institutional, divisional, or agency effectiveness). We also provided definitions for terms related to materials created by CAS, such as CAS Self-Assessment Guides, CAS Standards, and the CAS "Blue Book." To provide further clarity, we provided a definition of recreational sports programs based on the definition included in the CAS standards for recreational sports programs (CAS, 2012). Recreational sports programs were defined on the questionnaire as any aspect of the primary functional unit, department, or agency in a division of student affairs responsible for enhancing students' fitness and wellness.

To enhance the face validity of the questionnaire, the research team asked a representative from NIRSA to provide feedback on content and response options. Based on this expert feedback, wording was adjusted to be more congruent with language used by professionals who work in recreational sports. Once complete, the questionnaire was constructed in an online data collection environment and a generic URL was created to include in invitations to participate in the research. Once collected, the data were manipulated and analyzed using SPSS version 19.0.

Limitations

After the data were collected, we found that a large number of respondents (56) dropped out of the survey at some point after the "Assessment Resources" section, which reduced the number responding to each subsequent question below the initial 206. This limits the generalizability of the responses, so caution should be taken in interpreting the results reported here for demographics, use of CAS materials, and outcomes of assessment efforts.


Results

Demographics

Because the demographics section was at the end of the questionnaire, the total number of respondents providing demographic information (n = 151) is lower than the total number of respondents (N = 206). The majority of respondents, 121 (80.7%), had earned a master's degree, while 13 (8.7%) had earned a bachelor's degree and 16 (10.7%) had earned a doctoral degree (Ed.D. or Ph.D.). The mean number of years respondents had worked in recreational sports programs was 19.7 years (SD = 10.8), with the number of years ranging from 1 to 43 and an interquartile range of 10–20 years. Interestingly, the mean number of years respondents reported working in student affairs was 16.6 (SD = 10.8), within a range of 0–43 years and an interquartile range of 7–16.5 years. This difference is likely due to such factors as the organizational location of recreational sports programs on campuses, professional identity connected to recreational sports rather than to student affairs, and career histories that include work in recreational sports programs in institutions outside higher education.

Of the 150 respondents providing information about their institutions, 102 (68.0%) indicated that they worked at a public institution and 48 (32.0%) at a private institution. There was a positive relationship between institutional size and the number of participants in the sample. The largest group of respondents, 64 (42.4%), came from large institutions (enrollment of more than 15,000 students), followed by 48 (31.8%) from medium-sized schools (5,001–15,000 students). The group with the fewest respondents was small institutions (fewer than 5,000 students enrolled) with 39 (25.8%).

Extent of Assessment Activity

To understand the extent of assessment activity, we asked respondents to indicate whether they were assessing any elements of their recreational sports programs. Of the 206 programs, 188 (91.3%) indicated that they were conducting assessments of their programs, while the remaining 18 (8.7%) responded that they were not. When the responses are disaggregated by institutional control, a slightly higher percentage of programs at public institutions (90.2%) than at private institutions (83.3%) reported conducting assessment. A chi-square test revealed that this difference is not statistically significant (χ2(1) = 1.456, n = 150, p = .228); an illustrative computation of this test is sketched below. These estimates, broken down by institutional control, are likely lower than the actual percentages because 56 respondents who answered "yes" to the question on conducting assessment dropped out of the survey before reaching the demographic section of the questionnaire.

Respondents who answered that they were not assessing their recreational sports programs were asked to indicate their reasons for not conducting assessment. The three most frequently cited reasons were a) limited time (50.0%), b) limited resources (50.0%), and c) planning to but have not yet (50.0%). Table 1 contains the list of reasons for not conducting assessment along with the reported frequency of each. Respondents were also given the chance to include other reasons why they were not conducting assessment; these included that assessment had been conducted in the past, lack of institutional motivation, lack of expertise, and lack of interest in assessment results.
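The following is a minimal illustrative sketch, not part of the original study, showing how the chi-square test reported above could be computed with SciPy in Python. The 2 x 2 cell counts are an assumption reconstructed from the reported group sizes (102 public and 48 private respondents) and percentages (90.2% and 83.3% conducting assessment); correction=False requests the uncorrected Pearson statistic.

    # Hypothetical reconstruction of the reported chi-square test; cell counts
    # are inferred from the published percentages, not drawn from the study data.
    from scipy.stats import chi2_contingency

    observed = [[92, 10],   # public institutions: assessing, not assessing
                [40,  8]]   # private institutions: assessing, not assessing

    chi2, p, df, expected = chi2_contingency(observed, correction=False)
    print(f"chi2({df}) = {chi2:.3f}, p = {p:.3f}")  # approximately chi2(1) = 1.456, p = .228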


Table 1  Reported Reasons for Not Conducting Assessment, Sorted by Frequency (n = 18)

Reason                          n      %
Planning to but have not yet    9     50.0
Limited resources               9     50.0
Limited time                    9     50.0
Lack administrative support     5     27.8
Lack assessment expertise       4     22.2
Other                           4     22.2
Not sure where to start         1      5.6
No incentive                    0      0.0

To further understand the extent of assessment activity, participants were asked to indicate whether there was someone on staff in their recreational sports program whose designated responsibilities include assessment. Of the 148 responses, 76 (51.4%) indicated that there was a person on staff with assessment responsibilities, while 72 (48.6%) replied that there was not. The fact that only 51% reported having a person on staff with these responsibilities, compared with the more than 90% of institutions reporting that they were conducting assessment, suggests that much assessment activity is being carried out by individuals as an added responsibility or by individuals or organizations external to the department.

Use of Assessment Resources

Respondents were asked to indicate which resources they had drawn on to guide their assessment efforts. Professional literature, cited by 94 (50.0%) of the respondents conducting assessment, was the most frequently reported resource used to guide assessment. Nearly half of respondents (48.9%) indicated that they relied on professional development, such as a webinar, conference, or workshop, as a resource. An identical number of directors (48.9%) reported relying on a student affairs assessment team or office to support their assessment efforts. Table 2 contains a list of all resources and the reported frequency with which they were used to guide assessment.

The responses to the "other" option were also illustrative of resources drawn on to conduct assessment of recreational sports programs. These responses included the use of the American College of Sports Medicine Health/Fitness Facility Standards and Guidelines (ACSM, 2006); nationally available assessment tools, such as studentvoice (now CampusLabs), WEAVEonline, and the NIRSA Institutional Data Set; the experience and expertise of directors and staff in conducting assessment; and other professional resources, such as assessment tools from other schools and departments, professional best practices, and the position paper Learning Reconsidered (ACPA & NASPA, 2004).


Table 2  Reported Resources Used to Guide Assessment, Sorted by Frequency (n = 188)

Resource                                           n      %
Professional literature                           94     50.0
Professional development                          92     48.9
Student affairs assessment team/office            92     48.9
CAS Standards (CAS Blue Book)                     56     29.8
Institutional research staff                      56     29.8
CAS Self-Assessment Guide                         50     26.6
Institutional assessment team/committee/office    46     24.5
External consultant                               38     20.2
Other                                             16      8.5
None                                               6      3.2
Training provided by regional accrediting body     5      2.7

Respondents who reported conducting assessment of their recreational sports programs were asked to identify the materials used to guide assessment and evaluation efforts. Of the 188 institutions that reported conducting assessment of their recreational sports programs, 76 (40.4%) indicated using CAS materials, either the CAS Standards (CAS Blue Book) or the CAS Self-Assessment Guide, to guide their assessment. When asked why they chose to use CAS materials in their assessment efforts, a majority of respondents (67.1%) indicated that it was because those materials were based on the accepted practice of the field. Approximately half of respondents reported that they used CAS materials because of the credibility of the organization (50.0%) or because of the endorsement of internal stakeholders (47.4%). Relatively few respondents indicated using CAS materials because they offer the only available standardized format for assessment (11.8%) or because of their ease of use (9.2%). Table 3 contains a full list of the reported reasons why CAS materials were used in assessment, with frequencies.

When asked whether CAS materials have become a regular part of their assessment activities, 38 directors (61.3%) indicated that the materials had, while 24 (38.7%) indicated they had not. Further, 59 directors (96.7%) said they would recommend CAS to a colleague in recreational sports programs at a different campus, while two (3.3%) said they would not.

Outcomes of Assessment Efforts

The questionnaire included a section in which directors were asked to review a list of items related to their programs that might have changed, or not, as an outcome of past assessments. They were then asked to indicate whether change occurred for each item by selecting one of three responses: a) yes, b) no—already in compliance, or c) no—for any other reason.


Table 3  Reported Reasons CAS Materials Were Used in Assessment, Sorted by Frequency (n = 76)

Reason                                                      n      %
Based on accepted practice of the field                    51     67.1
Credibility of CAS organization                            38     50.0
Endorsement by internal stakeholders                       36     47.4
Includes learning and development outcomes                 25     32.9
Standards are appropriate for our program                  23     30.3
Endorsement/recognition by external stakeholders           22     28.9
Standards are comprehensive                                21     27.6
Offers only available standardized format for assessment    9     11.8
Ease of usage                                               7      9.2
Other                                                       5      6.6

Among the 10 items that directors most frequently reported changing due to assessment results were four related to student staff: a) "Improved staff development for student staff" (77.8%), b) "Improved training processes for student staff" (75.4%), c) "Improved performance evaluation processes for student staff" (57.0%), and d) "Improved selection process for student staff" (50.0%). The remaining six most frequently changed items were the following: a) "Expanded program offerings" (75.4%), b) "Improved promotional materials" (58.3%), c) "The implementation of new technology to increase program efficiency" (56.3%), d) "A change in existing learning and development outcomes" (53.3%), e) "Modified budget" (53.0%), and f) "The establishment of a comprehensive assessment plan" (52.6%).

The top two items for which directors most frequently reported no change because their programs were already in compliance both had to do with the mission statement: the creation of (75.5%) and the change to (58.7%) a mission statement. Rounding out the top five outcomes where no change was reported because programs were already in compliance were a) "Improved procedures for participant use of equipment" (55.6%), b) "Improved selection process for professional staff" (51.9%), and c) "Improved evaluation process for professional staff" (51.4%). Of note is that among the top 10 most frequent responses in this category were four relating to human resources (selection process for professional staff, evaluation process for professional staff, and staffing patterns for student and professional staff), indicating that no change was needed in these areas because they were deemed adequate through the assessment process.

The third option given to directors was that the specified outcome did not occur for any reason other than already being in compliance. For only two outcomes did more than 50% of responding directors report that the outcome did not occur for any other reason: a) "The hiring of an external consultant" (57.3%) and b) "An increase in professional staff salaries" (56.0%). The outcome with the next highest frequency was "Modified budget," with 29.9% of directors indicating that it was not an outcome for any other reason, more than 26 percentage points lower than the previous outcome.


Influence of the Use of CAS Materials on Outcomes of Assessment Efforts

To understand the effect of the use of CAS materials on outcomes, the research team identified the group of respondents that indicated using CAS materials and the group that did not. After chi-square analyses to determine whether there were differences in reported outcomes by the use of CAS materials, we found significant differences in 11 out of 30 outcomes. Analyses showed significant results for two additional outcomes ("improved staff development for student staff" and "improved training process for student staff"); however, for each of these items two of the six cells had expected counts below five, violating the guideline that no more than 20% of cells should have an expected count below five (an illustrative check of this assumption appears at the end of this section). Thus, these outcomes were not reported as having a significant difference by CAS usage.

An examination of the 10 significant differences based on the use of CAS standards shows that they reflect both areas where use of CAS more frequently leads to changes and areas where use of CAS is related to already being in compliance. Only one outcome fell into the first type of difference, with more "yes" responses for programs that used CAS materials and more "no—already in compliance" and "no—any other reason" responses for programs not using CAS materials in assessment: "Improved policies that address cultural diversity issues." Thirty-one (55.4%) directors who had used CAS materials in assessment indicated that this was one of the outcomes of past efforts. Of the directors who did not use CAS materials, 33 (50.8%) responded that this was not an outcome because there was no need, and 18 (27.7%) responded that this was not an outcome for some other reason.

The second type of significant difference was the most common: programs using CAS materials more often indicated that the outcome did not occur because they were already in compliance, whereas programs that did not use CAS materials more often indicated that the outcome did occur. The five outcomes for which this was the case were a) "Improved staffing pattern for student staff," b) "Improved promotional materials," c) "Improved staffing pattern for professional staff," d) "Revised organizational chart," and e) "The implementation of new technology to increase program efficiency."

Three outcomes followed the third pattern, in which programs that used CAS materials in assessment efforts more frequently responded both that the outcome occurred and that the outcome did not occur because they were already in compliance. These outcomes were a) "Improved performance evaluation process for student staff," b) "Improved selection process for professional staff," and c) "Improved staff development for professional staff."

The final pattern of significant difference was evident for just one outcome. Respondents indicated a change for the item "Improved evaluation process for professional staff" with equal frequency among programs that did and did not use CAS materials; however, programs that used CAS materials more frequently reported that the outcome did not occur because they were already in compliance.

It is worth noting that among the 10 outcomes where response patterns were significantly different, eight were related to human resources. In addition, in nine of the 10 significant outcomes, programs that used CAS materials more frequently reported that they made no change because they were already in compliance with the standard.
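The following is a minimal illustrative sketch, not part of the original analysis, of the expected-count check described above, written in Python with SciPy. The 2 x 3 table is hypothetical (CAS use by the three response options) and is chosen only to show a case in which two of six cells have expected counts below five.

    # Hypothetical 2 x 3 table: rows are CAS use, columns are the three responses
    # (yes / no, already in compliance / no, any other reason).
    from scipy.stats import chi2_contingency

    observed = [[40, 12, 3],   # used CAS materials
                [50,  8, 4]]   # did not use CAS materials

    chi2, p, df, expected = chi2_contingency(observed)

    # Guideline applied in the study: no more than 20% of cells should have an
    # expected count below five.
    low_cells = (expected < 5).sum()
    if low_cells / expected.size > 0.20:
        print(f"{low_cells} of {expected.size} expected counts are below 5; "
              "the chi-square result should not be reported.")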


Discussion and Implications

Collegiate recreation professionals have heard and responded to the challenge to assess programs and services. More than 90% of respondents reported conducting assessment; among those who did not, limited time and resources were the primary reasons cited. Notably, only about half of respondents (51%) indicated that there was a person on staff charged specifically with assessment as part of the job description, suggesting that assessment activity is often carried out by individuals as an integrated part of their jobs rather than as a separate responsibility. This is consistent with the emphasis across higher education that assessment should be a regular part of our work rather than something added on separately (Astin & Antonio, 2012).

The extent to which respondents reported conducting assessment is greater than that reported in a previous, similar study focused on student conduct, in which just 72% reported conducting assessment (Tschepikow, Cooper, & Dean, 2010). If the respondents in the current study are representative of the field, collegiate recreation has embraced the call to assess at a higher rate than some campus colleagues.

When asked about the resources used to support the practice of assessment, respondents reported relying heavily on professional literature and professional development, as well as a campus assessment team or office. For professional associations such as NIRSA, and for campus divisions planning or funding professional development opportunities, this is important information: practitioners are looking to literature and training to support their high level of assessment activity.

Somewhat less than half of the respondents (40%) used CAS materials in assessment. The usage rate of CAS materials may have been lower because some programs are not organizationally housed within student affairs (Franklin, 2007); however, only 25% of programs fall into this category (Stier, Schneider, Kampf, & Gaskins, 2010), so organizational location does not fully explain why six out of every 10 campus recreation programs do not include CAS materials in their assessment efforts. Of those who used CAS materials, about two-thirds (67%) said that it was because those materials represent the accepted practice of the field. Other important reasons included the credibility of CAS and the endorsement of internal stakeholders; this may reflect the extent to which individual campuses or departments are aware of and supportive of CAS materials. Of those who have used CAS, most (61%) reported that CAS materials have become a regular part of their assessment activities, and a large majority (97%) would recommend CAS to a colleague in collegiate recreation at a different campus. Although CAS is not the most commonly used resource for assessment, nearly all of those who have used CAS would recommend it to others.

Outcomes of Assessment

Because conducting assessment takes time and effort, it is important to know about the results of that investment. After all, the goal of assessment is to gather evidence that leads to improved practice (Upcraft & Schuh, 1996). Respondents were asked about the changes that they made as a result of their assessment activities. The top outcome, reported by 75% of respondents, was expanded program offerings. Given the importance of keeping program options current and relevant, it is not surprising that assessment often results in change in this area.


Another outcome reported by a majority of respondents was a change in learning and development outcomes; again, this represents an important programmatic element, and one that is receiving significant attention in higher education today. Four of the top 10 outcomes reported related to improvements in working with student staff, including improving selection, training, staff development, and performance evaluation. This may be because many programs have not previously focused specifically on staffing issues related to student staff and because the CAS standards emphasize the importance of this group (CAS, 2012). Interestingly, processes for selection and evaluation of professional staff were often found to be solid already and not in need of change; this reinforces the idea that, before assessment, the focus has tended to be on professional staff rather than student staff.

The remaining six changes identified most frequently as resulting from assessment included a) expanding programs, b) improving promotional materials, c) implementing new technology, d) changing learning and development outcomes, e) modifying budgets, and f) creating a comprehensive assessment plan. Each of these was reported by over half of the respondents. While these do not group easily into a single category, they do represent the kinds of wide-ranging improvements that a solid assessment can yield.

The purpose of assessment is twofold: to identify what is working well and to identify what needs to be changed. For some of the outcomes listed on the survey, respondents reported that there was no need for change because their programs were already in compliance with standards. This may indicate that standards are being used to build programs or that changes had been made as a result of previous assessments. The top area where no need for change was indicated had to do with mission statements, both their creation and revision. Given the importance of mission statements in providing a solid foundation for programs (Barham & Scott, 2006; Haines & Davis, 2008), it is noteworthy that collegiate recreation programs report having solid mission statements in place. Other program elements found not to be in need of change included selection and evaluation of professional staff, as described above, and procedures for participant use of equipment.

Overall, the area of human resources demonstrated existing strength. Four of the top 10 areas found to need no changes were related to staffing practices. Given the field's grounding in multiple theories that focus on human development and learning (Franklin & Hardin, 2008), this strength in human resources is not surprising; rather, it confirms the priority that programs place on staffing practices.

Differences Attributable to CAS

One of the purposes of this study was to determine whether the use of CAS in assessment yielded results that were different from those achieved using other approaches. Significant differences were identified in 11 out of 30 outcomes included on the survey. As described above, the differences reflected either higher rates of reported change on an item when using CAS or higher rates of already being in compliance with CAS standards. An examination of these differences reveals factors that are likely related both to practice in the field and to the structure of the CAS standards themselves. For example, the item identified most often as being changed as a result of a CAS-based assessment was "Improved policies that address cultural diversity issues."


Addressing cultural diversity is an area of emphasis across higher education today, and it certainly is emphasized within the CAS standards (CAS, 2012). Although a comparison of the various standards and benchmarks that may have been used in assessment was beyond the scope of this study, it seems likely that the CAS standards may address cultural diversity more overtly than other resources and so would generate change in this area at a higher rate.

Other field-specific standards may have different emphases. For example, respondents who did not use CAS for assessment were more likely to report making changes in "implementation of new technology to increase program efficiency." Although CAS addresses the use of technology, it seems likely that other means of assessment may focus more specifically on recreation-specific technologies and therefore generate resulting changes at a higher rate.

In a number of other areas, CAS-based assessments confirmed that programs were already meeting the standards. Several of these related to existing strength in staffing practices, as noted above. Assessments reflected staffing patterns for both student and professional staff, as represented by the organizational chart, that were appropriate, as well as training for student staff that needed no change.

Implications

The use of assessment is crucial in collegiate recreation (Haines & Davis, 2008), and the results of this study reveal that professionals in this field have embraced assessment at an exceptionally high rate. The challenge, therefore, is not to convince them to assess their programs, but rather to help them refine their choices of assessment resources and mechanisms.

Collegiate recreation programs are complex, comprising a wide variety of components and specializations (Franklin & Hardin, 2008), and a number of industry and professional standards are available that describe good and desirable practice (Franklin, 2008). These documents include the CAS Standards and Guidelines for Recreational Sports Programs (CAS, 2012), the most recent version of which incorporated the NIRSA General and Specialty Standards for Collegiate Recreational Sports and Assessment Instruments (Franklin, 2008). As Franklin further noted, however, the NIRSA General and Specialty Standards continue to be used by many programs, particularly with reference to specialty programs.

The use in assessment of different standards for different purposes no doubt leads to differing outcomes. This is not necessarily a problem; however, it is important that professionals analyze the differences among the standards they are considering as the basis for assessment and intentionally choose those that best reflect the scope and focus needed for the particular project. As Henderson and Bialeschki (2002) emphasized, the identification of appropriate criteria is the foundation of an effective assessment and evaluation process.

Based on the results of this study, use of CAS materials is more likely to result in close examination of human resource-related practices, including those related to student staff. Use of other approaches to assessment, on the other hand, seems more likely to result in specific operational changes such as those related to technology, facility scheduling, or rules governing participant safety. CAS is designed to apply to a wide range of functional areas and to address issues that are necessary for good practice in any area (CAS, 2012). While specialty standards are incorporated, the nature of the CAS standards is broad and emphasizes a comprehensive approach to program assessment.


This study supports the use of CAS for assessment across a complex department, particularly when CAS has good recognition and acceptance on the specific campus. Since campus recreation administrators look to literature and professional development to support their already high level of assessment activity, NIRSA and other organizations might explore ways to help them refine the focus of assessment projects and identify the appropriate criteria to fit their needs. Assessment is happening in collegiate recreation at an exceptional rate; the next challenge is to ensure that the best tools are chosen to lead to continued program improvement.

References

American College of Sports Medicine. (2006). ACSM's health/fitness facility standards and guidelines (3rd ed.). Champaign, IL: Human Kinetics.

American College Personnel Association & National Association of Student Personnel Administrators. (2004). Learning reconsidered: A campus-wide focus on the student experience. Washington, DC: Authors.

Arminio, J., & Gochenaur, P. (2004). After 16 years of publishing standards, do CAS standards make a difference? College Student Affairs Journal, 24(1), 51–65.

Astin, A. W., & Antonio, A. L. (2012). Assessment for excellence: The philosophy and practice of assessment and evaluation in higher education (2nd ed.). Lanham, MD: Rowman and Littlefield.

Barham, J. D., & Scott, J. H. (2006). Increasing accountability in student affairs through a new comprehensive assessment model. College Student Affairs Journal, 25(2), 209–219.

Carr, J. W., & Hardin, S. (2010). The key to effective assessment: Writing measurable student learning outcomes. Recreational Sports Journal, 34(2), 138–144.

Cissik, J. M. (2009). Assessment and the recreational sports program. Recreational Sports Journal, 33(1), 2–11.

Council for the Advancement of Standards in Higher Education. (2012). CAS professional standards for higher education (8th ed.). Washington, DC: Author.

Creamer, D. G. (2003). Research needed on the use of CAS standards and guidelines. College Student Affairs Journal, 22(2), 109–124.

Franklin, D. S. (2007). Student development and learning in campus recreation: Assessing recreational sports directors' awareness, perceived importance, application of, and satisfaction with CAS standards (Electronic thesis or dissertation). Retrieved from https://etd.ohiolink.edu/

Franklin, D. S. (2008). Campus recreation careers and professional standards: Industry and professional standards in campus recreation. In NIRSA: National Intramural-Recreational Sports Association, Campus recreation: Essentials for the professional (pp. 305–312). Champaign, IL: Human Kinetics.

Franklin, D. S., & Hardin, S. E. (2008). Philosophical and theoretical foundations of campus recreation: Crossroads of theory. In NIRSA: National Intramural-Recreational Sports Association, Campus recreation: Essentials for the professional (pp. 3–20). Champaign, IL: Human Kinetics.

Haines, D. J., & Davis, E. A. (2008). The art of assessment. In NIRSA: National Intramural-Recreational Sports Association, Campus recreation: Essentials for the professional (pp. 253–266). Champaign, IL: Human Kinetics.

Henderson, K. A., & Bialeschki, M. D. (2002). Evaluating leisure services: Making enlightened decisions (2nd ed.). State College, PA: Venture.


National Intramural and Recreational Sports Association. (2013). About us. Retrieved from http://www.nirsa.org/about-nisra.html

Stier, W. F., Schneider, R. C., Kampf, S., & Gaskins, B. P. (2010). Job satisfaction for campus recreation professionals within NIRSA institutions. Recreational Sports Journal, 34(2), 78–94.

Tschepikow, W. K., Cooper, D. L., & Dean, L. A. (2010). Effects of CAS standards on assessment outcomes in student conduct programs. Journal of Student Conduct Administration, 3(1), 6–24.

Upcraft, M. L., & Schuh, J. H. (1996). Assessment in student affairs: A guide for practitioners. San Francisco, CA: Jossey-Bass.
