
Systems for Assessing the Effectiveness of Management in Protected Areas

MARC HOCKINGS

Marc Hockings (e-mail: [email protected]) is a senior lecturer in the School of Natural and Rural Systems Management at the University of Queensland (Gatton Campus), Queensland 4343, Australia, and vice chair of the IUCN (World Conservation Union) World Commission on Protected Areas. © 2003 American Institute of Biological Sciences.

Since the mid-1990s, numerous methodologies have been developed to assess the management effectiveness of protected areas, many tailored to particular regions or habitats. Recognizing the need for a generic approach, the World Commission on Protected Areas (WCPA) developed an evaluation framework allowing specific evaluation methodologies to be designed within a consistent overall approach. Twenty-seven assessment methodologies were analyzed in relation to this framework. Two types of data were identified: quantitative data derived from monitoring and qualitative data derived from scoring by managers and stakeholders. The distinction between methodologies based on data types reflects different approaches to assessing management. Few methodologies assess all the WCPA framework elements. More useful information for adaptive management will come from addressing all six elements. The framework can be used to adapt existing methodologies or to design new, more comprehensive methodologies for evaluation, using quantitative monitoring data, qualitative scoring data, or a combination of both.

Keywords: protected area, evaluation, monitoring, management effectiveness, IUCN

Protected areas are the cornerstone of most conservation strategies. They protect biodiversity, safeguard ecosystem health, and provide an array of ecosystem services, such as fresh drinking water, places in which to relax, storehouses of genetic material, and reservoirs of wild plants and animals that can contribute to species populations in surrounding areas. Protected areas also house human communities, providing livelihoods and sustenance. However, investing in the selection, designation, and management of protected areas only makes sense if there is a reasonable chance that these areas can continue to provide these services in the future. Indeed, this assumption underpins the entire philosophy of protected area management.

To maximize the potential of protected areas, managers and policymakers need information on the strengths and weaknesses in their management and on the threats and stresses that they face. As understanding of the extent of threats to protected areas has grown, focus on the issue of management effectiveness has heightened, and there is increasing pressure on those responsible for protected areas to monitor their effectiveness. The reasons for assessing management effectiveness include the managers' desire to adapt and improve their management strategies, the need to improve planning and priority setting, and the increasing demands for reporting and accountability being placed on managers both nationally and internationally (Hockings et al. 2000).

Concerns about threats to protected areas and about how to manage these areas effectively have existed since the first reserves were declared. In Yellowstone National Park, the first national park in the United States, concern about a single, part-time superintendent's ability to manage Yellowstone adequately led to the US Cavalry taking control in the early years of the park (Wright and Mattson 1996). Despite the spectacular growth in protected areas over the past half-century (WCMC and WCPA 1997) and the importance ascribed to protected areas as mechanisms for in situ conservation in international strategies and conventions (Glowka et al. 1994), the problems faced by the world's protected areas remain a major concern. These problems can be grouped into three broad categories: (1) threats acting on the natural and cultural resources of the protected area; (2) inadequate resourcing for management; and (3) institutional and capacity problems, including inappropriate policies, poorly functioning management systems or processes, and inadequately trained staff.

Systematic attempts to analyze the threats facing protected areas date back to the early 1980s. In 1984, the IUCN (The World Conservation Union) Commission on National Parks and Protected Areas (CNPPA) prepared a list of threatened protected areas of the world (IUCN CNPPA 1984). The commission acknowledged that the list was incomplete but regarded it as indicative of some of the many threats facing protected areas around the world.






Table 1. Reported threats in the 1994 IUCN review of protected areas.

The table records, for each of 14 regions, which of 21 threat categories were reported, together with the percentage of regions reporting each threat (ranging from 7 percent to 79 percent). The threat categories are: habitat destruction or alteration; inadequate staffing and training; pollution; inadequate funds; inadequate legislation, policy, or administrative arrangements; mining; overharvesting (e.g., timber, wildlife, water); armed conflict or drug cultivation; encroachment; poaching or illegal harvesting; absence of political will; invasive species; inappropriate development; inappropriate use of fire; habitat destruction by wildlife; natural impacts (e.g., drought); isolation; inadequate protected area design; inadequate knowledge base; impacts of adjacent land use; and impacts of tourism.

Regions: ANT/NZ, Antarctica and New Zealand; AUS, Australia; CA, Central America; CAR, Caribbean; EA, East Asia; EUR, Europe; MAR, marine; NA, North America; N Africa/ME, North Africa and Middle East; N Eurasia, northern Eurasia; PAC, Pacific; SA, South America; S/SE Asia, South and Southeast Asia; SSA, sub-Saharan Africa. Source: Data extracted from McNeely and colleagues (1994) and categorized and analyzed by the author.


Table 2. Framework for assessing management effectiveness of protected areas and protected area systems.

Element of evaluation | Explanation | Assessed criteria | Focus of evaluation
Context: Where are we now? | Assessment of importance, threats, and policy environment | Significance, threats, vulnerability, national context | Status
Planning: Where do we want to be? | Assessment of protected area design and planning | Protected area legislation and policy, PA system design, reserve design, management planning | Appropriateness
Input: What do we need? | Assessment of resources needed to carry out management | Resourcing of agency, resourcing of site, partners | Resources
Process: How do we go about it? | Assessment of the way management is conducted | Suitability of management processes | Efficiency and appropriateness
Output: What were the results? | Assessment of the implementation of management programs and actions, delivery of products and services | Results of management actions, services, and products | Effectiveness
Outcome: What did we achieve? | Assessment of the outcomes and the extent to which they achieved objectives | Impacts: effects of management in relation to objectives | Effectiveness and appropriateness

PA, protected area. Source: Hockings and colleagues (2000).

The list, which was based on submissions from members of CNPPA and from the Conservation Monitoring Centre, documented 113 threats to 43 sites. The most commonly reported threats were inadequate management resources (16 sites), human encroachment (13 sites), change in water regime (12 sites), poaching (10 sites), and adjacent land development (10 sites).

In preparation for the 1992 World Congress on National Parks and Protected Areas, CNPPA again used its regional network to prepare a global review of protected areas (McNeely et al. 1994). Data were collected from each region, including information on the status and coverage of protected areas; on management arrangements such as funding, staffing, and institutional structures; and on major threats. Although the report did not include a comparative analysis of threats to protected areas, references to threats within the text have been extracted and are summarized in table 1. While the regional information is not precisely comparable because some questions called for subjective responses (what one CNPPA member considered to be a threat worth mentioning may have been considered less important by another member and omitted), some patterns are evident. Three of the five most commonly reported threats involve management and policy deficiencies, including inadequate legislation, poor administrative practices, and shortages of funding and staff, rather than external impacts on the protected areas.

In 2000, the World Wide Fund for Nature (WWF, formerly World Wildlife Fund) commissioned a study of threats to protected areas (Carey et al. 2000). The study, which included a review of 26 assessments of threats carried out worldwide, found a correlation between undermanaged and threatened protected areas. It concluded that information on management infrastructure and capacity should indicate the likely status of conservation in the protected area or at least the likely degree of threat.

To maintain the value of protected areas, managers need to monitor the effectiveness of their management actions so that they can identify problems and focus their resources and efforts on addressing these problems. In the absence of systematic information on management effectiveness, managers are flying blind—responding only to the clearly visible threats and hoping that standard management approaches and techniques will deliver the outcomes that they seek. To be able to improve protected area management, managers need to better understand the threats facing their protected areas, the impact of those threats on the values of the protected areas, and the effectiveness of their management strategies in preventing and mitigating the threats.

Assessment of management effectiveness

Recognizing the extent of threats facing protected areas and the inability of management agencies to counter these threats, several organizations around the world have developed methodologies for systematically assessing protected area management effectiveness. The term management effectiveness includes three main components of assessment: (1) design issues relating to both individual sites and protected area systems, (2) appropriateness of management systems and processes, and (3) delivery of protected area objectives (Hockings et al. 2000).

In 1997, CNPPA (renamed the World Commission on Protected Areas [WCPA] in 2000) established a Management Effectiveness Task Force to focus attention on the emerging issue of management effectiveness and to look at options for assessment. Rather than develop a single, global system, the task force concentrated on developing a framework, both to provide overall guidance in developing assessment systems and to encourage basic standards for assessment and reporting.

The framework, which is part of the Best Practice Protected Area Guidelines Series (Hockings et al. 2000), provides a structure and process for designing management effectiveness evaluation systems, provides a checklist of issues that need to be assessed, and suggests some possible indicators. The central tenet of the WCPA framework is that management follows a process with six distinct stages or elements. It begins with establishing the context of existing values and threats, progresses through planning and allocation of resources (inputs), and, as a result of management actions (process), eventually produces goods and services (outputs) that result in impacts or outcomes. The framework suggests that systems for assessing management effectiveness should incorporate components covering each of these six elements (see table 2), as they are complementary rather than alternative approaches to assessing management effectiveness. It should be noted, however, that because these components represent a convenient breakdown of the elements of management effectiveness, there are likely to be similarities between elements. For example, an assessment of context, although it is not an analysis of management, provides information that helps put management decisions into context and allows managers to set priorities based on the biological, cultural, and political information gathered. As such, the context assessment can provide the information necessary to develop a management vision, which can be monitored and assessed through the development of management objectives. The combined assessment of the remaining five elements gives a detailed picture of management, providing information that can improve the conservation and management effectiveness of individual protected areas or protected area systems.
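Viewed this way, the framework amounts to a checklist that any assessment design can be tested against. The fragment below is a minimal sketch of that idea in Python; the element names and guiding questions follow table 2, but the function and variable names are illustrative assumptions rather than part of the published framework.

```python
from enum import Enum

class Element(Enum):
    """The six WCPA framework elements and their guiding questions (table 2)."""
    CONTEXT = "Where are we now?"
    PLANNING = "Where do we want to be?"
    INPUT = "What do we need?"
    PROCESS = "How do we go about it?"
    OUTPUT = "What were the results?"
    OUTCOME = "What did we achieve?"

def missing_elements(assessed):
    """Return the framework elements that an assessment design does not cover."""
    return [e for e in Element if e not in assessed]

# Example: a hypothetical design that looks only at outputs and outcomes,
# a common pattern among the monitoring-based systems reviewed below.
design = {Element.OUTPUT, Element.OUTCOME}
print([e.name for e in missing_elements(design)])
# prints ['CONTEXT', 'PLANNING', 'INPUT', 'PROCESS']
```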

A review of management effectiveness systems

In addition to WCPA, other organizations that have been prominent in addressing the management effectiveness issue include WWF, The Nature Conservancy, and the World Bank. A number of methodologies (e.g., the WWF/CATIE [Centro Agronómico Tropical de Investigación y Enseñanza] assessment system [Cifuentes et al. 2000] and The Nature Conservancy's measures of conservation success [TNC 2000]) were developed in parallel with, but separately from, the WCPA framework between 1994 and 2000. More recently, a number of methodologies for the evaluation of management effectiveness have been developed using the WCPA framework (e.g., the WWF Rapid Assessment and Prioritization of Protected Area Management methodology [Ervin 2002] and the IUCN/WWF Forest Innovations Project [Hakizumwami 2000]).

A review of 27 systems (listed in table 3) was carried out, documenting the basis of each methodology and examining how the different methodologies relate to the WCPA framework. (For a brief description of each of these systems, see also Hockings 2000.) The systems reviewed included methodologies that had been applied in the field as well as a number that had been proposed but not yet field-tested. The objective of the review was neither to critically analyze the adequacy and appropriateness of each system nor to review the nature and utility of the results of each system.

Such a review would not have been possible, in most instances, because the only documentation available was a description of the methodology, without any results from pilot studies or applications. Even where results from applications of a methodology were available, it would have been difficult to assess the usefulness of the results without contact and discussion with relevant protected area managers and other stakeholders from the sites, and this was not possible within the scope of the study. Instead, the review examined the approach each system took to assessment and the data collection methods used. The WCPA framework provided the basis for analyzing which of the framework elements were addressed by each system.

In addition, the nature of the data collected by each of the systems was considered. Many of the methodologies relied, in whole or in part, on ratings of various aspects of park management provided by managers and other stakeholders to determine management effectiveness. Such scoring is usually based on the often subjective perceptions of the person allocating the rating. Although there may be considerable guidance on how various scores should be allocated, the knowledge base on which respondents allocate their scores may vary considerably. To supplement simple scores, some assessment systems also ask respondents to provide additional information that helps to explain or qualify each score. In this analysis, all systems relying on qualitative, perception-based data are described as scoring. In contrast, a number of methodologies incorporated monitoring programs to provide quantitative information for assessment of management effectiveness. The information gathered, referred to as monitoring data, was not derived from the perceptions or opinions of managers, stakeholders, or other participants but from the measurement of some aspect of management activity or of the resource and activities being managed. While the natures of qualitative and quantitative data differ, one is not necessarily superior to the other in terms of accuracy or meaning. Both data types are subject to error during collection, and both require interpretation by the researcher. Interpretation, in this context, involves ascribing meaning or significance to any detected change in a parameter, whether it is assessed qualitatively or quantitatively.

Each of the methodologies was analyzed to determine which elements of the WCPA framework were considered and whether the data collected on these elements were derived from monitoring or scoring (see table 4). These data were analyzed using multivariate methods. Cluster analysis was carried out with the PATN (pattern analysis) program (Belbin 1993) to define groups, using a flexible UPGMA (unweighted pair group method with arithmetic mean) fusion strategy (beta = –0.10) based on Czekanowski's asymmetric measure of association. The Czekanowski coefficient is an asymmetric association measure, which is appropriate for these data because it ignores null–null correlations, where two methodologies both lack an attribute, but takes account of positive–positive and positive–null matches (Belbin 1993).

The cluster analysis (figure 1) suggests the existence of two groups of methodologies. Group 1 consists of methodologies that are primarily based on monitoring of framework elements. Group 2 consists of methodologies that are predominantly based on scoring data. Only two methodologies contain both monitoring and scoring data: one of these (M11) uses primarily monitoring data, while the other (M22) has an equal balance of the two types of data. The appropriateness of the defined groups can be examined by ordinating the data and examining the position of group members in component space (Wardell-Johnson and Williams 1996). The ordination was carried out using a Windows version of PATN (Belbin 1993), with semistrong hybrid multidimensional scaling (SSH module of PATN) and the same similarity measure used in the cluster analysis. The results of this ordination are plotted in figure 2. The more similar the methodologies are in their attributes in relation to the WCPA framework, the closer together they appear in two-dimensional space. A number of methodologies have identical attributes (e.g., methodologies M7, M14, M17, and M20) and hence plot at the same point on the graph. The ordination has also separated the methodologies according to whether they are based primarily on monitoring data (methodologies M7, M8, M10, M11, M14, M15, M16, M17, M20, and M23) or on scoring. The minimum spanning tree, which is the shortest set of links required to connect all data points, shows the nearest neighbors of any methodology (Belbin 1993). The ordination and minimum spanning tree confirm that the two groups identified in the cluster analysis are appropriate. The group based on scoring data (group 2) is the more diverse grouping, indicated by the more complex linkages in the minimum spanning tree.
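For readers who want to reproduce this style of grouping with freely available tools rather than PATN, the sketch below outlines an analogous workflow in Python (NumPy, SciPy, and scikit-learn). It is illustrative only: the attribute matrix is randomly generated rather than taken from table 4, the Dice distance stands in for Czekanowski's asymmetric measure (both ignore joint absences), and standard average-linkage (UPGMA) clustering and nonmetric multidimensional scaling stand in for PATN's flexible UPGMA (beta = –0.10) and semistrong hybrid scaling.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.sparse.csgraph import minimum_spanning_tree
from sklearn.manifold import MDS

# Hypothetical binary attribute matrix: one row per methodology, one column per
# element-by-data-type attribute (six elements, each split into monitoring and
# scoring). The real matrix is table 4; random values are used here purely to
# make the sketch runnable.
rng = np.random.default_rng(0)
attributes = rng.integers(0, 2, size=(27, 12)).astype(bool)
labels = [f"M{i}" for i in range(1, 28)]

# Dice distance ignores attributes that two methodologies both lack, analogous
# to an asymmetric (joint-absence-ignoring) association measure.
d = pdist(attributes, metric="dice")

# Average-linkage (UPGMA) clustering; cut the dendrogram into two groups.
dendrogram = linkage(d, method="average")
groups = fcluster(dendrogram, t=2, criterion="maxclust")

# Nonmetric MDS ordination of the same dissimilarities in two dimensions.
ordination = MDS(n_components=2, dissimilarity="precomputed", metric=False,
                 random_state=0)
coords = ordination.fit_transform(squareform(d))

# Minimum spanning tree: the shortest set of links joining all methodologies,
# used to check each methodology's nearest neighbors.
mst = minimum_spanning_tree(squareform(d)).toarray()
print(f"{int((mst > 0).sum())} links in the minimum spanning tree")

for name, group, (x, y) in zip(labels, groups, coords):
    print(f"{name}: group {group}, ordination position ({x:.2f}, {y:.2f})")
```

Dice and UPGMA are used here because they are the closest widely available analogues of the measures reported for PATN; any comparable asymmetric binary association measure could be substituted.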

Figure 1. Cluster analysis of methodologies showing classification into two distinctive groups. Group 1 consists of methodologies that are based primarily on monitoring of framework elements (quantitative data). Group 2 consists of methodologies that are primarily based on scoring (qualitative) data.

Figure 2. Ordination of attributes of assessment methodologies. The closer two methodologies (represented by numbers M1 to M27) are, the more similar they are in relation to the World Commission on Protected Areas framework. Methodologies in group 1 are primarily based on monitoring (quantitative) data; those in group 2 are primarily based on scoring (qualitative) data.

Discussion

The distinction between methodologies based on monitoring data and those based on scoring data is of particular interest because it reflects very different approaches to assessing management effectiveness, raising the question of what constitutes "truth" in relation to evaluation information. Patton (1990) describes this distinction: "One of the paradigm-related beliefs that affect how people react to qualitative data is how people think about the idea of truth.... The idea that there is a singular material reality, and therefore, propositions are ultimately true or false, is associated with logical positivism. The idea that what is true depends on one's perspective, and is, therefore, inherently definitional, situational, and internal, is associated with phenomenology" (pp. 482–483).

Table 3. Methodologies for assessing management effectiveness of protected areas.

Methodologies applied in the field:
M2: IUCN/UNEP—Review of Protected Area Systems [1986] (MacKinnon and MacKinnon 1986a, 1986b)
M3: WWF Canada—Endangered Spaces Progress Report [1988] (WWF Canada 1998)
M4: IIPA—Status Report on Management of National Parks and Sanctuaries in India (Kothari et al. 1989)
M5: The Nature Conservancy—Parks in Peril Scorecard (TNC 1999)
M8: JNCC, United Kingdom—Common Standards for Monitoring Sites of Special Scientific Interest (Rowell 1993)
M9: WWF/CATIE—Management Effectiveness Assessment Methodology [1993] (Cifuentes et al. 2000)
M10: Tasmanian Wilderness World Heritage Area Management Evaluation [1994] (Hocking 1994, Jones 2000)
M11: Fraser Island World Heritage Area Monitoring and Evaluation Program [1994] (Hockings 1998, Hockings and Hobson 2000)
M15: Countryside Commission—Park Information Management System [1996] (Briggs et al. 1996)
M18: The Nature Conservancy—PROARCA CAPAS Monitoring Strategy for Protected Areas in Central America [1999] (Courrau 1999)
M19: WWF/MINEF Evaluation of Protected Area Management in Cameroon (Culverwell 1997)
M20: Jenolan Caves Social and Environmental Monitoring Program (MRC 1995)
M21: WWF Brazil—Protected Areas or Endangered Spaces? [1998] (Ferreira et al. 1999)
M22: The Nature Conservancy—Measures of Conservation Success [1998] (TNC 2000)
M25: ACIUCN—Review of Management of Great Barrier Reef World Heritage Area [1999] (ACIUCN 1999)
M26: IUCN/WWF Forest Innovations Project—Central African Case Study [1999] (Hakizumwami 2000)
M27: WWF International—Rapid Assessment and Prioritization of Protected Area Management [1999–2000] (Ervin 2002)

Methodologies proposed or developed but not applied in the field:
M1: Bali Parks Congress Proposal [1982] (Thorsell 1982)
M6: Caracas Parks Congress Proposal [1991] (Foster 1991)
M14: WTMA—Regional Monitoring Program for the Wet Tropics of Queensland World Heritage Area [1995] (Chrome 1995)
M17: ETCNC—Monitoring the Condition and Biodiversity Status of European Conservation Sites [1997] (Shaw and Wind 1997)

Other studies and proposals relevant to assessing management effectiveness:
M7: US Department of the Interior, National Park Service—Long-Term Inventory and Monitoring Program for National Park System Lands (Silsbee and Peterson 1991)
M12: QDPI—Sound Practice Indicators for Visitor Site Management on State Forests (QDPI 1994)
M13: Corbett—Evaluation of Management Effectiveness of Biosphere Reserves [1994] (Corbett 1994)
M16: Manidis Roberts—Tourism Optimisation Management Model
M23: Great Barrier Reef Marine Park Authority—State of the Great Barrier Reef World Heritage Area (Wachenfeld et al. 1998)
M24: Parks Canada—Panel on Ecological Integrity of Canada's National Parks (Parks Canada 2000)

ACIUCN, Australian Committee for IUCN; CAPAS, Central American Protected Areas System; CATIE, Centro Agronómico Tropical de Investigación y Enseñanza; ETCNC, European Topic Centre on Nature Conservation; IIPA, Indian Institute of Public Administration; IUCN, The World Conservation Union (formerly the International Union for Conservation of Nature and Natural Resources); JNCC, Joint Nature Conservation Committee; MINEF, Ministere de l'Environnement et des Forêts; PROARCA, Programa Ambiental Regional para Centroamérica; QDPI, Queensland Department of Primary Industries; UNEP, United Nations Environment Programme; WTMA, Wet Tropics Management Agency; WWF, World Wide Fund for Nature (formerly World Wildlife Fund). Note: The code number assigned to each methodology is used to refer to methodologies in later analysis and discussion. Dates when a system was initially developed (where known) are indicated in brackets.

on one’s perspective, and is, therefore, inherently definitional, situational, and internal, is associated with phenomenology” (pp. 482–483). It may appear on the surface that the more quantitative data from monitoring programs are likelier to be closer to the truth than more qualitative information based on respondents’ perceptions. However, the subjective responses of protected 828 BioScience • September 2003 / Vol. 53 No. 9

area managers are likely to be based on years of field-level experience, and these responses may better capture the realities and complexities of the protected area than many monitoring programs. Weiss and Bucuvalas (1980) have argued that decisionmakers apply both “truth” tests (tests of whether the data are both accurate and believable rather than “true” in an absolute sense) and “utility” tests (tests of whether the data

Articles can help improve management) in judging evaluation information. Patton (1997) has also argued that practical utility is more relevant than absolute truth in assessing evaluation results. In this sense, scoring methodologies may reveal useful insights into management issues and management effectiveness, even though they are based more on perception than on concrete data. Because data collection through monitoring can require significant investment of staff resources and funds, collection of these data requires a long-term commitment to the assessment program. Not surprisingly, the 11 methodologies that include monitoring data are all designed as long-term assessment programs. They are also all site-based rather than

system-focused approaches. In contrast, a number of methodologies are designed to provide rapid results and are not dependent on an extended period of data collection. These rapid assessment systems (e.g., methodologies M2, M4, M9, M21, M25, M26, and M27) must, by their nature, rely largely on scoring or other qualitative data collection systems for their information. They tend to compare multiple protected areas or examine whole systems rather than focus on individual sites, although there are some approaches, such as M25, that are sitefocused. Only seven of the methodologies reviewed in this study considered more than three of the six framework elements. Apart from the Fraser Island case study (M11) and The Nature

Table 4. Attributes of 27 assessment methodologies in relation to World Commission on Protected Areas framework.

For each methodology (M1–M27, as listed in table 3), the table records which of the six framework elements (context, planning, input, process, output, and outcome) are assessed and whether the data collected on each element are derived from monitoring (M) or from scoring (S).

Note: M indicates quantitative data derived from monitoring; S indicates qualitative data derived from scoring. Other abbreviations: ACIUCN, Australian Committee for IUCN; CAPAS, Central American Protected Areas System; CATIE, Centro Agronómico Tropical de Investigación y Enseñanza; ETCNC, European Topic Centre on Nature Conservation; IIPA, Indian Institute of Public Administration; IUCN, The World Conservation Union (formerly the International Union for Conservation of Nature and Natural Resources); JNCC, Joint Nature Conservation Committee; MINEF, Ministere de l'Environnement et des Forêts; PROARCA, Programa Ambiental Regional para Centroamérica; QDPI, Queensland Department of Primary Industries; UNEP, United Nations Environment Programme; WTMA, Wet Tropics Management Agency; WWF, World Wide Fund for Nature (formerly World Wildlife Fund).

Conservancy’s measures of conservation success (M22), all the other methodologies that included four or more framework elements relied entirely on scoring for data collection. Only two of the group 1 (monitoring) methodologies considered framework elements other than outputs and outcomes. In some cases, the documentation for the methodology justified this focus on outputs and outcomes, stating that outcomes or results were the most meaningful measure of management performance (e.g., Chrome 1995, MRC 1995, Alexander and Rowell 1999, Jones 2000). In other cases (e.g., Silsbee and Peterson 1991, Shaw and Wind 1997), this concentration on outcomes probably reflects the assessment objective of focusing on the status of the natural resource rather than on the status of management. 830 BioScience • September 2003 / Vol. 53 No. 9

The methodologies in group 2 (scoring) are more diverse in terms of the range of framework elements they contain. Some, such as M24 and M25, only collect data on outcomes, which may reflect the belief that outcomes are the most meaningful measure of performance. In many cases, it is likely that methodologies only consider a limited range of elements because they were designed with a particular focus in mind (e.g., M5, M6, and M18 only consider internal management processes) and without a conceptual framework that links evaluation to the whole process of management. Of the seven methodologies that considered four or more framework elements, three were developed using the WCPA framework as the design tool (M11, M26, M27) and four were independently derived. The only methodology to use all six

Articles elements (M27) was developed from the WCPA framework. One of the strengths of using the WCPA framework to develop and analyze assessment methodologies is that it directs attention to the range of evaluation information that could be collected and demonstrates how this information can be linked to provide a rich explanatory picture of management effectiveness. The framework could be used to adapt and expand existing methodologies or to design new, more comprehensive systems, using monitoring data, scoring data, or, most likely, a combination of both. As demonstrated by the proliferation of new systems reported here, the management effectiveness of protected areas—a relatively obscure issue 10 years ago—has emerged into a dynamic new subfield of protected area management. Challenges for the future will include promoting widespread acceptance of management effectiveness assessments in protected areas (and of the tools that have been developed to deliver them) and establishing monitoring and evaluation as integral activities within protected area management. The test of the usefulness of this work will come from its ability to deliver improved management on the ground.

Acknowledgments

Professor Ciaran O'Faircheallaigh from Griffith University stimulated my interest in questions relating to the meaning of "truth" in evaluation. My colleagues Alan Lisle and Grant Wardell-Johnson, from the University of Queensland, assisted with multivariate analysis. This work was supported, in part, by a grant from the WWF/IUCN Forest Innovations Project.

References cited

[ACIUCN] Australian Committee for IUCN. 1999. Great Barrier Reef World Heritage Area: Condition, Management and Threats. Sydney: ACIUCN.
Alexander M, Rowell TA. 1999. Recent developments in management planning and monitoring on protected sites in the United Kingdom. Parks 9 (2): 50–55.
Belbin L. 1993. PATN Pattern Analysis Package. Lyneham (Australia): CSIRO.
Briggs D, Tantram D, Scott P. 1996. Improving Information for Management and Decision Making in National Parks. Northampton (United Kingdom): Nene Centre for Research.
Carey C, Dudley N, Stolton S. 2000. Squandering Paradise? The Importance and Vulnerability of the World's Protected Areas. Gland (Switzerland): World Wide Fund for Nature.
Chrome F. 1995. A Regional Monitoring Program for the Wet Tropics of Queensland World Heritage Area. Cairns (Australia): Wet Tropics Management Agency.
Cifuentes M, Izurieta A, de Faria H. 2000. Measuring Protected Area Management Effectiveness. Turrialba (Costa Rica): IUCN/WWF (World Wide Fund for Nature) Forest Innovations Project, WWF Centroamerica.
Corbett M. 1994. An evaluation of the coverage and management effectiveness of biosphere reserves. Paper presented for IUCN at the International Conference on Biosphere Reserves; March 20–25, 1994, Sevilla, Spain.
Courrau J. 1999. Strategy for monitoring the management of protected areas in Central America. Programa Ambiental Regional para Centroamérica, Central American Protected Areas System, Comisión Centroamericana de Ambiente y Desarrollo, US Agency for International Development. (7 August 2003; www.iucn.org/themes/wcpa/theme/effect/publications.htm)

Culverwell J. 1997. Long-Term Recurrent Costs of Protected Area Management in Cameroon: Monitoring of Protected Areas, Donor Assistance and External Financing, Ecological and Management Priorities of Current and Potential Protected Area System. Younde (Cameroon): WWF Cameroon, Ministere de l'Environnement et des Forêts.
Ervin J. 2002. WWF Rapid Assessment and Prioritization of Protected Area Management (RAPPAM) Methodology. Gland (Switzerland): World Wide Fund for Nature.
Ferreira L, Lemos de Sá R, Buschbacher R, Batmanian G, Bensusan N, Costa K. 1999. WWF Brazil: Protected Areas or Endangered Spaces? WWF Report on the Degree of Implementation and the Vulnerability of Brazilian Federal Conservation Areas. Brazil: World Wide Fund for Nature.
Foster J. 1991. An International Review System for Categorising Protected Areas, Their Management Effectiveness and Threats to Them. Gland (Switzerland): IUCN.
Glowka L, Synge H, Burhenne-Guilmin F. 1994. A Guide to the Convention on Biological Diversity. Gland (Switzerland): IUCN.
Hakizumwami E. 2000. Protected Areas Management Effectiveness Assessment for Central Africa. Gland (Switzerland): IUCN/WWF Forest Innovations Project.
Hocking H. 1994. Tasmanian Wilderness World Heritage Area management plan: A framework for monitoring and evaluation. Paper presented to the Tasmanian Parks and Wildlife Service, Department of Primary Industries, Water and Environment, Hobart, Tasmania.
Hockings M. 1998. Evaluating management of protected areas: Integrating planning and evaluation. Environmental Management 22: 337–345.
———. 2000. Evaluating Protected Area Management: A Review of Systems for Assessing Management Effectiveness of Protected Areas. Lawes (Australia): University of Queensland, School of Natural and Rural Systems Management.
Hockings M, Hobson R. 2000. Fraser Island World Heritage Area Monitoring and Management Effectiveness Project Report. Brisbane (Australia): University of Queensland.
Hockings M, Stolton S, Dudley N. 2000. Evaluating Effectiveness: A Framework for Assessing the Management of Protected Areas. Gland (Switzerland): IUCN.
[IUCN CNPPA] IUCN Commission on National Parks and Protected Areas. 1984. Threatened Protected Areas of the World. Morges (Switzerland): IUCN.
Jones G. 2000. Outcomes-based evaluation of management for protected areas—a methodology for incorporating evaluation into management plans. Pages 349–358 in The Design and Management of Forest Protected Areas: Papers Presented at the Beyond the Trees Conference, 8–11 May 2000, Bangkok, Thailand. Gland (Switzerland): World Wide Fund for Nature.
Kothari A, Pande P, Singh S, Variava D. 1989. Management of National Parks and Sanctuaries in India: A Status Report. New Delhi (India): Environmental Studies Division, Indian Institute of Public Administration.
MacKinnon J, MacKinnon K. 1986a. Review of the Protected Areas System in the Afrotropical Realm. Gland (Switzerland): IUCN, United Nations Environment Programme.
———. 1986b. Review of the Protected Areas System in the Indo-Malayan Realm. Gland (Switzerland): IUCN, United Nations Environment Programme.
McNeely JA, Harrison J, Dingwall P. 1994. Protecting Nature: Regional Reviews of Protected Areas. Gland (Switzerland), Cambridge (United Kingdom): IUCN.
[MRC] Manidis Roberts Consultants. 1995. Determining an Environmental and Social Carrying Capacity for Jenolan Caves Reserve. Surrey Hills (Australia): MRC.
Parks Canada. 2000. Unimpaired for Future Generations: Protecting Ecological Integrity with Canada's National Parks. Ottawa (Canada): Parks Canada.
Patton MQ. 1990. Qualitative Evaluation and Research Methods. Newbury Park (CA): Sage Publications.
———. 1997. Utilization-Focused Evaluation: The New Century Text. Thousand Oaks (CA): Sage Publications.


[QDPI] Queensland Department of Primary Industries. 1994. Sound Practice Indicators for Visitor Site Management on State Forests: Policy and Guidelines. Brisbane (Australia): Forest Service, QDPI.
Rowell TA. 1993. Common Standards for Monitoring SSSIs. Peterborough (United Kingdom): Joint Nature Conservation Committee.
Shaw P, Wind P. 1997. Monitoring the condition and biodiversity status of European conservation sites. Paper presented to the European Environment Agency on behalf of the European Topic Centre on Nature Conservation, Paris.
Silsbee DG, Peterson DL. 1991. Designing and Implementing Comprehensive Long-Term Inventory and Monitoring Programs for National Park System Lands. Washington (DC): US Department of the Interior, National Park Service.
Thorsell JW. 1982. Evaluating effective management in protected areas: An application to Arusha National Park, Tanzania. Gland (Switzerland): IUCN Commission on National Parks and Protected Areas.
[TNC] The Nature Conservancy. 1999. Measuring Success: The Parks in Peril Consolidation Scorecard Manual. Arlington (VA): TNC, Latin America and Caribbean Region.


———. 2000. The Five-S Framework for Site Conservation: A Practitioner's Handbook for Site Conservation Planning and Measuring Conservation Success. Arlington (VA): TNC.
Wachenfeld D, Oliver J, Morrissey J, eds. 1998. State of the Great Barrier Reef World Heritage Area 1998. Townsville (Australia): Great Barrier Reef Marine Park Authority.
Wardell-Johnson G, Williams M. 1996. A floristic survey of the Tingle Mosaic, south-western Australia: Applications in land use planning and management. Journal of the Royal Society of Western Australia 79: 249–276.
[WCMC and WCPA] World Conservation Monitoring Centre and IUCN World Commission on Protected Areas. 1997. United Nations List of Protected Areas. Gland (Switzerland): IUCN.
Weiss CH, Bucuvalas M. 1980. Social Science Research and Decision Making. New York: Columbia University Press.
Wright RG, Mattson DJ. 1996. The origin and purpose of national parks and protected areas. Pages 3–14 in Wright RG, ed. National Parks and Protected Areas: Their Role in Environmental Protection. Cambridge (MA): Blackwell Science.
[WWF Canada] World Wildlife Fund Canada. 1998. Endangered Spaces Progress Report 1997–98. Ontario: WWF Canada. Report no. 8.