
Performance Indicators a tool to Support Spatial Data Infrastructure assessment

Garfield A. Giff a,*, Joep Crompvoets b

a OTB Research Institute, Delft University of Technology, Jaffalaan 9, Delft, 2628 BX, The Netherlands
b Wageningen University, Wageningen, The Netherlands

Article info

Article history: Received 10 August 2007; Received in revised form 11 August 2008; Accepted 11 August 2008; Available online xxxx

Keywords: SDI assessment; Performance Indicators (PIs); Framework for PIs

Abstract

Funding the reengineering and recapitalisation of Spatial Data Infrastructures (SDIs) will, in part, depend on the ability of SDI component coordinators to comprehend and report on the performance of their initiatives. This is important for both aspects: for reengineering, the performance of an infrastructure can only be improved if it is measured; for recapitalisation, attracting new investment is a function of past, present, and projected performance. Therefore, it is imperative that SDI coordinators develop metrics to evaluate and report on performance. With this in mind, the researchers embarked on an initiative to define SDI assessment and recommend tools to assist in measuring SDIs' performance. The paper presents key findings of the research, which include an overview of a structured concept of SDI assessment. Within this concept, a Multi-View Framework is delivered for classifying the different assessment methodologies identified according to the purpose of the SDI assessment. This is followed by an in-depth analysis of Performance Indicators (PIs) as a possible tool to assist in the measuring and reporting of an SDI's performance. The paper then presents and critically analyzes a Framework to guide SDI coordinators in the intricate task of designing PIs for their initiatives.

© 2008 Elsevier Ltd. All rights reserved.

1. Introduction

The Spatial Data Infrastructure (SDI) community has long proclaimed the benefits to be gained from the implementation of SDIs throughout the information society. However, to date there are no structured methodologies in place to effectively assess these benefits and thus justify the resources expended on SDIs. In addition, the need for SDI assessment is growing due to changes in government fiscal policies, more market-oriented economies, and the maturity of first generation SDIs. The sum effect is that governments—the main financiers of SDIs—are now demanding that methodologies be put in place to justify the implementation of SDIs before additional funds can be accessed (Stewart, 2006). Assessing the performance of an SDI to justify its existence—accountability assessment—is a very complex, intriguing and non-linear task. This is because SDIs are by nature complex, with multifaceted components, numerous stakeholders and monopolistic tendencies, and therefore will have complex performances (Lawrence, 1998; Rajabifard, 2002). SDI assessment will therefore require methodologies capable of engaging its different integrated features, giving rise to the concept of a Multi-View Assessment Framework.

* Corresponding author. Tel.: +31 (0) 15 27 81730; fax: +31 (0) 15 27 82745. E-mail addresses: [email protected] (G.A. Giff), [email protected] (J. Crompvoets).

An accountability assessment of an SDI is achievable by evaluating performance through the relationship amongst inputs, outputs and outcomes (Giff, 2006; Lawrence, 1998; Steudler, 2003, chap. 14). However, the success of this technique depends on having in place good quality metrics—Performance Indicators—to provide performance information pertaining to the SDI's outputs, outcomes and impacts with respect to its inputs and objectives. For Performance Indicators (PIs) to provide precise and accurate performance information, they should be designed and implemented within a performance management system. One such management system is the Performance-Based Management (PBM) style. This is a proven procedure that supports the creation of a more favourable environment for infrastructure evaluation and therefore may be applicable to SDI assessment (GSA, 2000). The paper sets out to improve the awareness of the Geo-Information community of the need for assessing SDIs and the role well-designed PIs can play in assessment. This will be achieved through the exploration of the concept of SDI assessment, followed by the introduction of the PBM style. The paper then focuses on PIs as a possible tool to assist in SDI accountability assessment. The focus then shifts to the development of a Conceptual Framework to assist in the design of SDI PIs. Case studies on the application of the Framework are then presented and the paper closes with conclusions on the results.



2. The concept of assessment

Assessment is a natural activity for human beings, as most people are inclined to evaluate an activity carefully before deciding on a course of action. Similarly, managers will often have to demonstrate that decisions made were rational and that objectives were achieved. This type of justification can be attained and demonstrated through an assessment (Georgiadou, Rodriguez-Pabón, & Lance, 2006; Giff, 2006; McNamara, 1999). Authors (e.g., Chelimsky, 1997; Scriven, 1983) have generally grouped assessment into three distinctive categories—Accountability Assessment, Developmental Assessment, and Knowledge Measurement—to clarify the concept. These categories, which are loosely based on the purpose of the assessment, are also valid for SDI assessment, since in today's neoliberal society there is a strong demand to assess SDIs for the following socio-economic reasons, similar to those proposed by Scriven (1983) and Chelimsky (1997):

1. Accountability—The justification and monitoring of SDIs in a systematic manner through the relationships between the results (outputs, outcomes and impacts) and the investments/efforts committed to SDI initiatives. Accountability assessments are used as a guide to recapitalisation and reengineering.

2. Development—Usually carried out to measure and recommend changes in the activities and development of an SDI. Facilitates continuous monitoring of SDI implementation processes to ensure effective maintenance. The results are used as a guide for reengineering.

3. Knowledge creation—For the purpose of better understanding the paradigms, mechanisms, forces, rules and socio-political issues involved in the implementation and maintenance of SDIs. Facilitates the development of new theories to improve on functions and implementation strategies.

3. Assessment of SDIs

Assessment research has already received considerable attention in the Geo-Information community (Fig. 1). However, it is still in its infancy and remains problematic, mainly because of the complex, multifaceted, and dynamic nature of an SDI. The complexity of an SDI refers to the fact that an SDI cannot be understood only in terms of the summation of its components (inclusive of stakeholders); an SDI as a whole produces a value far greater than the sum of the values of its individual components (De Man, 2006; Delgado Fernández & Crompvoets, 2007; Taylor-Powell, 1999). Therefore, a comprehensive assessment of an SDI

[Fig. 1 summarises the Multi-View Framework. The purpose of the assessment (for whom, for what, for why?) falls into three categories—Accountability (Recapitalisation), Development (Reengineering) and Knowledge Acquisition (Capacity building)—each served by a recommended view of the multi-view approach. View 1 (accountability): a. Performance Indicators; b. Evaluation Indicators; c. Performance Measures. View 2 (development): a. SDI Readiness; b. Clearinghouse Suitability; c. INSPIRE State of Play; d. Monitoring INSPIRE. View 3 (knowledge acquisition): a. Performance Indicators (Giff, 2006); b. Evaluation Indicators (Steudler, 2003); c. Performance Measures (Lance, 2007); d. SDI Readiness (Delgado et al., 2005); e. Organisational (Kok and van Loenen, 2005); f. INSPIRE State of Play (SADL, 2005); g. Generational (Rajabifard et al., 2006); h. Clearinghouse Suitability (Crompvoets, 2006); i. Monitoring INSPIRE (Vandenbroucke, 2007). The three views respectively yield accountability knowledge for policy makers/managers, development knowledge for managers/operational staff, and general knowledge for researchers/scientists.]

Fig. 1. Multi-View Framework for SDI assessment. (See above-mentioned references for further information (Crompvoets, 2006; Delgado et al., 2005; Giff, 2006; Kok and Van Loenen, 2005; Lance, 2007; Rajabifard et al., 2006; SADL, 2005; Steudler, 2003; Vandenbroucke, 2007).)


goes beyond measuring the performance of its individual components to the higher level of measuring the results of the integration of those components. The multifaceted nature points to the fact that SDIs can be viewed from different perspectives in order to understand the holistic concept. This implies that SDIs can serve multiple purposes and thus can be ascribed many different definitions based on those purposes. The nature of an SDI is also very dynamic due to the changes—evolving and diverse demands, evolution in technology, and socio-political adjustments—occurring in the implementation environments (Crompvoets, Bregt, Rajabifard, & Williamson, 2004). Since SDIs are complex and constantly evolving, an assessment of an SDI should be capable of dealing with these characteristics (Grus, Bregt, & Crompvoets, 2006). This was the main conclusion of the 'Multi-View Framework to assess (National) Spatial Data Infrastructures' workshop organised by Wageningen University, The Netherlands (January 2006) (see Crompvoets & Grus, 2007, for details). Based on this conclusion, the workshop recommended the development of a set of tools and assessment techniques capable of capturing the complex, multifaceted and dynamic nature of an SDI during the assessment process. This recommendation is supported by the work of Rogers (2005) and De Man (2006), who also recommend the use of multiple assessment approaches when evaluating complex programs. In general, this technique promotes more complete, less biased and more holistic assessment results. Therefore, for an SDI assessment framework to be effective it should support multiple approaches and serve multiple purposes: a Multi-View Framework. A Multi-View Framework should be constructed in such a manner that it facilitates the selection of the best approach based on the purpose (who and what) of the assessment and the objectives of the SDI. Work to date by the joint research group of the universities of Wageningen, Delft, and Melbourne has resulted in the development of a Multi-View Framework that includes nine different approaches (Fig. 1). Fig. 1 illustrates the suitability of the nine assessment methodologies reviewed at the Wageningen workshop to the three categories of assessment recommended by researchers. The figure also goes one step further by identifying key aspects of the methodologies that are more applicable to particular categories of assessment. In the figure—moving from right to left—the application of the approaches becomes more specific, producing more detailed (quantifiable) results. As the applications become more specific, selected components of the methodologies (e.g., application of PIs from the PBM methodology to accountability assessment) are required to produce the results.

3.1. SDI assessment for accountability

A consequence of the maturity of first generation SDIs is that they require, or soon will require, reengineering and recapitalisation to transform them into SDIs capable of providing the services demanded by current and future users (Giff, 2006). Although some first generation SDIs may be achieving their current goals and objectives, transformation is still necessary to improve their efficiency and effectiveness (see Masser, 1998, 1999, 2005 for the first generation SDI concept). That is, these SDIs may require, for example, technological upgrades, more effective policies, and more current GI to better support their objectives (e.g., current e-government initiatives, informed decision-making, sustainable development, more flexible land administration systems, national security and GI commerce). Therefore, SDIs must be assessed against these objectives and performance information on these objectives reported to the financiers.


The terms 'recapitalisation' and 'reengineering' in this paper refer, respectively, to the additional funds necessary to finance the improvement and maintenance of an SDI, and to the upgrading of the components, processes and functions of an SDI. Reengineering, and thus recapitalisation, of first generation SDIs in today's stringent economic and political climate will require that these infrastructures be assessed in terms of accountability (i.e., efficiency and effectiveness) (Giff, 2006). An efficiency assessment refers to the evaluation of SDIs to determine whether they are achieving their objectives in the most economical manner. Effectiveness, on the other hand, refers to the evaluation of SDIs to determine whether they are achieving their goals and having the predicted impact on society. Information from an accountability assessment will provide stakeholders with a schema of the different aspects of their SDIs that may require redesigning. The challenge, however, is for SDI coordinators to develop assessment techniques capable of providing and adequately reporting performance results to financiers, stakeholders and, where applicable, the public. One technique for achieving this is to implement an SDI within the context of a Performance-Based Management style that possesses tools capable of assisting in accountability assessments of different aspects of an SDI, as well as of the SDI as a whole.

4. The application of Performance-Based Management to SDI assessment

Performance-Based Management (PBM) is one technique that facilitates the operation of an infrastructure in such a manner that its strengths and weaknesses are constantly identified, analyzed and managed (GSA, 2000). PBM SIG (2001a) defines PBM as: "…a systematic approach to performance improvement through an ongoing process of establishing strategic performance objectives; measuring performance; collecting, analyzing, reviewing, and reporting performance data; and using that data to drive performance improvement."

The definition indicates that there are at least six predominant processes involved in PBM, which facilitate assessment in a systematic and iterative manner (Fig. 2) (Environment Canada, 2000). The information gained from each process is used to constantly improve the quality of the program, as well as to justify continued investment. In Fig. 2, the first process involves the definition of mission, goals, and objectives in measurable terms. In the second process, an analysis of the infrastructure is carried out to identify and highlight the areas that are most crucial to understanding and measuring its success. This is necessary since it may not be possible or practical to measure all aspects. The third process—a key aspect of the application of PBM to accountability assessment—will be the main concern of this paper. Here, what is to be measured and how it is to be measured is decided. A significant outcome of this process is the development of metrics—Performance Indicators—to provide performance information to assist in determining the success or failure of the project. The fourth process involves the development of policies and systems to efficiently and effectively collect performance information. In process five, the performance information collected is analyzed, reviewed, and communicated to the decision-makers. Process six, the final process, is the application of the performance information to the improvement of the infrastructure. For more in-depth reading on PBM and the benefits it offers to SDI assessment, see Giff (2006), Hale (2003), NPR (1997) and PBM SIG (2001a).


[Fig. 2 depicts the six key processes of the PBM style as a cycle: Process 1—defining the organisation's mission, goals, and objectives; Process 2—identifying key performance areas (i.e., areas critical to the success of the assessment); Process 3—developing an integrated performance measuring system; Process 4—developing data collection system(s) and methodology(s); Process 5—analysing, reviewing and communicating performance; Process 6—applying performance information to decision-making and system improvement.]

Fig. 2. Six key processes of PBM style (adopted from NPR, 1997 and PBM SIG, 2001a).
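As an illustration of the cycle in Fig. 2, the following minimal Python sketch (the function, parameter names and data structures are hypothetical and not drawn from NPR, 1997 or PBM SIG, 2001a) expresses one pass through the six processes, with processes 4–6 repeating so that performance information feeds back into the objectives:

# Illustrative sketch only: one pass through the six PBM processes of Fig. 2.
# All names and data structures here are hypothetical.

def pbm_cycle(objectives, collect, analyse, improve, iterations=3):
    """Run a few iterations of the PBM loop.

    objectives : list of dicts describing measurable objectives   (process 1)
    collect    : callable returning raw performance data          (process 4)
    analyse    : callable turning raw data into a report          (process 5)
    improve    : callable applying the report to the programme    (process 6)
    """
    key_areas = [o for o in objectives if o.get("critical")]      # process 2
    indicators = [o["indicator"] for o in key_areas]               # process 3
    for _ in range(iterations):
        data = collect(indicators)                                  # process 4
        report = analyse(data)                                      # process 5
        objectives = improve(objectives, report)                    # process 6
    return objectives

The point of the sketch is simply that the later processes repeat, feeding performance information back into the objectives; this is the iterative behaviour the figure conveys.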

5. Performance Indicators

A key aspect of PBM is the use of indicators as a yardstick to measure performance. Indicators used in an assessment exercise of this nature are normally called Performance Indicators (PIs) and may be defined as: "…the measurement of a piece of important and useful information about the performance of a program expressed as a percentage, index, rate or other comparison which is monitored at regular intervals and is compared to one or more criterion." (OPM, 1990)

Based on the above definition, a PI is a metric that measures the degree to which key functions (objectives) of an organisation are being achieved. PIs are usually developed with respect to an organisation's targets (i.e., goals/objectives) and possess the general characteristics of any information along with their own unique characteristics that enable them to effectively communicate performance (Long, 1989). Two characteristics that make them suitable for assessing infrastructure—specifically outputs, outcomes and impacts—are their quantitative and qualitative features (Environment Canada, 2000; WHO, 2000). A quantitative PI comprises a numeric value that provides magnitude (how much), and a unit of measure that gives the numeric value meaning (TRADE, 1995). In addition, a quantitative PI can be single dimensional (e.g., meters or dollars) or multidimensional (e.g., a ratio). Single dimensional PIs are usually used to compare or track very basic functions of an organisation, while multidimensional PIs are used for more complex components. For example, the unit applied to PIs measuring cost savings in environmental management due to greater accessibility of Geo-Information could be dollars/euros/sterling per map file downloaded. Qualitative PIs are usually used to measure the socio-political outcomes or impacts of a program (e.g., users' satisfaction with land administration information facilitated by the SDI). However, although the outcomes or impacts of these programs are usually qualitative, quantitative information is required by funding agencies in order to ensure that informed decisions are made regarding investment in public goods (CMIIP, 1995; WHO, 2000). This is because quantitative PIs support comparative analysis; hence the need to quantify qualitative PIs (CMIIP, 1995; Lawrence, 1998). This paradigm is important to SDI assessment, as the outcomes and impacts of SDIs are generally qualitative in nature. The quantification of qualitative PIs is a complex task, which is very dependent on the process or processes to be measured. The generic transformation is

usually to employ some form of range or ranking scale, which is not always suitable for infrastructure evaluation; therefore, more quantifiable, economically sound methodologies need to be developed (Deloitte & Touche, 2002; NCIAP, 2004; WHO, 2000). In addition to being quantifiable, good PIs should have the following characteristics (CHN, 2001; PSMO, 1997; WHO, 2000):

- Specific—Clearly defined and easily understood.
- Measurable—Quantifiable in order to facilitate comparison with other data.
- Attainable/feasible—Practical, achievable, and cost-effective to implement.
- Relevant—A true representation of the functions they intend to measure.
- Timely and free of bias—Information collected should be available within a reasonable time-frame, impartially gathered, and impartially reported.
- Verifiable and statistically valid—Scientifically sound, with possibilities to check the accuracies.
- Unambiguous—A change in an indicator should result in a clear and unambiguous interpretation. For example, it should be clear whether an increase in the value of a PI represents an improvement or a deterioration.
- Comparable—Information should show changes in the process over time.

In general, PIs with SMART characteristics that are designed to measure key processes or functions within an organisation are classified as Key Performance Indicators (KPIs) (OAGA, 1999; PBM SIG, 2001b). KPIs are those PIs that are used to measure the critical success factors of an organisation (Burby, 2005; PSMO, 1997; Reh, 2005). That is, KPIs are likely to provide more detailed performance information about key functions/processes than general PIs (PROMATIS, 2006). Although PIs have their drawbacks when it comes to measuring the qualitative aspects of an SDI, their other useful qualities do make them applicable to SDI assessment. However, for PIs (from here on the term PIs refers to both general PIs and KPIs) to have a significant impact on SDI assessment, they ultimately must be designed taking into account the complex and dynamic nature of an SDI, and not simply transplanted from other domains. This points to the need for a guide to assist the SDI community in the development of SDI PIs. This guide, possibly in the form of a framework for the development of SDI PIs, should include the variables that contribute to the complexity of an SDI and should also fit within process 3 of PBM.
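As a concrete illustration of the quantitative and qualitative features discussed above, the short Python sketch below represents a multidimensional (ratio) PI as a value plus a unit, and quantifies a qualitative satisfaction PI with a simple ranking scale; all names and figures are hypothetical and serve only to illustrate the idea:

# Illustrative sketch (hypothetical names and figures).
from dataclasses import dataclass

@dataclass
class QuantitativePI:
    name: str
    value: float
    unit: str          # e.g. "EUR per map file downloaded" (a multidimensional unit)

def ratio_pi(name, numerator, denominator, unit):
    """Build a multidimensional (ratio) PI, e.g. cost savings per download."""
    return QuantitativePI(name, numerator / denominator, unit)

# A 5-point ranking scale used to quantify a qualitative satisfaction PI.
SCALE = {"very dissatisfied": 1, "dissatisfied": 2, "neutral": 3,
         "satisfied": 4, "very satisfied": 5}

def satisfaction_index(responses):
    """Average survey responses onto a 0-100 index (higher is better)."""
    scores = [SCALE[r] for r in responses]
    return 100 * (sum(scores) / len(scores) - 1) / (len(SCALE) - 1)

savings = ratio_pi("Cost savings from easier GI access",
                   numerator=120_000, denominator=40_000,
                   unit="EUR per map file downloaded")
print(savings)                                                        # value=3.0
print(satisfaction_index(["satisfied", "very satisfied", "neutral"]))  # 75.0

The ranking-scale conversion is exactly the kind of generic transformation the text cautions about: it yields a comparable number, but its statistical soundness still has to be checked against the SMART criteria listed above.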


5.1. Developing PIs for SDI assessment

Increasingly, the financiers of SDIs are demanding that PIs be included in the business plan of an SDI. This is evident from their inclusion in the business plans used by SDI coordinating bodies when leveraging funds for the reengineering of SDIs (see Table 1). Table 1 provides an overview of SDI organisations that have published their activities on PI development. Although not a complete listing, the table provides readers with a snapshot of PI-related activities within the SDI community. It indicates recognition not only of the need for SDI assessment but also of the importance of PIs in the assessment process. The authors' research indicates that although these efforts are giant steps in the right direction, the results are not comprehensive enough to have a significant impact on SDI assessment. This is because the SDI PIs developed to date are not sufficiently quantifiable, nor are they detailed enough to be classified as KPIs. This conclusion is supported by NRC (2007), which also concludes that current metrics for evaluating GI programs do not meet the needs for assessing the societal benefits generated by these programs. In general, the PIs used in the SDI community (e.g., product sale figures, number of hits on the website, number of searches, and the number of requests for GI) are no longer providing meaningful information on performance as the initiatives evolve (Giff & Lunn, 2008). That is, the current sets of PIs identified by the research are not capable of capturing the network effects (e.g., value-added products and services) of GI initiatives or the feedback of their stakeholders. See Giff and Lunn (2008) for more detailed information on the weaknesses of current SDI PIs. Capturing the network effects of an SDI is an important but difficult task, as these benefits tend to be qualitative and will thus require quantification. This is important since any meaningful assessment of an SDI will require the use of quantitative KPIs, as the information provided by quantitative KPIs will better ensure that more informed decisions on the recapitalisation and reengineering of SDIs are made. The type of information produced by KPIs facilitates the removal of subjective decision-making from SDI implementation and maintenance. Again, KPIs can only be robust if the methodologies used to transform them from qualitative to quantitative units are scientifically and technically sound. Research results indicate a deficiency in sound and efficient methodologies to assist the development of SDI PIs. That is, the results point to the need for a framework to assist in the design of PIs capable of capturing the variables that contribute to the complexity of SDI performances. In addition, this framework should be capable of handling performance complexity due to SDI hierarchy and objectives.

6. Towards a Conceptual Framework for developing PIs for SDIs

PI design calls for the use of methodologies that involve clearly defined logical steps, viewed as a series of logic flow models tailored to capture key functions and activities, external assessment factors, and the purpose of the assessment (GSA, 2000). That being said, a conceptual model (i.e., a framework) can be created that includes the universal concepts, principles and activities that are generally used in the design of PIs (GSA, 2000; Kanungo, Duda, & Srinivas, 1999). This "Framework" would be in part high-level and would require fine-tuning by individual organisations before actual execution. The above theory implies that the traditional methodologies for developing SMART PIs can be adapted into a framework for developing SDI PIs. The authors explored this hypothesis and concluded that, using analogies with other infrastructures and organisations producing public goods, a Conceptual Framework for the development of PIs for SDI assessment (hereafter referred to as The Framework) may be formulated. The main function of this Framework is to act as a guide to the SDI community in the development of PIs, specifically to assist in SDI accountability assessment.

6.1. Logic model

Social scientists claim that social programs are based on a "theory of change" that serves the purpose of connecting the program's activities with the program's goal (Bickman, 1987; Innovation Network, 2005; Patton, 1989). A logic model is a graphic representation of the theory of change, in that it illustrates how the inputs and activities connect to the results (Coffman, 1999; Taylor-Powell, 1996). It is a visual schema that seeks to convey explicitly the assumed relationships (activities and interactions) amongst inputs, outputs, outcomes and impacts (Schmitz & Parsons, 1999). It conveys these relationships through the use of boxes, connecting lines, arrows (bidirectional in some cases), feedback loops and other visual metaphors (Schmitz & Parsons, 1999). In addition to visually expressing the presumed effects of a program, the logic model also serves as an iterative tool providing a framework to support program planning, implementation and assessment (CDCP, 1999). Once a logic model is completed, PIs can be developed for the critical success areas of the program identified by the logic model (Coffman, 1999). Application of the logic model concept to SDI

Table 1. A snapshot of the performance-based activities initiated by the SDI community (Organisation: Performance-Based Management activities).

GeoConnections (Canada): Developed a logic model and PIs to measure performance. Part of their business plan used to leverage funds from the Federal Government.
Federal Geographic Data Committee (USA): Adopted the Federal Enterprise Architecture 'Performance Reference Model' to measure performance. A requirement to continue leveraging Federal funding for the next phase of the NSDI.
Australia New Zealand Land Information Council (Australia and New Zealand): In their 2004–2005 report, ANZLIC listed twelve outcomes of the Australian SDI.
Public Sector Mapping Agencies (Australia): Included PIs in their Strategic Plan 2002–2006 and also in their reports to their Board of Directors (Paull, 2004).
Ruimte voor Geo-Informatie (Netherlands): Invested in research on the 'Development of a Framework to measure the Performance of NSDIs'.
GEOIDE (Canada): Sponsors research into SDI economic issues, as well as organising workshops on the evaluation of SDIs (two in 2006).
European Commission Joint Research Centre: Organised a number of educational activities on the economic issues of an SDI, the latest being a workshop on measuring the return on investment of an SDI.
Meso-American and Caribbean Geo-Spatial Alliance (MACGA): With the support of the Centre of Property Studies, University of New Brunswick, organised workshops throughout the Caribbean that introduced the concept of PIs for both SDI and GIS assessment.
Global Spatial Data Infrastructure (GSDI): Organised a number of conferences and workshops that addressed SDI economic issues.


[Fig. 3 depicts a conceptual performance logic flow: a component of an SDI, driven by the project's objectives, goals and mission, performs component activities that produce outputs; outputs lead, through interaction with users, to outcomes, and outcomes lead, through interaction with society, to impacts. The efficiency relationship (inputs vs. outputs) is measured by PIs1, while the two effectiveness relationships (outputs vs. outcomes, and outcomes vs. impacts) are measured by PIs2 and PIs3, respectively.]

Fig. 3. Conceptual performance logic flow of an SDI (adopted from GSA (2000)).

assessment by the authors resulted in the identification of three categories of PIs to assist in SDI assessment (Fig. 3). Fig. 3, a conceptual logic model, illustrates the relationships amongst an SDI's inputs, outputs, outcomes, and impacts within the context of efficiency and effectiveness. The broad arrows map the relationships between aspects of an SDI's components, with the different styles—from solid lines to dashed lines—indicating the growing fuzziness in defining the relationships. The efficiency relationship illustrated by the logic model (Fig. 3) gives rise to a set of PIs—referred to as PIs1—to measure this level of performance. Similarly, the two sets of effectiveness relationships (outputs vs. outcomes and outcomes vs. impacts) promote the need for two more categories of PIs, referred to as PIs2 (outputs vs. outcomes) and PIs3 (outcomes vs. impacts), respectively. Based on the analysis of logic models produced for SDI assessment, it can be concluded that The Framework should be capable of addressing these three categories of PIs.

6.2. Framework outline

For the initial design, The Framework focused mainly on supporting the development of PIs for the accountability assessment of SDIs. Measuring the two operational qualities involved—efficiency and effectiveness—poses two key problems that must be considered in the design of The Framework. Firstly, the outcomes and impacts of SDIs tend to be qualitative; therefore, the PIs designed must be capable of representing these values in a statistically useful manner. Secondly, the outcomes and impacts are usually long-term—three to five years minimum—and this should be factored into The Framework. The Framework consists of 11 fundamental steps, which were designed using analogies from infrastructure projects and other programs facilitating the production of public goods. These steps were, however, customized in order to capture the features unique to SDI assessment. It should be noted that the steps recommended by the authors are non-linear, iterative or cyclical processes that require regular revisiting (Fig. 4). Listed below are the 11 steps the authors recommend when designing PIs for an SDI accountability assessment:

1. Based on the objectives, the purpose of the assessment, and the key performance areas identified by the logic model, decide on the aspect/component of the program to be evaluated.
2. With the aid of logic models, identify the main activities/functions and inputs that are essential to the key performance areas of the program.

3. Clearly define, in operational and measurable terms, the expected outputs, outcomes and, where possible, impacts.
4. Identify and encapsulate factors (internal (e.g., users' requirements) and external (e.g., value-added products and other externalities)) that are likely to influence the outputs, outcomes, and impacts and therefore affect the assessment.
5. Based on the purpose of the assessment, select the category of PIs required.
6. Design a set of efficiency indicators (PIs1) based on the expected outputs. Here the aim is to determine whether or not the program is operating optimally. The PIs in this category should be capable of capturing the amount of input units involved in the production of a specified output (see the illustrative sketch following this list). In terms of an SDI, some of the challenges in developing this category of PIs are defining the monetary inputs and defining what is to be classified as output. For example, is the clearinghouse the output, or is it the datasets facilitated by the clearinghouse? Examples of other variables that efficiency PIs should attempt to encapsulate are users' satisfaction levels and, to some extent, the effects of the monopolistic nature of an SDI.
7. Select KPIs from the list of efficiency PIs developed in the previous step. Use the logic model(s) and the SMART concept to assist in the selection. That is, analyse the PIs to determine whether they are providing information pertaining to critical success areas.
8. Design a set of effectiveness indicators (PIs2 and PIs3). For an SDI, it is expected that the PIs in this category will be more qualitative than quantitative. An example of a quantitative PI for an outcome is the percentage of users who found the datasets they were looking for via the clearinghouse, while a qualitative PI for an outcome could be the level of satisfaction a user derives from the metadata provided by a data supplier. The development of PIs in this category will require extensive investigation into the medium- to long-term effects of an SDI on society and the inclusion of a number of external variables.
9. Select KPIs from the list of PIs developed in the previous step (see also step 7).
10. Analyse the KPIs to determine, for example, whether they pass the SMART test, are cost-effective to implement, have data readily available, have personnel available to collect and analyse the required performance information, and provide performance information on critical success areas.


[Fig. 4 presents the 11 steps as a flow diagram: select the aspect of the program to be measured (1); identify critical activities/functions and inputs of the program (2); assign operational definitions to outputs, outcomes and impacts (3); identify factors influencing outputs, outcomes and impacts (4); select the required category of indicators (5); design efficiency PIs (inputs vs. outputs) (6) and select KPIs (7); design effectiveness PIs (outputs vs. outcomes, outcomes vs. impacts) (8) and select KPIs (9); analyse the KPIs (10); and arrive at the final set of KPIs (11).]

Fig. 4. Flow diagram of key processes involved in designing SDI PIs.

11. Combine, if necessary, the sets of KPIs that are capable of measuring and reporting performance information on the critical success areas (or desired areas) of the SDI.
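To make steps 6 and 8 concrete, the following minimal Python sketch (all figures and names are hypothetical, chosen only to illustrate the three PI categories of Fig. 3) expresses one efficiency PI and two effectiveness PIs as simple ratios:

# Illustrative sketch only: the three PI categories of Fig. 3 as simple ratios.
# All figures below are hypothetical.

def efficiency_pi(inputs_cost, outputs):
    """PIs1 (inputs vs. outputs): input units consumed per unit of output,
    e.g. euros spent per dataset served through the clearinghouse."""
    return inputs_cost / outputs

def effectiveness_pi2(users_satisfied, users_surveyed):
    """PIs2 (outputs vs. outcomes): share of users who found the datasets
    they were looking for via the clearinghouse."""
    return 100 * users_satisfied / users_surveyed

def effectiveness_pi3(decisions_aided, decisions_total):
    """PIs3 (outcomes vs. impacts): share of key decisions in wider society
    that were supported by SDI services."""
    return 100 * decisions_aided / decisions_total

print(f"PIs1: {efficiency_pi(250_000, 5_000):.1f} EUR per dataset served")
print(f"PIs2: {effectiveness_pi2(640, 800):.0f}% of surveyed users found their data")
print(f"PIs3: {effectiveness_pi3(45, 120):.0f}% of key decisions aided")

In practice, each of these candidate indicators would then be refined through steps 7, 9 and 10 before being accepted as a KPI.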

The above 11 steps—the components of the Framework—serve only as a skeleton for the production of SDI PIs; that is, the steps do not spell out the handling of all the variables that affect the design of PIs, in particular those classified as PIs3 and, to a lesser extent, PIs2. This implies that the flesh must be added to the skeleton during the application of the Framework, where the purpose of the assessment, the objectives of the SDI and variables specific to the SDI are taken into account. Therefore, the application of the Framework to PI development for a particular SDI will require the inclusion of variables more specific to the SDI in question.

7. Practical application of The Framework

To test The Framework, case studies on the assessment of geomatics programs in Canada were evaluated. The first set of case studies investigates geomatics programs at the national and provincial levels: the GeoConnections Secretariat, the GEOIDE Network and Land Information Ontario (a provincial SDI). The second set investigates SDI assessment at the local level in the form of city SDIs in Fredericton, Saint John and Moncton in the province of New Brunswick; also investigated was the SDI component within Service New Brunswick. For conciseness and relevance, a synopsis of the GeoConnections and City of Fredericton case studies will be presented. For detailed information on the case studies, see Giff (2008a, 2008b) and Giff and Lunn (2008).

7.1. The GeoConnections case study

The GeoConnections Secretariat is the Federal Government organisation responsible for coordinating the implementation of the Canadian Geo-spatial Data Infrastructure (CGDI). GeoConnections undergoes assessment for three main reasons: the stipulation that PIs and performance information be included in progress reports; the justification of additional funding; and the determination of the effectiveness of the program. The GeoConnections case study was selected for presentation because it employs a management style (the Results-based Management Accountability Framework), similar to PBM, along with logic models in its performance activities. GeoConnections' logic model identified four critical areas of the program that required assessment (i.e., user capacity, content, standard and technical infrastructure, and policy and coordination) (Table 2). Also identified were three categories of PIs for measuring the performance of the critical areas. These three categories are similar to the PIs1, PIs2 and PIs3 recommended previously (Fig. 3). The logic model also indicated that 22 sets of PIs1 were required to assess the outputs, 20 sets of PIs2 for the expected outcomes, and 4 sets of PIs3 for the impacts (Table 2). The test for The Framework was to design the 46 sets of PIs required to measure the performance of the critical areas of GeoConnections' program. The application was carried out as recommended, in an iterative format with the use of tables, and


Table 2. GeoConnections' critical areas and number of PIs per category.

Critical area | Efficiency (PIs1): Outputs | Effectiveness (PIs2): Outcomes (1–3 years) | Effectiveness (PIs2): Outcomes (3–5 years) | Effectiveness (PIs3): Impacts
User capacity | 6 | 3 | 2 | 1
Content | 8 | 7 | 2 | 1
Standard and technical infrastructure | 4 | 2 | 1 | 1
Policy and coordination | 4 | 2 | 1 | 1
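The per-category totals cited above (22 sets of PIs1, 20 of PIs2 and 4 of PIs3, i.e. 46 in all) can be checked directly against Table 2; the short Python sketch below simply tallies the counts as reconstructed in the table:

# Tallying the PI counts of Table 2 to confirm the totals cited in the text
# (22 sets of PIs1, 20 sets of PIs2, 4 sets of PIs3 -> 46 in all).
TABLE_2 = {
    # critical area: (PIs1 outputs, PIs2 outcomes 1-3y, PIs2 outcomes 3-5y, PIs3 impacts)
    "User capacity":                         (6, 3, 2, 1),
    "Content":                               (8, 7, 2, 1),
    "Standard and technical infrastructure": (4, 2, 1, 1),
    "Policy and coordination":               (4, 2, 1, 1),
}

pis1 = sum(row[0] for row in TABLE_2.values())           # 22
pis2 = sum(row[1] + row[2] for row in TABLE_2.values())  # 20
pis3 = sum(row[3] for row in TABLE_2.values())           # 4
print(pis1, pis2, pis3, pis1 + pis2 + pis3)              # 22 20 4 46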

was divided into two segments. Firstly, PIs1 were designed by including in The Framework the factors that affect the measurement of efficiency (Table 3). Secondly, PIs2 and PIs3 were designed with the inclusion of variables specific to measuring the effectiveness of the program's performance (Table 4). Table 3 illustrates the use of tables and, to a limited extent, the iterative processes involved in the application of the Framework to

the design of PIs1. From the table, the initial application tracked the effects of goals, inputs, and outputs on the design of efficiency PIs. The second and successive iterations focused on more global influential factors that affect the design of PIs1. This procedure was repeated—refining the PIs—until the most feasible PIs (informative, measurable, cost-effective PIs) were designed. The aim of Table 4 is to facilitate the design of effectiveness PIs; thus, the factors with the most influence on PIs2 are included in this table. Again, the iterative process was applied, with every iteration resulting in a more specific definition of the PIs. PIs were eliminated if the influence of one or more variables in the table rendered them unverifiable, difficult to validate or too costly. The iterative process was completed when an activity had a maximum of two SMART, scientifically sound PIs (GeoConnections' requirement). On average, this was achieved after six iterations.

7.2. Fredericton's case study

Similar to other SDIs, the City of Fredericton GIS (a local SDI) is facing the daunting task of justifying its existence to both government and the public. In addition to measuring performance to justify its existence, the City is also interested in improving the

Table 3. A snapshot of the activities involved in the development of efficiency PIs.

Variables used in the development of PIs to measure efficiency (PIs1):
Goals/objectives: To increase user capacity by creating an environment where Geo-Information is easily accessible for reuse.
Inputs: Datasets with metadata, standards, web portals, awareness activities, policies, etc.
Outputs: New applications that use geomatics and the CGDI to meet user requirements.
Efficiency PIs: 1. The number of new applications that use the CGDI datasets. 2. The number of new applications using CGDI datasets that satisfy users' requirements. 3. Change in the cost of new applications when using the CGDI.

Nth iteration:
Outputs: New applications that use geomatics and the CGDI to meet user requirements.
Internal factors (user communities): Knowledge of the users' demands (e.g., type and quality of products).
External factors (externalities): Product information from application developers, data collection cost and users' willingness to participate.
Efficiency PIs: The number of new relevant applications that use the CGDI datasets efficiently.

Table 4. A snapshot of the activities involved in the development of effectiveness PIs.

Variables used in the development of PIs to measure effectiveness (PIs2):
Outputs: Authoritative, highly available technical infrastructure for data discovery, data access, data exchange and security (e.g., discovery portal, data access portals, web services, etc.).
Outcomes: Stakeholders are able to achieve operational efficiencies resulting from using the evolving infrastructure services.
Effectiveness PIs: 1. Percentage changes (−ive or +ive) in the cost of production/collection of datasets or specific applications when using the services of the CGDI. 2. The number of stakeholders that report positive changes (greater efficiency) in their business operations due to the services of the technical infrastructure. 3. The number and types of changes demanded by stakeholders, to enable them to operate at an optimal level, that were implemented.

Nth iteration:
Outcomes: Stakeholders are able to achieve operational efficiencies resulting from using the evolving infrastructure services.
Internal factors (stakeholders): Stakeholders' activities (e.g., CGDI awareness, retooling, partnerships, benefits of Geo-Information in decision-making, application development, etc.).
External factors (externalities): Status of the supporting infrastructure, quality and availability of provincial and local datasets, different access policies for CGDI datasets, and the availability of baseline information.
Effectiveness PIs: 1. Percentage changes (−ive or +ive) in the cost of datasets for specific applications when using the services of the CGDI. 2. Percentage changes (−ive or +ive) in the time it takes to acquire datasets when using the services of the technical infrastructure.


quality of the products and services supported by its SDI. To achieve this goal, systems were put in place to measure the performance of the SDI. See Giff and Lunn (2008) for more details. In keeping with the concept of operating efficiently and effectively, the City embarked on becoming certified by the International Organization for Standardization (ISO) and achieved the goal of being an ISO 9001:2000 registered organisation in May 2004. Since ISO certification is an ongoing process, the City is constantly investigating techniques to improve the methodologies used to measure performance. This requirement to improve the assessment process encouraged the City to participate in the case study. Before the Framework could be utilized in the design of PIs for the City's SDI, a logic model of the initiative had to be created and analyzed. As part of its ongoing assessment process, the City currently uses a Management Reference Model and an ISO Reference Model as its version of a logic model. These models were modified and used in the identification of the critical areas to be assessed. Three critical areas were identified: firstly, the Data Component, which consists of the different data layers; secondly, the Access Component, which includes the tools and policies necessary for access and utilization of the SDI; and thirdly, the Service Component, which includes, for example, support services, consultancy services and training services. Following the identification of the critical areas, operational statements were assigned to the outputs, outcomes and impacts. The Framework was applied in an iterative manner, using tables as a visualization tool to track the effects of the different factors (Tables 5 and 6). Table 5 is a condensed version of the iterative processes involved in the application of the Framework to the design of PIs to measure the performance of the Google Transit tool, an outcome of the Data Component. The table illustrates the application of the Framework in tracing the associations between the inputs, outputs, outcomes and the factors that influence the design of effectiveness PIs. The final iteration—the Nth iteration in the table—considers the cost and data collection methodologies for the PIs to determine their economic viability.

Table 6 provides a concise schema of the application of the Framework to the design of PIs to measure the performance of the CARIS SFE/OpenCite GIS Viewer, an output of the Access Component. The table takes the reader through a simplified version of the iterative processes used to design PIs for this outcome. The number of iterations will depend on the depth of information the PIs are expected to provide, the data collection cost and the complexity of the outcome.

8. Analysis of the results

Analysis of the results indicated that the Framework provides well-defined steps for the design of SDI-specific PIs. The steps of the Framework provide users with a structured guide through the maze of not only PI design but SDI assessment in general. Another significant benefit identified by the analysis is the fact that the Framework provides the organisations investigated with a tool that encourages users to think clearly through the processes involved in the provision of spatial information products and services. This is important, as outcomes and impacts can be abstract or qualitative and thus require rigorous methodologies to assist in capturing the variables—particularly causal relationships—that affect the design of PIs to measure their performance. By following the steps of the Framework the user is able to identify potential PIs and refine these PIs to the point where they may be considered KPIs. That is, the iterative processes facilitate the constant upgrading of the PIs. In addition, the application of the Framework to PI design promotes a greater level of interaction and communication between the GI stakeholders and the end-users. The results of the case studies support the hypothesis that the Framework is capable of assisting in the design of PIs for different levels of SDIs. This is an important feature, as SDIs are in part unique and are implemented at different levels of society. Table 7 provides a summary of the strengths and weaknesses of the Framework.

Table 5. A snapshot of the activities involved in the design of effectiveness PIs for the Data Component.

Variables used in the development of PIs to measure effectiveness:
Critical area: Data Component.
Inputs: Man hours, electricity, hardware, software, datasets, office space.
Outputs: Digital transit routes and bus stop locations.
Outcomes: The application of the Google Transit tool to calculate appropriate trips.

Second iteration:
Outcomes: The application of the Google Transit tool to calculate appropriate trips.
Internal factors (stakeholders): Stakeholders' knowledge (e.g., type and quality of data, features of the required tools, trip display required, hardware, etc.).
External factors (externalities): Supporting infrastructures (internet speed, broadband quality, etc.), users' knowledge and skills in using technology, and the societal effects of efficient trip plans.
Effectiveness PIs: 1. The number of users utilizing the tool. 2. The number of products and services produced using the tool. 3. Savings in trip planning from usage of the tool. 4. Changes in business models due to usage of the tool.

Nth iteration:
Effectiveness PIs: 1. The number of users utilizing the tool. 2. The number of products and services produced using the tool. 3. Savings in trip planning from usage of the tool. 4. Changes in business models due to usage of the tool.
Cost: 1. Low, requires limited additional man hours. 2. Medium to high, requires extensive surveys for accurate results. 3. Medium, requires additional man hours to perform research. 4. Medium to high, will require the collection of specialized information.
Method of collection: 1. Software supported by user surveys and potential-user surveys. 2. Research and extensive user surveys. 3. Software, research and user surveys. 4. User surveys, research, and benchmarking.

Final PIs: 1. The percentage of possible users utilizing the tool in regular trip planning. 2. The number of businesses that have changed their models due to the application of the tool.
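Tables 5 and 6 both end with a screening step in which candidate effectiveness PIs are kept or dropped according to the estimated cost and feasibility of collecting their performance information, similar in spirit to GeoConnections' requirement of at most two SMART PIs per activity. A minimal Python sketch of such a screening rule follows; the candidate list, cost labels and selection heuristic are hypothetical and are not the City's or GeoConnections' actual procedure:

# Illustrative sketch only: screening candidate PIs by collection cost and
# feasibility, keeping at most two per activity. Data and rule are hypothetical.
CANDIDATES = [
    {"pi": "Number of users utilizing the tool",               "cost": "low",    "collectable": True},
    {"pi": "Products and services produced using the tool",    "cost": "high",   "collectable": False},
    {"pi": "Savings in trip planning from usage of the tool",  "cost": "medium", "collectable": True},
    {"pi": "Changes in business models due to the tool",       "cost": "medium", "collectable": True},
]

COST_ORDER = ("low", "medium", "high")
ACCEPTABLE_COST = {"low", "medium"}
MAX_FINAL_PIS = 2

def screen(candidates):
    """Keep the cheapest collectable candidates, capped at MAX_FINAL_PIS."""
    keep = [c for c in candidates if c["collectable"] and c["cost"] in ACCEPTABLE_COST]
    keep.sort(key=lambda c: COST_ORDER.index(c["cost"]))
    return [c["pi"] for c in keep[:MAX_FINAL_PIS]]

print(screen(CANDIDATES))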


Table 6. A snapshot of the activities involved in the design of effectiveness PIs for the Access Component.

Variables used in the development of PIs to measure effectiveness:
Critical area: Access Component.
Inputs: Man hours, electricity, hardware, software, office space.
Outputs: CARIS SFE/OpenCite GIS Viewer.
Outcomes: Viewing and searching of spatial information to facilitate public sector decision-making.

Second iteration:
Outcomes: Viewing and searching of spatial information to facilitate public sector decision-making.
Internal factors (stakeholders): Stakeholders' knowledge (type of searching required, viewing required to support decision-making, and speed), availability of software for searching and viewing activities, and the functionality of this software.
External factors (externalities): Supporting infrastructures, the demands of society on public sector bodies, and the number of public sector bodies using spatial information in their decision-making process.
Effectiveness PIs: 1. The number of organisations using the viewer. 2. The percentage of decisions that are made with the aid of the viewer. 3. The improvement in decision-making due to the application of the viewer.

Nth iteration:
Effectiveness PIs: 1. The number of organisations using the viewer. 2. The percentage of decisions that are made with the aid of the viewer. 3. The improvement in decision-making due to the application of the viewer.
Cost: 1. Low (cost of software and limited man hours). 2. Medium, due to the additional man hours required. 3. High, requires extensive research.
Method of collection: 1. Automated, with support from user surveys. 2. Research, user surveys and benchmarking. 3. Large-scale research of the user and end-user community.

Final PIs: 1. The percentage of public sector organisations using the viewer. 2. The percentage of key decisions that are made with the aid of the viewer.

Table 7. Summary of the analysis of the Framework (strengths and corresponding weaknesses).

Strength: Valuable in providing a logical guide to the design of SDI-specific PIs. Weakness: Not simple enough for users with only a basic knowledge of PIs to follow on their own.
Strength: Facilitates the injection of causal relationships into the design process. Weakness: Users require high-level knowledge of these relationships for the Framework to be effective.
Strength: The iterative process it promotes facilitates the constant upgrading of the PIs. Weakness: It does not stipulate to the users when to stop the iteration.
Strength: It forces users to think through the processes involved in the provision of spatial information related products and services. Weakness: It does not provide a guide on the formulation of the processes involved in the provision of SI; it relies on the logic model.
Strength: Supports interaction amongst stakeholders and end-users. Weakness: Relies heavily on the cooperation of the stakeholders, practitioners and end-users.
Strength: Capable of assessing different levels of SDIs. Weakness: Does not address in detail the cost involved in the design, collection and analysis of performance information.

The case studies not only identified the benefits of the Framework but also some weaknesses. The main aspects of the Framework that did not live up to the expectations of the SDI community are the observations that it is not simple enough to be used by new users without the guidance of someone with prior usage knowledge; that it does not guide users through the formulation of the key processes involved in the provision of spatial information; and, finally, that the Framework does not stipulate to users when to stop the iterative process. In summary, the Framework is not a tool that a new user can easily step into and quickly derive results from. However, once mastered it is a powerful tool to help in the design of PIs, as it provides the

users with a guide on where to begin, how to structure the task of PI design, and a concise summary of steps to follow when designing different categories of PIs.

9. Conclusion

The Framework presented in this paper is the result of ongoing research into SDI assessment methodologies. The research indicates that there is a need for multiple methodologies to assess SDIs, and the Framework is just one plausible methodology. Although the Framework has demonstrated that it is a competent SDI assessment methodology, additional work and testing are still necessary in order to address the weaknesses identified by the case studies. Further testing is needed in particular on the application of the Framework to the assessment of provincial level SDIs. The tests carried out on the two provincial level SDIs (Service New Brunswick and Land Information Ontario) were partly inconclusive, as the nature/structure and objectives of these SDIs were different, making a meaningful comparison or analysis very difficult. The authors also recommend that research be performed on the use of simulation models to support the application of the Framework. Initial investigation suggests that simulation models may assist in speeding up the iterative process, provide a more concise interaction of the variables and better define the end of the iterative process. For examples of the application of simulation models to SDIs, see Giff (2005), Giff and Coleman (2005) and Halsing, Bernknopf, and Theissen (2004a, 2004b). The research established that there is a need to assess SDIs not only to justify expenditure on their implementation but also to determine whether or not they are achieving their objectives. The paper presented and reviewed three key concepts that can be used to improve the efficacy of SDI assessment. Firstly, the paper recommends that SDIs be coordinated and implemented within a PBM style to facilitate comprehensive assessment. Secondly,



Secondly, it presents the notion of a Multi-View Framework to aid in the selection of appropriate methodology(s) for SDI assessment. Finally, it reviews the concept of PIs as an accountability assessment tool for SDIs and presents a Framework for assisting in the design of SDI PIs. From the review of PIs and their value in the assessment process, it is reasonable to foresee PIs playing a significant role in the assessment of the next generation of SDIs. This is evident from the characteristics, identified in the paper, that make them favourable to SDI assessment. In support of this, the paper presented a Framework for developing PIs for SDI assessment. Tests of the Framework indicate that it provides logical steps for PI design and serves to structure the processes involved in the provision of spatial information products and services.
In concluding, it must be stated that, like an SDI itself, the development of indicators to assess its performance is complex. Therefore, for a Framework of this nature to be effective in SDI assessment, it must capture not only the variables contributing to the complexity of an SDI but also the methodologies, costs, personnel and intricacies involved in the collection of SDIs' performance information.

Acknowledgements

The authors acknowledge the following persons and organisations for their contributions to the research: Bastiaan van Loenen and Jaap Zevenbergen of Delft University of Technology, Rebecca Last, the Dutch Bsik Programme 'Space for Geo-Information', the GeoConnections Secretariat Canada, and the City of Fredericton.

References

Bickman, L. (1987). The functions of program theory. In L. Bickman (Ed.), Using program theory in evaluation, New directions for program evaluation (Vol. 33, pp. 5–18). San Francisco, CA: Jossey-Bass Publishers.
Burby, J. (2005). Defining key performance indicators. In ClickZ experts. Last accessed April 2008.
(CDCP) Centers for Disease Control and Prevention (1999). Framework for program evaluation in public health. In MMWR 1999 (Vol. 48, No. RR-11).
Chelimsky, E. (1997). The coming transformations in evaluation. In E. Chelimsky & W. R. Shadish (Eds.), Evaluation for the 21st century: A handbook (pp. 1–29). Thousand Oaks, CA: Sage Publications.
(CHN) Child Health Network for Greater Toronto (2001). A performance evaluation framework for the Child Health Network: Background discussion paper. A CHN discussion paper, Toronto, Ontario, Canada.
(CMIIP) The Committee on Measuring and Improving Infrastructure Performance (1995). Measuring and improving infrastructure performance. Washington, DC: National Academy Press.
Coffman, J. (1999). Learning from logic models: An example of a family/school partnership program. A Harvard Family Research Project report, Harvard Family Research Project, Cambridge, MA, USA.
Crompvoets, J. (2006). National spatial data clearinghouses, worldwide development and impact. PhD dissertation, Wageningen University, 128 pp.
Crompvoets, J., Bregt, A., Rajabifard, A., & Williamson, I. (2004). Assessing the worldwide developments of national spatial data clearinghouses. International Journal of Geographical Information Science, 18(7), 665–689.
Crompvoets, J., & Grus, L. (2007). Workshop report 'Multi-view framework to assess (National) Spatial Data Infrastructures', May 2007, Wageningen, The Netherlands. Last accessed August 2008.
De Man, W. H. E. (2006). Understanding SDI: Complexity and institutionalization. International Journal of Geographical Information Science, 20(3), 329–343.
Delgado Fernández, T., & Crompvoets, J. (Eds.), (2007). Infraestructuras de Datos Espaciales en Iberoamérica y el Caribe. IDICT, Habana, Cuba, 213pp. (in Spanish). Delgado-Fernandez, T., Lance, K., Buck, M., & Onsrud, H. J. (2005). Assessing SDI readiness index. In Proceedings of the eighth international conference on global spatial data infrastructure, April 2005, Cairo, Egypt. Deloitte & Touche (2002). National critical infrastructure evaluation criteria. A report prepared for the Office of Critical Infrastructure Protection and Emergency Preparedness. Environment Canada (2000). Manager’s guide to implementing performance-based management. An Environment Canada report, Ottawa, Ontario, Canada. Georgiadou, Y., Rodriguez-Pabón, O., & Lance, K. T. (2006). SDI and e-Governance: A quest for appropriate evaluation approaches. URISA Journal: Journal of the Urban and Regional Information Systems Association, 18(2), 43–55.


Giff, G. (2005). Conceptual funding models for spatial data infrastructure implementation. PhD dissertation, University of New Brunswick.
Giff, G. (2006). The value of performance indicators to spatial data infrastructure development. In Proceedings of GSDI 9 conference, November 2006, Santiago, Chile.
Giff, G. (2008a). The application of performance indicators to SDI/GIS assessment. In Proceedings of GITA conference on geospatial infrastructure solutions, March 2008, Seattle, Washington, USA.
Giff, G. (2008b). A framework for designing performance indicators for spatial data infrastructure assessment. In J. Crompvoets, A. Rajabifard, B. Van Loenen, & T. Delgado Fernández (Eds.), A multi-view framework to assess spatial data infrastructures. Melbourne: Melbourne University Press.
Giff, G., & Coleman, D. (2005). Using simulation to evaluate funding models for SDI implementation. In Proceedings of FIG working week 2005 and GSDI-8 conference, April 2005, Cairo, Egypt.
Giff, G., & Lunn, R. (2008). Designing performance indicators for local SDI assessment: A City of Fredericton case study. In Proceedings of GSDI 10 conference on SDI, February 2008, Port of Spain, Trinidad & Tobago.
Grus, L., Bregt, A., & Crompvoets, J. (2006). Report of the workshop "Exploring Spatial Data Infrastructures", January 2006, Wageningen, The Netherlands.
(GSA) General Services Administration Office of Governmentwide Policy (2000). Performance-based management: Eight steps to develop and use information technology performance measures effectively. A General Services Administration Office of Governmentwide Policy report, Washington, DC, USA.
Hale, J. (2003). Performance-based management: What every manager should do to get results. Pfeiffer, November 2003.
Halsing, D., Bernknopf, R., & Theissen, K. (2004a). The national map: Benefits at what cost? Geospatial Solutions, February 01, 2004.
Halsing, D., Bernknopf, R., & Theissen, K. (2004b). A cost-benefit analysis of the national map. Geological Survey, Reston, Virginia, USA. Last accessed April 2008.
Innovation Network (2005). Logic model workbook. A component of Innovation Network's logic model and evaluation training materials. Last accessed June 2007.
Kanungo, S., Duda, S., & Srinivas, Y. (1999). A structured model for evaluating information system effectiveness. Systems Research and Behavioral Science, 16, 495–518.
Kok, B., & Van Loenen, B. (2005). How to assess the success of National Spatial Data Infrastructures? Computers, Environment and Urban Systems, 29, 699–717.
Lance, K. (2007). SDI performance measurement as a function of budgeting processes. In Proceedings of the workshop on multi-view framework to assess national spatial data infrastructures, May 2007, Wageningen, The Netherlands.
Lawrence, D. (1998). Benchmarking infrastructure enterprises. In M. Arblaster & M. Jamison (Eds.), Infrastructure regulation and market reform: Principles and practice (pp. 55–67). Canberra, Australia.
Long, L. (1989). Management information systems. Englewood Cliffs, NJ: Prentice-Hall.
Masser, I. (1998). The first generation of national geographic information strategies. In Proceedings of third GSDI conference, November 1998, Canberra, Australia.
Masser, I. (1999). All shapes and sizes: The first generation of national spatial data infrastructures. International Journal of Geographical Information Science, 13, 67–84.
Masser, I. (2005). GIS worlds: Creating spatial data infrastructures. Redlands, CA, USA: ESRI Press.
McNamara, C. (1999). Performance management: Performance plan. Free Management Library. Last accessed April 2008.
(NCIAP) The National Critical Infrastructure Assurance Program (2004). Asset criteria. Last accessed April 2008.
(NPR) National Partnership for Reinventing Government (1997). Serving the American public: Best practices in customer-driven strategic planning. Last accessed April 2008.
(NRC) National Research Council (2007). Assessment of the NASA applied sciences program. Washington, DC: National Academies Press.
OAGA (1999). OAG audit standard: The audit of performance indicators. West Perth, Australia: OAGA. Last accessed April 2008.
(OPM) Office of Public Management New South Wales (1990). Health improvement/health service planning kit. New South Wales, Australia: OPM. Last accessed April 2008.
Patton, M. (1989). A context and boundaries for theory-driven approach to validity. Evaluation and Program Planning, 12, 375–377.
Paull, D. (2004). Spatially enabling Australia through collaboration and innovation. PSMA Australia Limited, Griffith, ACT, Australia. Last accessed October 2007.
(PBM SIG) Performance-Based Management Special Interest Group (2001a). The performance-based management handbook: Establishing and maintaining a performance-based management program (Vol. 1). Oak Ridge, TN, USA: US Department of Energy and Oak Ridge Associated Universities.
(PBM SIG) Performance-Based Management Special Interest Group (2001b). The performance-based management handbook: Establishing an integrated performance measuring system (Vol. 2). Oak Ridge, TN, USA: US Department of Energy and Oak Ridge Associated Universities.




PROMATIS (2006). PCM business performance management. Ettlingen, Germany: PROMATIS Software GmbH.
(PSMO) Public Sector Management Office Western Australia (1997). Preparing performance indicators: A practical guide. Perth, Western Australia. Last accessed April 2008.
Rajabifard, A. (2002). Diffusion of regional data infrastructures: With particular reference to Asia Pacific. PhD dissertation, The University of Melbourne.
Rajabifard, A., Binns, A., Masser, I., & Williamson, I. (2006). The role of sub-national government and the private sector in future spatial data infrastructures. International Journal of Geographical Information Science, 20(7), 727–741.
Reh, J. (2005). Key performance indicators must be key to organizational success. Last accessed December 2007.
Rogers, P. (2005). Evaluating complicated and complex programs using theory of change. The Evaluation Exchange, XI(2), 13–14.
(SADL) Spatial Application Division, Catholic University of Leuven (2005). Spatial data infrastructure in Europe: State of play during 2005. A summary report of Activity 5 of a study commissioned by the EC (EUROSTAT & DGENV) in the framework of the INSPIRE initiative. Last accessed November 2007.
Schmitz, C., & Parsons, B. A. (1999). Everything you ever wanted to know about logic models but were afraid to ask. W.K. Kellogg Foundation. Last accessed June 2007.

Scriven, M. (1983). Evaluation ideologies. In G. F. Madaus & D. L. Stufflebeam (Eds.), Evaluation models: Viewpoints on educational and human services evaluation (pp. 229–260). Boston, USA: Kluwer-Nijhoff.
Steudler, D. (2003). Developing evaluation and performance indicators for SDIs. In I. Williamson, A. Rajabifard, & M. E. F. Feeney (Eds.), Developing spatial data infrastructures: From concept to reality (pp. 235–246). London: Taylor & Francis.
Stewart, C. (2006). Results-based management accountability framework. GEOIDE/GeoConnections workshop on value/evaluating spatial data infrastructures, Ottawa, Ontario, Canada.
Taylor-Powell, E. (1996). Logic models to enhance program performance. Last accessed June 2007.
Taylor-Powell, E. (1999). Evaluating collaboratives: Challenges and practice. The Evaluation Exchange, V(2–3), 6–7.
(TRADE) Training Resources and Data Exchange (1995). How to measure performance: A handbook of techniques and tools. Oak Ridge, TN, USA: US Department of Energy Defence Program, Oak Ridge Associated Universities.
Vandenbroucke, D. (2007). INSPIRE directive: Specific requirements to monitor its implementation. In Proceedings of the workshop on multi-view framework to assess national spatial data infrastructures, May 2007, Wageningen, The Netherlands.
(WHO) World Health Organization (2000). Tools for assessing the O&M status of water supply and sanitation in developing countries. Geneva, Switzerland: World Health Organization.
