
EVALUATION: CURRENT STRENGTHS AND FUTURE DIRECTIONS
Authors: C. Clifford Attkisson, William A. Hargreaves, Mardi J. Horowitz, & James E. Sorensen
Source: Evaluation of human service programs. New York: Academic Press, 1978, pp. 465-477.
Copyrighted material. Do not use or reproduce without permission.


Reprinted From: EVALUATION OF HUMAN SERVICE PROGRAMS © 1978, Academic Press, Inc. New York San Francisco London

16
EVALUATION: CURRENT STRENGTHS AND FUTURE DIRECTIONS
C. Clifford Attkisson, William A. Hargreaves, Mardi J. Horowitz, and James E. Sorensen

In this chapter we present an overview of the current state of the art in human service program evaluation. Since each of the preceding chapters represents a specific summary, our emphasis at this point will be to review the current status of four general domains of program evaluation (structural, process, outcome, and community impact) and to review the status of major evaluation methods that are now available for general use (information systems, need assessment, and outcome assessment). In addition we assess what is now known about optimal organizational role functions for evaluators and venture some thoughts about evaluator roles of the future. Our comments are organized to respond to four important questions that must be raised about contemporary program evaluation: What kinds of evaluation activities can now be expected to make a useful contribution to program management? Is there consensus about how evaluation activities should be organized and implemented? What role expectations will the future evaluator face? What methodologies and guidelines will be available in the future? In a concluding section we will suggest several goals on which we might focus our attention so that future program evaluation will be more effective and useful.

THE STATE OF THE ART

We know that useful evaluation cannot proceed without consensus on program goals. When program goals are so controversial or so vague that no
workable measures of their accomplishment can be defined, then evaluation is not possible. Legislators and administrators are slowly beginning to understand this crucial step in the evaluation process. Because realistic statements of goals and objectives occur so rarely in human service systems, much evaluation must be "goal free" by default. Fortunately, even goal free evaluation seems to stimulate more appropriate attention to goal setting and the specification of intended program results in objective, measurable language. Program and client goals can range across many different content areas. These content areas tend to define what type of evaluation is undertaken and the evaluation methods that are employed. Therefore, as the evaluator undertakes to assess the attainment of program objectives, choices must be made as to the main focus of evaluation. Evaluative attention is ordinarily distributed across structural, process, outcome, and community impact objectives. Structural objectives are often identified in regulations affecting the program (e.g., requirements and standards related to availability of various types of services, adequacy of facilities, staffing patterns, personnel systems, accounting systems, case records), but the monitoring of compliance with these structural requirements is usually handled by a business manager or program administrator. Recently, however, program evaluators have become increasingly involved in evaluating structural compliance. These activities are best conceived as extensions of traditional accounting and administrative approaches to accountability and often can have a powerful impact on organizational structure and service capacity. Evaluation studies of program process and effectiveness often suggest innovations or problem solutions that require changes in program structure. To ensure effective implementation of program decisions that have grown out of evaluation activities, the evaluator may work closely with other program staff to install needed changes and to monitor their effects. Evaluative effort is frequently devoted primarily to process monitoring. For example, the evaluator may examine demographic characteristics of clients served in order to assess the attainment of accessibility goals, may monitor service activity levels to contribute to cost-finding by linking costs to effort and other resources (the goal of efficiency), may monitor the referral of clients from one level of care to another (the goal of continuity of care), and may contribute to utilization review and other quality assurance activities (the goals of appropriateness and quality of services in relation to existing process standards). These activities are all examples of process evaluation. In process monitoring the evaluator carries out at least two distinct roles: (a) improving the information procedures used by the program to assemble process data, and (b) examining these data in relation to current management issues and objectives so that program leaders can make more informed decisions. These two roles are distinct from routine data gathering, storage, and report generation processes. The latter may also fall under the direction of the evaluator, but in larger organizations one increasingly sees this statistical work closely coordinated with accounting, budgeting, and cost-finding in an integrated management information system using the methods described in Chapters 6, 7, and 8.
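
The accessibility example mentioned above lends itself to a very simple quantitative treatment. A minimal sketch of such a process indicator, written in Python with entirely hypothetical group names and counts, might compare service penetration rates across demographic groups with the catchment-area population:

    # Illustrative sketch only; all group names and figures are hypothetical.
    # Process-monitoring indicator: clients served per 1,000 residents, by group,
    # as one crude check on an accessibility goal.
    clients_served = {"adults": 410, "older_adults": 35, "children": 80}
    catchment_population = {"adults": 61000, "older_adults": 14000, "children": 23000}

    def penetration_rates(served, population):
        """Clients served per 1,000 residents, for each demographic group."""
        return {group: 1000.0 * served.get(group, 0) / population[group]
                for group in population}

    rates = penetration_rates(clients_served, catchment_population)
    overall = 1000.0 * sum(clients_served.values()) / sum(catchment_population.values())
    for group, rate in sorted(rates.items()):
        note = "  (well below the overall rate)" if rate < 0.5 * overall else ""
        print(f"{group:>12}: {rate:5.2f} per 1,000{note}")
    print(f"{'overall':>12}: {overall:5.2f} per 1,000")

A group whose rate falls far below the overall figure would prompt a closer look at outreach, location, and referral patterns before any conclusion about accessibility is drawn.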


Outcome information is also being introduced into human service management and planning. Outcome evaluation tends initially to focus on routine monitoring using very simple (many would say simplistic) global outcome measures, often in response to reporting requirements imposed by state or federal agencies. The evaluator may then supplement this basic outcome monitoring with time-limited outcome studies involving more intensive assessment of selected samples of clients. These outcome studies usually do not consume a large share of the evaluation effort, especially when pressure to develop routine process and outcome monitoring procedures is competing for evaluative resources. Even in established evaluation units one may see few outcome studies. This is partly because outcome study results are available only after several months or longer, and therefore it is difficult to plan outcome evaluation studies that can be kept relevant to changing management issues. However, lack of attention to outcome also reflects the lack of experience and skill on the part of evaluators and program managers in formulating useful outcome studies. Both of these limiting factors probably account for the tendency toward specific, decision-oriented outcome studies being undertaken only in relatively large programs with well developed program evaluation resources. We expect this pattern to continue; outcome studies will be undertaken mostly in larger programs, but almost all programs will learn to make effective use of routine outcome monitoring.

Evaluation of the community impact of programs is still almost nonexistent. This deficiency is significant when viewed in the context of one of the major crises in contemporary service planning and management. This crisis stems from our failure to integrate the human services at the community level, and our failure to present to funders, and to the public, an understandable rationale for the present pattern and cost of the total human service system. A first step would be to develop a workable framework of objectives for regional service systems, in terms that would allow direct monitoring of service adequacy. We also need a widely acceptable method of measuring the degree of human services integration in a community. Such measurement seems possible in principle, although even if we succeed in measuring the degree of service integration it would still be a further research challenge to study the relationship of service integration to service effectiveness and community impact. As we move toward the time when effective regional planning and evaluation will be possible, evaluators can contribute to service integration by laying the groundwork in their own organization. This can be accomplished by developing methods of evaluating service integration, as well as developing linkages between functional elements or components within their organization. We also encourage evaluators to support the process of regional planning, and thereby attend to the linkage of their own agency with the rest of what some have called the current "nonsystem" of human services.

Community planning and service integration can be conceptualized as essentially process measures and methods within the community impact focus of program evaluation. Beyond the process measurement issue are the problems of determining the community impact effects of a specific program or a configuration of programs. The interdependency of variables within communities and current methodological limitations preclude meaningful analysis of actual community impact at this time. The needed developmental work will most likely include comparisons among sets of care systems in several different communities, with multiple measures of the characteristics of communities, of care systems, of populations at risk, and of clients before and after various service episodes.
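
Routine outcome monitoring of the kind described earlier in this section is, at its simplest, a matter of recording one global rating at intake and again at follow-up and summarizing the change for each program component. A minimal sketch, with hypothetical unit names, rating scale, and scores, shows how little machinery this routine form of monitoring requires:

    # Illustrative sketch only; unit names, scale, and scores are hypothetical.
    # Routine outcome monitoring: mean change on a single global rating
    # (lower = less distress), tallied by program unit.
    records = [
        {"unit": "day_treatment", "intake": 6, "followup": 3},
        {"unit": "day_treatment", "intake": 5, "followup": 5},
        {"unit": "outpatient", "intake": 4, "followup": 2},
        {"unit": "outpatient", "intake": 7, "followup": 4},
        {"unit": "outpatient", "intake": 5, "followup": 6},
    ]

    def mean_change_by_unit(records):
        """Average improvement (intake minus follow-up rating) and caseload per unit."""
        changes = {}
        for r in records:
            changes.setdefault(r["unit"], []).append(r["intake"] - r["followup"])
        return {unit: (sum(c) / len(c), len(c)) for unit, c in changes.items()}

    for unit, (change, n) in mean_change_by_unit(records).items():
        print(f"{unit}: mean improvement {change:+.1f} points over {n} clients")

Even a tabulation this crude raises questions that a time-limited outcome study can then pursue; it does not by itself show that one unit is more effective than another.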

MAJOR EVALUATION TOOLS

Many new evaluation methods are being developed to make evaluation tasks easier, and as a result to make useful evaluation available to an ever widening group of programs. In this volume we presented three important methodological areas: information systems, need assessment, and outcome evaluation.

INFORMATION SYSTEMS

If one had to pick the most important technical development in the evaluation field in recent years, it would probably be the concept of the integrated management information system. At this point within the text the reader should realize that one cannot install some ready-made computer package from a mail-order catalog and expect to have a useful tool. An integrated information system within an organization must be achieved, not simply installed. Furthermore, information procedures drift into misalignment and obsolescence if not actively maintained. This is true of every organization's information system, and every organization has one, regardless of the organization's size or whether the organization uses a computer. The important tools now available are the concepts and design procedures that can be employed by the evaluator to review an organization's management information practices to determine whether the attained level of information capability meets internal and external information requirements as cost-effectively as possible. Evaluators sometimes feel they are distracted from "real" program evaluation by the organization's urgent needs for improved information procedures, but, as Lund (1) has reported, such "diversions" can lead to valuable improvements in the organization's management capability while also providing valuable lessons for the evaluator.

NEED ASSESSMENT

Need assessment is the effort to determine the appropriate mix of human services for a community and to detect important gaps in those services. The program evaluator often is called upon to help with need assessment, and indeed there are some relatively simple approaches that have been found useful in program planning and in strengthening requests for funding. Chapters 9 and 10 discussed available methods such as social and health indicator analyses, social area surveys, and community group approaches. No one of these
approaches can provide definitive conclusions about needs, but taken together they blend available methods for gathering data with essential political processes to converge on informed decisions about service priorities. Many of the approaches are useful across the entire range of human services planning. There is an emerging consensus that need assessment and planning should be coordinated within geographic regions for all health and other human services, rather than expecting narrowly defined categorical programs with their vested interests to plan to fill gaps or avoid excessive duplication. New legislation and organizational efforts are attempting to strengthen regional planning, and the evaluator can sometimes gain useful help on need assessment from such agencies.

OUTCOME EVALUATION

Outcome assessment tools have also seen major development in recent years. This reflects a growing consensus about the service objectives in many types of human service programs. Earlier arguments about which outcome dimensions represent the "real" primary objectives have been replaced with the recognition that programs have multiple objectives and the evaluator must attend to them all. For example, mental health programs providing services to patients who display irrational acts are concerned with reducing symptoms, maintaining the social functioning of the person, treating the patient in the least restrictive environment possible, and protecting the community from destructive behavior. There may be conflict about the relative importance of various goals, but multiple perspectives are no longer seen as an insuperable barrier to evaluation—these are facts of life with which the evaluator must cope. Only the total lack of objectives leads to disaster. The competent evaluator strives to examine the range of service outcomes that are important to all groups that have a vested interest in the program.

We do not mean to imply that the major outcome measurement problems have already been solved. One needs to be especially cautious when interpreting any outcome data that do not comprehensively sample the relevant dimensions of outcome and the relevant followup time periods. Clients, especially those treated for psychological problems, may become more aware of problems during the course of treatment. Therapies often include techniques that promote self-recognition, stimulate awareness of conflicting feelings, and encourage problem-solving thought. Such therapies, if successful, may temporarily increase the level of subjective distress in self-reports by clients, or influence ratings by an observer listening to a client's more insightful posttreatment descriptions. For example, some clients defensively avoid recognition of difficult issues before treatment, face issues and even levels of feeling more squarely during treatment, and on rating scales or symptom checklists report more anxiety and depression after rather than before service delivery. On the basis of individualized treatment goals or data on other aspects of the client's functioning, evaluators may be able to distinguish such instances from cases where increased distress is an undesirable outcome. Similarly, a client may attain a treatment goal of becoming more assertive and less compliant with previously dominant relatives or friends. Ratings from these significant others
may then give a false negative impression, or seemingly positive reports from relatives may sometimes reflect the failure to attain such a treatment goal. Client satisfaction can also be misleading. Ratings of satisfaction often seem unrealistically high. Nevertheless, a tough service provider who really confronts a client may cause the client some anguish, yet be very helpful in the long run. In the meantime such clients may be angered and rate themselves less satisfied compared to clients who are comforted but helped less in other ways. Finally, both positive changes and relapse may take place only later, after ratings and reports are accumulated, unless long range followup is undertaken. These measurement issues require careful attention when major program decisions are to be based on an outcome study.

As agreement evolves regarding the major outcome dimensions and how to measure them, evaluators are moving toward an even more difficult issue—how to combine ongoing outcome monitoring and time-limited outcome studies to obtain the most cost-effective improvement in program management. We see no consensus on this evaluation strategy question as yet, and we encourage evaluators to report their successes and failures in utilizing outcome information.
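
The caution about multiple perspectives can be made concrete with a small sketch. Assuming hypothetical change scores from a client self-report symptom checklist, an individualized goal-attainment rating, and a rating of social functioning, increased reported distress is treated as a signal for review rather than as automatic evidence of a poor outcome:

    # Illustrative sketch only; field names, scales, and scores are hypothetical.
    # Outcome interpretation: classify the pattern across several indicators
    # instead of relying on one score alone.
    clients = [
        {"id": "A", "distress_change": -8, "goal_attainment": 4, "functioning_change": 6},
        {"id": "B", "distress_change": 5, "goal_attainment": 4, "functioning_change": 3},
        {"id": "C", "distress_change": 6, "goal_attainment": 1, "functioning_change": -2},
    ]

    def interpret(client):
        """Return a provisional reading of one client's outcome pattern."""
        distress_up = client["distress_change"] > 0
        other_gains = client["goal_attainment"] >= 3 or client["functioning_change"] > 0
        if not distress_up:
            return "less reported distress"
        if other_gains:
            return "more reported distress, but goals attained or functioning improved; review before judging"
        return "more reported distress with no offsetting gains; possible undesirable outcome"

    for client in clients:
        print(client["id"], "->", interpret(client))

The cutoffs and classification rules here are purely illustrative; the substantive judgment about any individual case remains with the clinician and the evaluator.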

THE ROLE OF THE EVALUATOR

The organizational role of the evaluator has received much attention in the preceding chapters. There seems to be a growing consensus that evaluators make their most effective contribution when they are ongoing members of the decision-making team. The need for evaluator participation at all important administrative levels implies that program evaluation resources are needed at each major administrative level, rather than expecting managers to rely solely on evaluation carried out at other administrative levels. The need for ongoing, direct evaluator participation in administration also implies that managers usually will want to develop an internal evaluation staff oriented to the specific evaluative needs of their program level rather than contract exclusively with outside evaluators. Much current evaluation effort is still focused on one-time summative evaluations of temporary grant-funded service projects, while the ongoing bulk of human services goes unevaluated. This distribution of effort does not match our current understanding of how to maximize the impact of evaluation. Evaluators must be close enough to the decision-making process to grasp clearly the needs and problems of management. If evaluative results are not used under these circumstances, then the evaluator is not doing the job effectively. Evaluators at federal, state, and community levels of program management are making some headway in sorting out their roles in relation to each other. The major clarification is coming from a growing acknowledgement of a single organizing principle—evaluators at each level must identify the most cost–effective way to aid management decisions within their own level of program organization. This principle follows from our conviction that an
evaluator is most effective when he or she functions as a part of the decision-making group for which the evaluation work is done. While these assertions may seem obvious, if taken seriously they can help prevent common false starts in designing program evaluation strategies. For example, in a statewide reporting system only relatively simple aggregate information is necessary or usable at the state level for such tasks as portraying to the legislature what the budgetary appropriations are buying for the taxpayer. Yet groups that design such systems easily get caught up in their enthusiasm and curiosity and add additional details "that might be nice to know." The result can be a reporting system that is more elaborate, and therefore more expensive, than is justified.

State or federal evaluators often suggest that broad scale outcome monitoring can be used to compare programs and identify those that are below standard in effectiveness. There are problems in the utilization of such nonexperimental comparisons, and this area needs further development. Nevertheless, states and the federal government do have the responsibility to evaluate major funding programs and regulatory policies. For this latter purpose a routine reporting system will rarely, if ever, be adequate. Instead, planners must formulate the potential policy alternatives, estimate their relative cost-effectiveness on the basis of existing information (here some data from reporting systems may be useful), and then design specific studies to compare the most promising alternatives.

Program evaluation efforts in state level human service agencies can also be directed toward enhancing program evaluation capability at the community facility level. Typical activities include developing evaluation methods and materials, adopting evaluation staffing standards and job descriptions for state institutions, providing consultation and technical assistance, promoting statewide exchange among evaluators, and evaluating local evaluation activities. We have observed several states carrying out this secondary role quite skillfully, but this remains the exception rather than the rule.

AN EVALUATOR OF THE FUTURE

In a more whimsical mood, what can we anticipate about the life of the evaluator of the future? We claim no special gifts as writers of utopian science fiction, but we do have a view of the future that is based partly on current trends in the field, and partly on our own hopes and trepidations. Our purpose is to encourage the reader to envision other possible futures, and choose a potential future that is worth working toward.

Our evaluator of the future will function in an organizational environment where managers and staff are well informed about the role of the evaluator and, based on this knowledge, have well-formed expectations of the program evaluator. Administrators will have experienced the value and the impact of timely management information on their decision tasks, and will expect the management information and evaluation staff to anticipate correctly the occasions and tasks for which these data will be needed.


Creative managers will imagine many kinds of information that could be relevant to emerging problems and anticipated decisions. The evaluator can expect to be deluged with information demands that are unpredictable, unorganized, but put forth with great urgency. The evaluator will be expected to quickly evaluate the feasibility and cost of meeting each request, to help the organization leaders prioritize competing information needs, to be able to allocate evaluation effort in a flexible manner, and to meet most of these information needs quickly and with very little expense to the program. The evaluator will be expected to manage this rapidly changing environment without ever losing perspective on the organization's overall operating constraints and its long range objectives. The evaluator will be expected to maintain a core evaluation function that will independently and systematically examine the performance of each program component. From these core activities the evaluator will be expected to come forth with evaluative reviews that are seen by a majority of the staff as thoughtful and fair-minded and yet also seen as directly relevant to key operating difficulties of the organization. The evaluator will be expected to disseminate these reviews so skillfully through participation in organizational management that creative, constructive changes usually follow. The front line staff of the agency of the future will also present different expectations to the evaluator. They will be more accustomed to their unit's productivity and effectiveness being open to public view. They will understand some of the value of this increased visibility, and in a supportive organization they will enjoy the friendly rivalry among work units engendered by the inevitable intra-organizational comparisons. In a nonsupportive organization they may also learn how a poor unit performance record can be used to remove an unpopular supervisor or invoke the wrath of client advocates. The staff will also be accustomed to the routine information needs of the organization because standard data collection procedures will have been in operation for a long time, and will be fairly similar from one organization to another. They will also have experienced competent management information systems in which clerical staff handle every data gathering task that does not require professional judgment, using methods by which data can be recorded simply and quickly, and when once recorded can be retrieved, corrected, or supplemented without wasteful redundancy. They will have come to depend on a variety of information system products to provide handy support and assistance in their work. These products will include client rosters, scheduling assistance, accurate historical records on clients, and summaries of staff effort and productivity. Having experienced this level of information capability, service providers and managers will be impatient with evaluators who allow information systems to drift into cumbersome redundancy and useless irrelevance. We therefore expect evaluators of the future to be more productive, even if no less frustrated or immersed in organizational struggles than they are today. The evaluator's initial education and beginning work experience will have fostered the necessary skill required for rapid recognition of common
decision-making situations that call for various types of evaluation activities. The evaluator will have gained carefully supervised experience in working with program managers to negotiate and execute specific evaluation tasks. These skills will be aided by powerful new tools to allow greater evaluator productivity and impact.

In larger organizations, computer-based management information systems will have been established and will be taken for granted, but their maintenance and revision will be a persistent problem. The great depth of management information system experience in similar organizations will aid the evaluator, and it should be as easy to call in a consulting systems analyst as it now is to call in an accountant to set up or revise an accounting system. Often the same consultant will do both, in fact. New developments in computer systems will make computer methods cost-effective even in smaller organizations. Information processing hardware will be an integral part of every office, a byproduct of the telephone on the desk, the appointment book, the dictating machine, and the typewriter. Statistical and cost information will be solicited and checked as a continuous byproduct of the ongoing flow of activities. Special attention will be required only when routinely expected information is inconsistent or missing. Computerized editing routines will call for clarification at the time the data are being gathered from the client or other source, so that omissions and errors can be corrected efficiently, and staff will be continuously trained to execute these routine tasks accurately.

The highly integrated nature of future management information systems will also enable the evaluator to modify the way information is organized and reported without affecting the activities of the staff who initially record the data. When statistical and cost data are gathered interactively by information hardware, fewer paper forms will be used, so that the addition or deletion of a specific data item will involve reprogramming the system rather than reprinting the form. Reprogramming may not be a simple task, but it will be a task that many future evaluators will be trained to carry out. Simplified management information system programming languages will require a relatively unsophisticated level of programming skill that will be attainable by most evaluators. The same complex negotiation will be needed in order to determine the need for new data collection, and to orient staff to the change, but the interactive system will simplify the task of training the staff for reliable performance. Flexible data retrieval and analysis for unanticipated needs will be at the evaluator's fingertips. This increased flexibility of the information system will enable the evaluator to respond almost as quickly and economically as decision makers will have come to expect.

More difficult professional judgments will still be no easier, of course, but quality assurance procedures will provide valuable in-service training. When service activities take advantage of improved information technology for routine record keeping, it will be easier to help professional staff avoid careless oversights and inadvertent misjudgments. As service decisions enter the information system (or fail to enter when expected) they will be compared with well developed utilization review standards. This will allow the professional staff
of the organization to monitor service decisions, and to ask for explanations of unusual actions by their colleagues and trainees. Statistical summaries of service practices will be readily available from the automated portion of the client's clinical record, so that a utilization review committee, through its clinical care evaluation studies, can provide vigorous leadership in shaping the system of services delivered by the organization.

A highly developed "language of accountability" will change the relationship of future community programs to funding and regulatory agencies. Organizations will define the services they offer not primarily by their service process, as at present, but by reference to a standard lexicon of problem definitions and service objectives. Even very unusual programs will be defined by the degree to which these standard terms fit the program, and where they differ. Since entire regional human service systems of the future will be planned as an integrated mosaic of problem definitions and service objectives, this language will describe how each organization's services contribute to attaining the overall objectives of the system. These objectives will focus not just on the outcome, productivity, and cost of individual services—they will also include impact objectives that are defined in relation to overall estimated community needs.

This regional "language of accountability" will be possible because evaluators already will have developed and tested its components within many community-based human service organizations. Since evaluators typically work in conjunction with managers of multicomponent organizations, rather than single-purpose service providers, they already face a small scale version of the regional planning task. Every human service organization will have become accustomed to defining itself in terms of target problems, outcome objectives, and productivity objectives. Each organizational component will be evaluated against its expected contribution to these organizational service objectives. Managers and professionals at every level will support this accountability structure because they will have experienced the way in which it protects them from the external imposition of inappropriately rigid structure and process constraints. The structure will facilitate planned change because it will provide a common language for the proponents of new approaches, the defenders of established services, and those who must allocate program resources. This language will allow more open and informed discussion of the promise and the performance of both innovative and traditional service components.

The evaluator will have a wealth of technical assistance available as choices are made about the methods to be used in measuring the attainment of program objectives. Vigorous psychometric research will have been attracted by the established lexicon of standard objectives, and will shape its continuing revision. We speculate that the Human Services Division of the National Bureau of Standards (formerly the National PSRO Network), the Joint Commission on the Accreditation of Human Services (formerly JCAH), and several university-based research groups will have accumulated massive data banks of program and community characteristics in relation to attainment of various subsets of the standard objectives. By following a standard
measurement protocol, an evaluator will be able to generate a data set that provides a comparison with hundreds of similar programs in similar communities, and allows an assessment of the areas of comparative strength and weakness in his or her own program. We can also imagine the Evaluation Research Society (ERS) periodically voting to reaffirm its stand against the use of these comparisons in decisions about program funding, pointing out the importance of community need and equity of care issues in allocating service funds.

[News Item: "In a press conference ERS President Dr. Eve Alooator warned that the proliferation of so-called 'evaluation planning' firms was symptomatic of the possible decay of the whole standards comparison system that has fostered so many improvements in the human services in the last decade. These firms offer, for a tidy sum, to select the standards comparison strategy that will make your program look best. Dr. Alooator's assertions prompted a vigorous rejoinder from the chief of one of the university-based systems. He pointed out that the method used by these new firms was published by their research group more than five years ago, and is incorporated in their standard analysis and report system to identify submitted data sets that probably show the source organization in an unusually good or poor light. This reporter suspects that the controversy will continue, however, since . . .

["In that same press conference, President Alooator expressed concern about the weakened professionalism of working evaluators. She cited as evidence for this trend the fact that only 27,000 ERS members dialed in to any of the closed-circuit sessions of the continuous ERS national convention during November, down from a 43,000 monthly peak in that same month four years ago. While acknowledging that more innovative symposia and more intimate conversation hours with nationally known figures might bring up the ratings, Alooator cited a number of other surveys to support her concern about decreased professionalism. It seems that many program directors no longer feel that the six-year DPE degree preparation is really essential. She cited one program chief, Dr. Adam Inistrator, as asserting that any capable program staff member can pick up the necessary skills from the programmed instruction that comes with a good management information package, and use the consulting network as a backup. To use the consulting network an authorized staff member can dial 800-PEV-xxxx (where xxxx is a problem code) and talk to a relevant consultant. Even with one or two calls each week the consulting fees only come to a fraction of the cost of hiring a doctoral level evaluator. President Alooator predicted dourly that the recent antitrust decision in favor of the consulting network will spell the end of professional program evaluation as we know it today."]


Actually, the future will probably be sufficiently troublesome without additional help from us. For the present, program evaluation remains a challenging task that cannot be learned from a recipe book or a computer terminal. There is yet to emerge a pattern of program evaluation that one can say with confidence will return its investment in improved program performance. Evaluators have a few years to demonstrate their value. Some of the tools and concepts now being developed show promise when they are used by unusually creative program evaluators and managers. The challenge is to develop routine competence in program management, rather than occasional bursts of light (or heat).

CONCLUSION

This overview of the present and the possible future of the art of program evaluation suggests six goals to be addressed in the current decade.

• Employ competent program evaluators routinely at all major federal, state, and community levels of management in the human services
• Coordinate need assessment regionally and clarify system objectives for human services as a whole in each geographic region
• Clarify the outcome and process objectives of every component of a regional human service system, including the relation of these objectives to overall system objectives
• Maintain vigorous ongoing monitoring, within each major system component, of that component's attainment of outcome and process objectives, and carry out this monitoring in ways that contribute directly to the internal management capability of that component of the human service system
• Develop better methods for informative comparisons of program goal attainment across relevant reference groups of similar programs
• Develop methods to monitor regional human services integration, as well as other system process and impact objectives, and methods for making relevant comparisons among sets of regions

The first goal is being approached as every component of the human services system develops requirements for evaluation tasks and personnel, and as colleges and universities begin to train individuals to fill these new positions. Our final chapter examines several issues faced by these educational programs. The next three goals must continue to be pursued by evaluators working in each community-based program, although technical support provided by state and federal evaluators and their grantees and contractors can be helpful. In contrast, the primary responsibility for developing inter-program and interregional comparison methods may fall to state, federal, and quasi-private regulatory, accreditation, and quality assurance agencies, with the collaboration of independent research groups. While evaluators in community-based programs
are in a poor position to develop inter-program comparison methods, they must contribute to the planning of beginning comparison attempts, if such methods are to be broadly useful.

REFERENCE NOTE

1. Lund, D. A. Mental health program evaluation: Where do you start? Unpublished manuscript, 1976. Paper presented to the Florida Mental Health Program Evaluation Training Workshop, Tampa, Florida, July 1976. (Available from the Bureau of Program Evaluation, New York State Department of Mental Hygiene, 44 Holland Avenue, Albany, New York 12229.)