The ANNALS of the American Academy of Political and Social Science 655 (September 2014): 163–181. DOI: 10.1177/0002716214541042


Performance Funding for Higher Education: Forms, Origins, Impacts, and Futures

By Kevin J. Dougherty, Sosanya M. Jones, Hana Lahr, Rebecca S. Natow, Lara Pheatt, and Vikash Reddy

Since the 1970s, federal and state policy-makers have become increasingly concerned with improving higher education performance. In this quest, state performance funding for higher education has become widely used. As of June 2014, twenty-six states were operating performance funding programs and four more had programs awaiting implementation. This article reviews the forms, extent, origins, implementation, impacts (intended and unintended), and policy prospects of performance funding. Performance funding has become quite widespread with formidable political support, yet it has also experienced considerable implementation vicissitudes: many programs have been discontinued, and even those that have survived have encountered substantial obstacles and unintended impacts. Although evidence suggests that performance funding does stimulate colleges and universities to substantially change their policies and practices, it is as yet unclear whether performance funding improves student outcomes. The article concludes by advancing policy recommendations for addressing the implementation obstacles and unintended side effects associated with performance funding.

Keywords: accountability; performance funding; performance accountability; state policy; higher education policy

Since the 1970s, federal and state policy-makers have become increasingly concerned about improving the performance of higher education institutions. In their quest for improved performance, state policy-makers have often turned to performance funding.


Performance funding connects state appropriations directly to a college's performance on outcomes such as student retention, graduation, transfer, and job placement. As of June 2014, twenty-six states were operating performance funding programs, four more had adopted them and were awaiting implementation, and several more were considering such a program (Dougherty and Reddy 2013; Friedel et al. 2013; National Conference of State Legislatures 2014).

To inform understanding of the widespread interest in performance funding, this article reviews its forms, extent, origins, implementation, impacts (intended and unintended), and policy prospects. This review documents that performance funding began as a fairly isolated policy innovation but became quite widespread with formidable political support. Yet at the same time, performance funding has experienced considerable implementation vicissitudes: many programs have been discontinued, and even those that have survived have encountered substantial obstacles. And although evidence suggests that performance funding does stimulate colleges and universities to substantially change their policies and practices in a quest for better student outcomes, it is unclear whether performance funding improves student outcomes and can avoid major negative side effects.

To address these issues, we draw on empirical research that we have conducted and on studies conducted by other scholars. We conclude by advancing policy recommendations for addressing the implementation obstacles and unintended impacts associated with performance funding.

Leadership in Practice,” can be found in the summer 2013 issue of The Annuals of the Next Generation. Hana Lahr is a doctoral student and senior research assistant at the Community College Research Center, Teachers College, Columbia University. Recent publications include a CCRC working paper, titled “Envisioning Performance Funding Impacts: The Espoused Theories of Action for State Higher Education Performance Funding in Three States.” Rebecca S. Natow is a postdoctoral research associate with the Community College Research Center at Teachers College, Columbia University. She recently coauthored articles published in the Teachers College Record on the politics of higher education performance funding policies. Her dissertation examined the politics of the federal higher education rulemaking process. Lara Pheatt is a doctoral student and senior research assistant at the Community College Research Center at Teachers College, Columbia University. Her publications include “Reshaping the College Transition: Early College Readiness and Transition Curricula in Four States” and “Envisioning Performance Funding Impacts: The Espoused Theories of Action for State Higher Education Performance Funding.” Vikash Reddy is a senior research assistant at the Community College Research Center and a doctoral candidate in education policy at Teachers College, Columbia University. He is a member of the American Educational Research Association and a Teach for America alumnus. He is also the coauthor (with Kevin Dougherty) of Performance Funding for Higher Education. NOTE: We wish to thank Lumina Foundation for its support for the research we conducted and report here. It is not responsible for any views expressed in this article. We wish to thank Laura Perna and Michael McLendon for their very careful reading of and insightful comments on this article.


The Origins and Forms of Performance Funding

Whether taking the form of formula funding or base plus/minus funding, state funding for higher education traditionally has been driven by input measures such as student enrollments and process measures such as instructional costs and the size of the institutional plant (Lingenfelter 2008; Parmley et al. 2009). Performance funding differs in that it ties funding (via a formula) to institutional outcome variables such as graduation numbers, job placement rates, and indicators of intermediate progression such as student retention and completion of developmental education (Burke 2002a, 2005; Dougherty and Reddy 2013).

The first wave of performance funding adoption began in Tennessee in 1979 and ended in 2000, when a recession struck and caused a sharp decline in new program adoptions and the discontinuation of many existing programs. It seemed as if performance funding had been but a passing fancy. However, beginning in 2007, another wave of performance funding arose.

The first-wave programs typically involved a bonus for higher education institutions above base state funding. The performance funding bonus was relatively small, between 1 and 6 percent of base state funding for public higher education (Dougherty and Reddy 2013; Dougherty et al. 2013). This policy design has been dubbed "performance funding 1.0" (see Dougherty and Reddy 2013).

The second wave of performance funding began in 2007, with two-thirds of these new programs being re-adoptions of discontinued first-wave programs. In about two-fifths of these wave 2 programs, performance funding did not take the form of a bonus on top of regular state funding. Rather, performance funding dollars were embedded in the base state funding for higher education. This has been dubbed "performance funding 2.0" (Dougherty and Reddy 2013). The proportion of state funding tied to performance outcomes in wave 2 programs usually is higher—sometimes much higher—than that for wave 1 programs: typically, 5 to 25 percent of total state funding for public higher education. For Tennessee's wave 2 program, student outcome indicators drive as much as 85 to 90 percent of state funding for public higher education operations (Dougherty and Natow forthcoming; Dougherty and Reddy 2013; Tennessee Higher Education Commission 2011, 2012). (A stylized illustration of the two designs follows below.)

What social, economic, and political factors led states to adopt performance funding? To address this question, we distinguish between the origins of the first wave (1979 to 2000) and second wave (2007 to present) of performance funding adoptions.
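To make the contrast between the two designs concrete, the following minimal sketch computes a hypothetical institution's allocation under each. The dollar amounts, funding shares, and single composite performance score are illustrative assumptions, not any state's actual formula.

```python
# Stylized comparison of the two funding designs; all numbers are
# hypothetical and do not reproduce any state's actual formula.

BASE = 100_000_000  # hypothetical base state appropriation for one institution

def pf_1_0(base, score, bonus_share=0.03):
    """Wave 1 ("performance funding 1.0"): a small bonus, here 3 percent
    of base funding, is paid on top of the base; score is in [0, 1]."""
    return base + base * bonus_share * score

def pf_2_0(base, score, embedded_share=0.15):
    """Wave 2 ("performance funding 2.0"): a share of the base itself,
    here 15 percent, is at risk and earned through performance."""
    return base * (1 - embedded_share) + base * embedded_share * score

for score in (1.0, 0.5):
    print(f"score {score:.1f}:  PF 1.0 = ${pf_1_0(BASE, score):,.0f}"
          f"   PF 2.0 = ${pf_2_0(BASE, score):,.0f}")
```

Under the 1.0 design, weak performance merely forgoes a small bonus; under the 2.0 design, it cuts into base funding itself, which is why the stakes for institutions are so much higher.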

The first wave

To explore the origins of wave 1 programs, we draw primarily on four studies: Dougherty et al. (2013); Dougherty and Natow (forthcoming); McLendon, Hearn, and Deaton (2006); and Gorbunov (2013). McLendon and colleagues (2006) conducted an event history analysis of the predictors of the adoption of performance funding across the United States between 1979 and 2002. Gorbunov (2013) also conducted an event history analysis, examining the predictors of adopting performance funding between 1979 and 2009.


In the discussion that follows, we focus on Gorbunov's findings about the initial establishment of performance funding programs—which largely occurred in the years 1979 to 2000—and not on his analysis of re-adoptions, which typically came later.

Dougherty et al. (2013) analyzed the origins of performance funding in Florida, Illinois, Missouri, South Carolina, Tennessee, and Washington between the years 1979 and 1997. The study draws on data from more than 140 interviews with numerous political actors in the six states, including state and local higher education officials, governors and gubernatorial staff, legislators and legislative staff, business leaders, scholars, and consultants. The analyses by Dougherty and colleagues (2013) were guided by the advocacy coalition framework (Sabatier and Weible 2007), policy entrepreneurship theory (Kingdon 1995; Mintrom and Norman 2009), and policy diffusion theory (Berry and Berry 2007). According to the advocacy coalition framework, policy change is driven by coalitions involving actors both inside and outside of government who are drawn together by shared "deep core" beliefs about important social values, the proper role of government, and the significance of different social groups, as well as "policy core" beliefs about the seriousness of a particular social problem, its causes, and its most appropriate solutions (Sabatier and Weible 2007).

Dougherty and colleagues (2013) found two main political coalitions driving the adoption of wave 1 performance funding in the six selected states. The first consisted of state higher education coordinating boards and public higher education institutions (Dougherty et al. 2013). The primary aim of this coalition was to secure new state funds for higher education in a time of increased resistance to taxation and criticism of higher education's effectiveness and efficiency. In four of the six cases of performance funding adoption, state higher education officials (especially coordinating board officials) were noteworthy supporters of performance funding. In three of those states, the officials were the primary policy entrepreneurs. All six of the study states that adopted performance funding had state coordinating boards and sectoral or institutional (but not consolidated) governing boards. This is in agreement with Gorbunov's (2013) finding that the presence of a coordinating board is a positive predictor of adoption of performance funding and McLendon, Hearn, and Deaton's (2006) finding that the absence of a consolidated governing board is a significant predictor.1

The second main advocacy coalition in favor of wave 1 performance funding programs consisted of legislators, governors, and state business leaders. The key actors were legislators, particularly Republicans, who saw government as often inefficient and likely to benefit from the injection of more business-like methods (Dougherty et al. 2013; see also Burke 2002a). McLendon, Hearn, and Deaton (2006) and Gorbunov (2013) also found that the percentage of a state's legislature that is Republican is a significant predictor of the adoption of performance funding.
Although state legislators were the most vigorous actors within this second advocacy coalition, business exerted substantial indirect influence by advocating for more accountability of government entities through the employment of
business methods and market controls. This demand by the business community contributed to an ideological environment that rendered performance funding an appealing option for many elected officials (Dougherty et al. 2013; also see Dougherty 1994 on state relative autonomy theory).

The advocates of performance funding in the six states studied by Dougherty et al. (2013) received ideas about performance funding from several sources, including previous policymaking efforts within their states, similar policies in other states, discussions within national policy organizations, and recommendations from external consultants. Policy-makers were clearly attuned to the behavior of other states, as they frequently mentioned Florida, Missouri, Ohio, and Tennessee as examples of states in which like policy activity had occurred. As found by McLendon, Hearn, and Deaton (2006) and Gorbunov (2013), the state models mentioned were not necessarily neighboring states. For example, state policy-makers in Illinois mentioned Florida, Ohio, Tennessee, and Missouri (Dougherty et al. 2013). Factors other than contiguity appear to influence state diffusion, including ideological/cultural affinities and prominent mention by national policy organizations (for more, see Dougherty and Natow forthcoming).

National policy organizations such as the National Conference of State Legislatures and the Southern Regional Education Board influenced policy-makers in two of the states (Dougherty et al. 2013; see also Balla 2001; McLendon, Hearn, and Deaton 2006). Outside consultants played an influential role in four of the six states. In Illinois, for example, an external consultant worked closely with the director of the Illinois Advisory Committee on a performance-based incentive system. This consultant had played an important role in the development of performance funding in Tennessee and consulted with several other states (Dougherty et al. 2013).

Certain circumstances provided political openings—termed "policy windows" by the policy entrepreneurship approach (Kingdon 1995; Mintrom and Norman 2009) and "shocks" by the advocacy coalition framework (Sabatier and Weible 2007)—for performance funding to be put on the decision agenda for state governments. In five of the six states that Dougherty et al. (2013) studied, a change in government control—usually involving Republicans winning control of either chamber of the legislature or the governorship—provided an opening to promote the concept of higher education performance funding. For example, in Washington, state-level elections in 1996 gave Republicans control of the state House of Representatives. This new Republican majority—much like the Republican majority that arose in Congress following the 1994 election—favored government spending cuts, tax reductions, and more accountability. Another opening came with the spread of an antitax mood in Florida, Missouri, Tennessee, and Washington that made it expedient to defend more spending for higher education by tying it to increased accountability (Dougherty et al. 2013). At the same time, however, event history analyses have shown that other circumstances—including changes in economic growth rates, rises in public tuition levels, and proximity of legislative or gubernatorial elections—are unrelated to performance funding adoption (Gorbunov 2013; McLendon, Hearn, and Deaton 2006).
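For readers unfamiliar with the method, the following is a minimal sketch of the discrete-time event history setup such studies employ: one row per state-year, states leaving the risk set once they adopt, and a logit model of the annual hazard of adoption. The file and variable names are illustrative assumptions, not the cited authors' actual data or specifications.

```python
# Minimal discrete-time event history sketch; the file and column names are
# hypothetical, not the datasets or specifications of the cited studies.
import pandas as pd
import statsmodels.formula.api as smf

# One row per state-year; "adopted" is assumed to equal 1 only in the year
# a state first adopts performance funding and 0 otherwise.
panel = pd.read_csv("state_year_panel.csv").sort_values(["state", "year"])

# Keep each state in the risk set only up to and including its adoption year.
panel["prior_adoptions"] = panel.groupby("state")["adopted"].cumsum() - panel["adopted"]
risk_set = panel[panel["prior_adoptions"] == 0]

# Logit model of the annual hazard of adoption; positive coefficients raise
# the odds that a still-at-risk state adopts in a given year.
hazard = smf.logit(
    "adopted ~ republican_share + coordinating_board"
    " + consolidated_governing_board + econ_growth + tuition_growth",
    data=risk_set,
).fit()
print(hazard.summary())
```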


The second wave

To identify the origins of the second wave of performance funding adoptions, we draw on a study examining the adoption of new performance funding programs in Indiana (2009), Ohio (2009), and Tennessee (2010) (Dougherty, Natow et al. 2014). In all three cases, the programs have a performance funding 2.0 design, in which performance funding metrics are embedded in base state funding. The study drew principally on interviews with more than fifty political actors, including state higher education officials, senior administrators at higher education institutions, governors and advisors, legislators and staff, business leaders, consultants, and researchers.

The origins of these wave 2 programs were markedly different from those of the wave 1 programs in four regards: the role of governors; the influence of outside actors; the motivation of the state higher education coordinating boards; and the impact of the economy.

While state higher education boards continued to play an important role, the role of governors was considerably greater than in the case of the wave 1 adoptions (Dougherty, Natow et al. 2014). Governors set the agenda by requesting proposals from the coordinating boards for new approaches to state funding of higher education and by strongly endorsing the responses they received. For example, Tennessee Governor Phil Bredesen (Democrat, 2003–2011) catalyzed the state's adoption of a performance funding 2.0 program when he asked the Tennessee Higher Education Commission (THEC) for ideas to be fed into a special legislative session he was calling on education policy reform. As it happened, the THEC had already been considering increasing the proportion of state funding tied to performance indicators and moving those indicators into the base state formula for funding higher education. The governor took up THEC's proposal and secured passage in 2010 of the Complete College Tennessee Act, which mandated the development of a new state funding formula (Dougherty, Natow et al. 2014).

Party affiliation may have played a role in the development of the wave 2 programs, particularly those taking a performance funding 2.0 form. Since 2007, performance funding programs have been adopted (or re-adopted) by at least nine states that had Republican governors at the time (Arizona, Indiana, Louisiana, Mississippi, Nevada, New Mexico, North Dakota, Pennsylvania, and Texas) and four states with Democratic governors at the time (Massachusetts, Minnesota, Ohio, and Tennessee).

Policy and philanthropic organizations external to a given state appear to have been important sources of ideas for wave 2 programs. National policy organizations such as Complete College America and philanthropic organizations such as the Lumina and Gates Foundations played a considerable role in the development of wave 2 programs (Dougherty et al. 2013; Dougherty, Natow et al. 2014; Dougherty and Natow forthcoming; Lumina Foundation 2011; Social Program Evaluators and Consultants [SPEC] Associates 2012a, 2012b). For example, in Tennessee, Lumina's Making Opportunity Affordable initiative (later renamed "College Productivity") provided important support for the reform of performance funding (Dougherty, Natow et al. 2014; SPEC Associates 2012b; Tennessee Higher Education Commission 2008).


A third difference in origins between the wave 1 and wave 2 programs concerns the views of the state higher education boards. These boards were major actors in the development of both the wave 1 and wave 2 programs in Ohio and Tennessee and the wave 2 program in Indiana (Dougherty et al. 2013; Dougherty, Natow et al. 2014). At the time the wave 1 programs were developed in Ohio and Tennessee, the state coordinating boards saw performance funding primarily as a new source of public funding for higher education in a time of budgetary stringency (Dougherty et al. 2013; Moden and Williford 2002). When the wave 2 programs were adopted in these states, the state higher education boards were concerned more with securing greater responsiveness from higher education institutions in the face of growing demand from governors, legislators, business, and the coordinating boards themselves for more college graduates and more efficiency in the use of state revenues (Dougherty, Natow et al. 2014).

Also important to the adoption of performance funding in Indiana, Ohio, and Tennessee was the Great Recession of 2007 to 2009 and its aftermath (Dougherty, Natow et al. 2014). In all three states, but particularly in Indiana and Ohio, the Great Recession provided perhaps the key policy window for action. An Indiana state higher education official noted:

When we got to 2009, we hit the wall on revenues. There was no "new money" to give. In fact, we were in the process of cutting money to state agencies, but the Commission [for Higher Education] really wanted their performance metrics to matter. And so that's when they got into the base and said, "We're going to reallocate some percentage of the base and use that to distribute money under the performance funding [program]." (quoted in Dougherty, Natow et al. 2014, 35)

Performance Funding Discontinuation

Despite the proliferation of performance funding programs, two-thirds of the states that have adopted performance funding have later discontinued the policy, at least for some period of time. This pattern suggests that—however powerful its support—performance funding faces some important impediments. A number of studies have examined the discontinuation of performance funding programs (Burke 2002b, 2002c; Dougherty, Natow, and Vega 2012; Dougherty and Natow forthcoming; Gorbunov 2013). We focus here on the Dougherty, Natow, and Vega (2012) study of three states that dropped performance funding for a time (Florida, Missouri, and Washington) and a fourth state (Tennessee) that has retained it for more than 30 years.

Dougherty, Natow, and Vega (2012) identified three forces contributing to the abandonment of performance funding: opposition from higher education institutions, economic recession, and loss of political champions. Opposition from higher education institutions reflected their perceptions that they were not sufficiently consulted in developing the policies, the performance indicators used were not valid, high costs of compliance were imposed on colleges, and campus autonomy was diminished.


An important source of institutional unhappiness with performance funding in Washington and Florida was the practice of financing performance funding by holding back a percentage of state appropriations for higher education institutions; institutions had to earn back the holdbacks through enhanced performance. This approach left institutions uncertain about how much of the holdback they would eventually recover.

The recessions of the early and late 2000s played a major role in the discontinuation of performance funding in Missouri and Florida. As state funding for higher education was cut, colleges and universities attempted to protect core state funding. Because performance funding at that time largely took the form of a bonus to base state funding, institutions called for abandoning performance funding and minimizing cuts in base state funding as much as possible.

A final factor contributing to program discontinuation was the loss of program champions. Legislative supporters left leadership positions or even public office entirely, sometimes due to term limits. Governors who advocated for performance funding were succeeded by new governors who had different concerns. And business, which had supported performance funding, became less energetic in its support as time went on (Dougherty, Natow, and Vega 2012; see also Burke 2002b, 2002c).

One seemingly consistent factor working against discontinuation was Republican legislative presence. Gorbunov's (2013) event history analysis found that the higher the percentage of Republicans in the state legislature, the lower the likelihood of termination of performance funding.

Implementing Performance Funding: Policy Instruments

Policy-makers can and do use a wide variety of policy instruments2 to secure the acquiescence of their policy targets, whether those targets are implementing organizations or eventual clients such as students (Anderson 2011; McDonnell and Elmore 1987). What policy instruments have state officials used to secure the intended effects of performance funding, and how have colleges experienced these policy instruments? Have the policy instruments effectively promoted the implementation of performance funding?

To address these questions, we draw on a study conducted in three states (Indiana, Ohio, and Tennessee) of what policy instruments the advocates and implementers of performance funding intended to use and how institutions experienced those instruments (Dougherty, Jones et al. 2014; Reddy et al. 2014). The study draws on data collected from interviews in Indiana, Ohio, and Tennessee with more than fifty state-level political actors (including state higher education board officials, legislators and staff, governors and advisors, business leaders, consultants, and researchers) and 110 senior and middle administrators and department chairs at nine community colleges in the three states.3

In all three states, state-level advocates and implementers of performance funding focused most strongly on financial incentives as the means to secure the intended goals of performance funding (Dougherty, Jones et al. 2014).


This comment by a state higher education official in Indiana is typical of statements from state-level advocates:

The state wants higher graduation rates, the state wants more research dollars coming in, the state wants a more efficient higher ed system, and so they would say, "If you do these things that align with our policies, then we will try and get you some more money for doing that." It's a simple financial incentive model. (quoted in Dougherty, Jones et al. 2014, 13)

The community college respondents were very aware of the financial incentive; it was the policy instrument they most frequently mentioned (Reddy et al. 2014).

Another policy instrument that received substantial attention from state-level advocates and implementers of performance funding was informing college officials about the goals and methods of performance funding to secure institutional understanding and buy-in (Dougherty, Jones et al. 2014). These state efforts to communicate information about the goals and methods of the performance funding program were fairly successful in reaching senior administrators at the colleges, particularly in Tennessee (Reddy et al. 2014). A senior administrator at a Tennessee institution noted:

[It's] through these various committees that they attempt to communicate all of this, and it's also very well-documented out on their website. … One of the policy administrators at the Higher Education Commission was holding a fact-finding set of meetings … and he made clear that their goal was to shift funding from inputs to outputs. And this was all before the new funding formula was hammered out. In my opinion, it's been quite transparent.4

While this outreach effort was relatively successful in reaching senior administrators at the colleges, particularly in Tennessee, the study also found substantial gaps in awareness of the goals and methods of the state's performance funding program within colleges in all three states. Faculty were much less aware than were senior administrators (Reddy et al. 2014; see also Burke 2002a).

Performance funding advocates in the three states also tried to influence the colleges by informing them and the public about how the colleges were doing on the state performance indicators (Dougherty, Jones et al. 2014; Fingerhut 2012). For example, an Indiana state higher education official noted:

Graduation data was much, much more important to the Commission than it was to anybody else. And so we would put that together and share it with institutions and encourage them to share it with their boards, to share it with their faculty. Some did; some didn't. It's really important but difficult thing to do is to get buy-in fairly deep in the whole system. (quoted in Dougherty, Jones et al. 2014, 19–20)

Typically, the intended targets were senior administrators in the colleges and universities, rather than faculty or middle-level administrators (Dougherty, Jones et al. 2014; Reddy et al. 2014). For example, in Tennessee, a senior community college administrator noted: "THEC does do that [report information on how the college is performing versus other colleges] and they send that information to the president, the vice presidents. But I don't know that it goes to the college in general."5


Finally, building up the capacity of colleges to respond effectively to performance funding could be an important adjunct to the other policy instruments for securing performance funding effects. States could remove obstacles to effective responses by colleges by providing funding and assistance to improve institutional research offices, train faculty and staff to analyze student outcome data, and try out new academic or student programs (Dougherty and Reddy 2013; see also Jenkins 2011; Witham and Bensimon 2012). However, state-level advocates and implementers of performance funding in Indiana and Ohio (Tennessee is somewhat of an exception) only weakly espoused building institutional capacity as a means to make performance funding work more effectively (Dougherty, Jones et al. 2014). An Indiana state legislative official noted his lack of interest in building capacity:

It's just like any other business—we don't think that we need to give them money to, for example, come up with a plan to do what they ought to already be doing … we're just assuming that they're refocusing their mission statements and their goals and objectives so that they can come in compliance with this. (quoted in Dougherty, Jones et al. 2014, 23)

As a result, respondents at colleges report little capacity-building support from the state (Reddy et al. 2014). The weak efforts to build the capacity of colleges to respond to performance funding in these three states are noteworthy. As discussed below, lack of institutional capacity to collect and analyze fine-grained student data was cited by college officials as an important impediment to responding effectively to performance funding demands.

Institutional Changes

This section summarizes evidence on the impact of performance funding on institutional policies, programs, and practices, drawing on the review by Dougherty and Reddy (2013) of sixty studies of performance funding impacts in eight states and the study by Natow et al. (2014) of how public colleges and universities responded to performance funding in three states (Indiana, Ohio, and Tennessee). This research suggests that performance funding programs do lead public colleges and universities to make changes to their academic and student services policies, programs, and practices in the hope of improving their performance on state funding metrics.

In response to performance funding, colleges report closing programs with low graduation and job placement rates and dropping required courses that are deemed to unnecessarily hinder graduation. Illustrating a change in degree requirements, some Florida community colleges removed apparent graduation obstacles, such as graduation fees and courses that students found hard to pass.


As noted below, these changes in degree requirements could remove unnecessary obstacles but could also weaken academic standards (Dougherty and Reddy 2013).

Colleges also reported modifying course content and instructional techniques in the hope that students would be more likely to complete courses and graduate. Changes have been particularly common in the area of developmental education (Natow et al. 2014). For example, in response to performance indicators rewarding student movement from remedial education to regular college programs, several community colleges in Washington revamped their developmental education programs by introducing more developmental math course offerings and providing release time for faculty to develop new strategies for developmental math (Dougherty and Reddy 2013).

Studies of performance funding impacts have also found changes in institutional student support policies, programs, and practices in response to performance funding, including modifications in advising and counseling, retention programs, registration procedures, and student aid (Dougherty and Reddy 2013; Natow et al. 2014). Frequent changes have been made to develop student success courses and centers that aid the transition and integration of first-year students and to build electronic advising systems that monitor student progress and mobilize support for students who are not attending class or are falling behind in coursework (Dougherty and Reddy 2013; Natow et al. 2014).

Student Outcomes

Despite their prevalence, it is not yet clear that performance funding programs yield improvements in student outcomes. To be sure, a number of studies have reported that student outcomes such as graduation numbers have risen sharply after the introduction of performance funding in various states (Dougherty and Reddy 2013; Postsecondary Analytics 2013). However, determining the impact of performance funding on student outcomes is difficult. Even if student outcomes improve after the introduction of performance funding, these improvements could be due to a host of other factors, such as rising enrollments (which would also produce rising graduation numbers), changes in state tuition and financial aid policies, other efforts to improve student outcomes (such as initiatives by regional accrediting associations), or institutional decisions to avoid admitting less-qualified students. Multivariate studies with extensive controls for these other factors largely fail to find evidence that performance funding improves graduation and retention (Dougherty and Reddy 2013; also see Rutherford and Rabovsky, this volume).

One partial exception is a study by Tandberg and Hillman (2014). Using a difference-in-differences design to compare states with and without performance funding, the authors controlled for various higher education characteristics (e.g., percent enrolled in the public four-year sector, average in-state tuition for public four-year and two-year institutions, state aid per public FTE student, and state appropriations per FTE student) and various state socioeconomic characteristics (e.g., state population, unemployment rate, per capita GDP, and share of the adult population with a baccalaureate degree or higher).


The authors found no impact of performance funding on changes between 1990 and 2010 in the number of baccalaureate degrees awarded in six comparisons involving lagged and nonlagged effects and three different comparison groups of states without performance funding: all states, contiguous states, and states with coordinating/planning boards (the type most common among performance funding states). However, the authors did find that performance funding had a positive impact on bachelor's degree production 7, 8, and 11 years after the performance funding programs were established in the few states that had programs lasting that long.

In a related study, Hillman, Tandberg, and Gross (forthcoming) studied the impact of performance funding in Pennsylvania. Using a difference-in-differences approach to compare the change in the number of baccalaureate degrees conferred by public institutions in Pennsylvania in a given year to the change in baccalaureate conferrals by similar institutions in other states that did not have performance funding, they estimate ten models involving lagged and nonlagged dependent variables and five different comparison groups. Controlling for a variety of institutional characteristics (e.g., the percent minority of the student body, federal student aid per FTE student, and educational and general expenditures per FTE student), the analyses show impacts of performance funding in three of the ten multivariate models. However, Hillman, Tandberg, and Gross (forthcoming) regard these models as less definitive than four others they estimate involving sets of institutions matched using coarsened exact matching on pretreatment-year patterns (a form of quasi-experimental research design). Performance funding had no impact in these four models. Hillman, Tandberg, and Gross (forthcoming) conclude that performance funding in Pennsylvania has not produced more baccalaureate completions.

Five other studies using multivariate analyses of baccalaureate completions at public four-year colleges also found no effect of performance funding (Fryar 2011; Larocca and Carr 2012; Sanford and Hunter 2011; Shin 2010; Shin and Milton 2004). These studies analyze public four-year graduation rates over the course of six years from college entry, with four using the Integrated Postsecondary Education Data System and other national data and one using Tennessee data. The studies compare rates in states with and without performance funding using a variety of statistical techniques (e.g., hierarchical linear modeling) and controlling for institutional characteristics (e.g., Carnegie classification, percentage of students receiving Pell grants, mean entrance test scores, and instructional expenditures per student). Some also control for state characteristics such as state higher education appropriations and the state unemployment rate (see Dougherty and Reddy 2013, Table A2).

Another multivariate study found that performance funding had no effect on the number of associate degrees awarded by community colleges between 1990 and 2010 (Tandberg, Hillman, and Barakat, forthcoming). Using a difference-in-differences design to compare states with and without performance funding, the authors controlled for essentially the same variables as Tandberg and Hillman (2014). Although performance funding was unrelated to associate degree production in the aggregated analyses, Tandberg, Hillman, and Barakat (forthcoming) found some differences by state: performance funding had a positive effect in four states, a negative impact in six states, and mixed or no effects in nine states.


Although most multivariate studies of performance funding impacts have focused on degree completion, a few (like Rutherford and Rabovsky, this volume) also consider retention. One study found that two-year institutions in states with performance funding had higher rates of one-year retention than institutions in states without performance funding (Larocca and Carr 2012). However, that same study and two others (Huang 2010; Sanford and Hunter 2011) found no effect of performance funding on retention in public four-year colleges. The first two of these studies used national data, while the last used Tennessee data. The studies controlled for many of the same institutional variables, including undergraduate enrollments and student composition (e.g., percentage part time, minority, or receiving Pell grants). Huang (2010) also controlled for two state-level variables: personal income per capita and the state unemployment rate. (For more on these studies, see Dougherty and Reddy 2013, Table A2.)

Overall, available research suggests that the changes observed above in campus academic and student support policies have not resulted in improved graduation and retention numbers, except perhaps in certain states and when performance funding has been operating for several years. However, the multivariate studies discussed above primarily examined wave 1 performance funding programs (which did not provide much funding), not the wave 2 programs that often provide considerably more performance funding. Moreover, even when the studies included a few wave 2 programs, those programs largely had not yet been fully implemented. Before we can reach a definitive conclusion about performance funding impacts, analyses are needed of the much more intensive wave 2 programs in states such as Ohio and Tennessee, particularly after they have been fully phased in and are at the height of their possible impact.
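As an illustration of the difference-in-differences logic these studies share, here is a minimal two-way fixed-effects sketch. The file, column names, and covariates are assumptions for illustration, not the specifications of Tandberg and Hillman (2014) or the other studies cited.

```python
# Minimal two-way fixed-effects difference-in-differences sketch; the file,
# columns, and covariates are illustrative, not the cited studies' data.
import pandas as pd
import statsmodels.formula.api as smf

# One row per state-year; pf_start_year is assumed coded with a sentinel
# (e.g., 9999) for states that never adopt performance funding.
df = pd.read_csv("degrees_panel.csv").dropna()
df["pf_active"] = (df["year"] >= df["pf_start_year"]).astype(int)

# State fixed effects absorb stable differences between states; year fixed
# effects absorb nationwide shocks; standard errors cluster by state.
did = smf.ols(
    "bachelors_degrees ~ pf_active + enrollment + appropriations_per_fte"
    " + unemployment_rate + C(state) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["state"]})
print(did.params["pf_active"], did.bse["pf_active"])  # DiD estimate and its SE
```

The coefficient on the treatment indicator is the estimated effect of performance funding on degree production, under the usual parallel-trends assumption.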

Obstacles to the Effectiveness of Performance Funding

The lack of evidence that performance funding programs improve student outcomes may be attributable to the low levels of funding for these programs (see below for more). The lack of impact may, however, also be attributable to obstacles colleges encounter in attempting to respond to performance funding demands. Based on interviews with 110 senior and middle-level administrators and department chairs in nine community colleges in three states (Indiana, Ohio, and Tennessee), Pheatt et al. (2014) found that the most frequently reported obstacles were student characteristics (particularly lack of preparation for college), inadequate institutional capacity, inappropriate performance funding measures, insufficient state financial support for programmatic innovations, and institutional resistance.


Compared with four-year colleges, community colleges tend to enroll students who are less prepared for college and have greater obligations to family and work (Bailey 2009; Dougherty 1994). According to many interviewees, this reality leads to an increased need for remediation and greater difficulties in retaining and graduating students (Pheatt et al. 2014). An Indiana administrator spoke to this point:

I think the greatest obstacle is trying to account for and accommodate the fact that we have so many students that come in at the remedial level. … [S]o as long as we have such a large number of students coming in at that level, we will struggle with getting students out in two years because they would have to go to school year round.6

The community college respondents commonly felt that performance metrics such as retention rates or graduation numbers did not properly measure success at community colleges and thereby limited the ability of their institutions to perform well on state indicators. Interviewees argued that focusing on graduation numbers minimizes the core mission of community colleges, which is to help students fulfill personal learning objectives that may not include earning a degree. A high-level administrator in Tennessee noted:

I think the biggest obstacle we have is that a lot of our students, again, they come here for reasons that are different than what the formula measures. I mean some students come here only to take a course or two to improve themselves in a particular area.7

Insufficient organizational capacity to address performance funding demands was often cited as an obstacle by respondents in the studies in Indiana, Ohio, and Tennessee (Pheatt et al. 2014). Interviewees pointed to two aspects of organizational capacity: lack of access to necessary data and insufficient analytic capacity. College administrators in Ohio and Tennessee mentioned lack of access to relevant data as an obstacle to the college's ability to respond effectively to the new formula (Pheatt et al. 2014). A senior administrator in Ohio stated that the college's databases do not provide straightforward ways of assessing student progress according to the state performance funding metrics (a stylized sketch of such a progression analysis appears later in this section):

I think most of these data systems were designed around registration as opposed to analytics that can regularly be used by colleges. So, for example, the whole process of tracking a student's progress through the college, doing that in the aggregate to see where there are the drop-off points, after thirty credits or whatever, is really not as straightforward as I think it should be. (quoted in Pheatt et al. 2014, 14)

A more fundamental problem, however, is inadequate capacity to analyze data even when they are available. Several college administrators and faculty members pointed to insufficient institutional research capacity, particularly with regard to identifying gaps in student outcomes, determining their causes, and devising solutions (Pheatt et al. 2014). When asked about obstacles to addressing state performance funding demands, a senior administrator at a Tennessee college noted: "One of the greatest obstacles is the ability to look back at your data. … Sometimes it's hiring the personnel to be able to determine, 'Okay, here are the problems that we weren't aware of to begin with.'"8


Related to perceptions of insufficient institutional capacity is a perception that colleges do not receive sufficient funding from the state government to mount new programs to improve student outcomes. State demands are rising, but state funding is not keeping pace, leaving institutions with little fiscal slack with which to innovate (Pheatt et al. 2014). A senior administrator from Ohio noted:

We all know lots of things that will impact success and they're easy to demonstrate within a cohort of fifty or a hundred students. To scale it up to 10,000 students however, we don't have the resources to do that. … The challenge that all of us are facing is, at the same time that the state is saying to us, "We want different outcomes, we want improved outcomes," the state's also been, over the past decade, reducing the amount per student that it's providing to us.9
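To make the data-capacity gap concrete, here is a minimal sketch of the kind of aggregate progression analysis the Ohio administrator quoted earlier described, computing where an entering cohort drops off across credit thresholds. The file and column names are illustrative assumptions.

```python
# Minimal cohort progression sketch; file and column names are hypothetical.
import pandas as pd

# One row per student in an entering cohort, with total credits earned.
cohort = pd.read_csv("entering_cohort.csv")
for threshold in (15, 30, 45, 60):
    share = (cohort["credits_earned"] >= threshold).mean()
    print(f"reached {threshold:>2} credits: {share:.1%} of the cohort")
# Large drops between adjacent thresholds flag where students stall, the
# "drop-off point" analysis the administrator describes as hard to produce.
```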

A final obstacle mentioned with some frequency was institutional resistance, due to faculty fears that academic standards would be weakened and to faculty members' resistance to change (Pheatt et al. 2014). With regard to the fear of weakened academic standards, a senior administrator in Tennessee noted:

I'm closer to faculty than I am to other groups. And, again, the reason for their objections is because they're concerned that they're going to be expected to pass more students in order to raise the numbers. (quoted in Pheatt et al. 2014, 19)

Unintended Impacts

As with any public policy, performance funding may produce outcomes that are not intended by policy designers. Some unintended outcomes may result from attempts by institutions to meet performance funding mandates: when institutions cannot use legitimate means to attain socially expected goals, they may resort to illegitimate means (see Merton 1968; Mica, Peisert, and Winczorek 2012). In their review of the performance funding literature, Dougherty and Reddy (2013) found references to such unintended impacts as restriction of student admissions, grade inflation and weakened academic standards, unanticipated and unacknowledged costs of compliance, and narrowed campus missions.

In our study of the implementation of performance funding at nine community colleges in three states (Indiana, Ohio, and Tennessee), we asked 110 senior and midlevel administrators and department chairs about unintended consequences that their colleges had encountered in the process of responding to the state's performance funding demands (Lahr et al. 2014). The unintended impacts most commonly mentioned were weakened academic standards and restriction of admissions. These unintended impacts were largely stated as possibilities, perhaps because the wave 2 programs were still being phased in at the time of the study, but statements of already present unintended impacts were not infrequent (Lahr et al. 2014).


The most commonly mentioned unintended impact of performance funding was restricted admission of less-prepared (and less advantaged) students (Lahr et al. 2014). While more selective colleges may not perceive this change as unwelcome, a decided move toward greater admissions selectivity could gravely undermine the open-access mission of community colleges and broad-access four-year institutions. Again, this unintended result was typically stated as a potential impact, although one that might already be taking form. For example, a senior administrator at an Ohio community college stated that the state's new performance funding program was encouraging colleges to consider restricting their open-door admissions:

Oh, there's no question that that conversation is already happening; we've already had it ourselves … we just started to wonder does the new performance-based funding sort of move us away from total access and say, "Listen, maybe we shouldn't be out there trying to recruit the students with the heaviest needs; maybe we ought to be trying to recruit the students that have the highest chances of success." … And you might find that some institutions might start moving away from recruiting some of the weaker students.10

The second most frequently cited unintended impact of performance funding across the nine community colleges was a lowering of academic standards. Interviewees feared that academic standards could be weakened through grade inflation, through moving students too quickly through developmental education, and through degree requirements becoming less rigorous (Lahr et al. 2014). A senior administrator in Indiana forecast that performance funding demands might lead to grade inflation: "It's putting faculty in a position of, you know, it looks to them like the easiest way out is to lower the standards and get people through. And so it's something that's of great concern I think" (quoted in Lahr et al. 2014, 17).

Similarly, college respondents in Ohio and Tennessee reported feeling pressure from their institutional leadership to shorten the amount of time students spent in developmental education in order to move them more quickly toward graduation. Respondents feared that this time reduction would come at the expense of student needs (Lahr et al. 2014).

Finally, respondents in all three states reported pressure to "streamline" requirements so that students could graduate more quickly. While this could be desirable, several respondents feared that necessary courses were being sacrificed. In Indiana, several interviewees discussed an Ivy Tech system-wide initiative to ensure that, once students obtained sixty credits, they were awarded an associate degree, regardless of whether they had switched majors or concentrations (Lahr et al. 2014).

Policy Recommendations

Based on the finding that performance funding has had little apparent impact on student outcomes to date, a clear policy recommendation would be to sharply increase the amount of performance funding. One possible reason that the multivariate analyses discussed above reveal little impact of performance funding on student outcomes is the small proportion of state funding awarded on the basis of outcome indicators in most states.


This conclusion suggests the value of sharply increasing the proportion of state funding that is driven by performance indicators. This, of course, is precisely what an increasing number of states have been doing in recent years in their wave 2 adoptions. Whether such an increase will indeed improve student outcomes is not yet known: no multivariate studies have yet examined the impacts of programs that devote a large share of state funding for public higher education to performance funding.

Moreover, even with sharply increased funding, performance funding programs must still address the sizable implementation obstacles discussed above and minimize the unintended impacts that result from performance funding. Given this, we offer several recommendations for addressing these issues.

Reducing obstacles to performance funding effectiveness

To reduce the obstacles colleges encounter in responding to performance funding, states should consider improving the capacity of colleges to engage in organizational learning—providing funds and assistance for organizational innovation and addressing institutional resistance to performance funding—and creating better performance indicators and measures (Dougherty and Reddy 2013).

The obstacles colleges have encountered in collecting and analyzing fine-grained data on student progression through college suggest the need for financial and technical support from states to improve institutional research systems. Colleges need information technology systems that are designed to measure student progression, not just repurposed registration systems. They also need enlarged and better-trained institutional research offices that can conduct sophisticated analyses, train faculty and staff to do the same, and evaluate institutional programmatic changes. More generally, colleges need help to develop the structural, cultural, and psychological prerequisites for effective organizational learning: that is, continually monitoring their performance, identifying problems, devising strategies to resolve them, and evaluating how well those strategies work (Dowd and Tong 2007; Jenkins 2011; Kerrigan 2010; Lipshitz, Popper, and Friedman 2002; Witham and Bensimon 2012; see also Argyris and Schön 1996).

To meet institutions' need for resources to mount program innovations that improve student outcomes, states should consider establishing competitive institutional aid programs dedicated to improving student outcomes. Funding should be provided to help colleges implement new academic and student-support policies, programs, and practices intended to improve institutional performance on state metrics.

To reduce institutional resistance to performance funding systems, states need to emulate Tennessee and Ohio in involving institutional personnel—not just senior administrators but also faculty and staff—in the design of performance funding programs. By bringing institutional personnel into the design process, states make it more likely that those personnel will see the final product as manifesting


institutional ideals and professional input and not as a top-down imposition (Moynihan 2008).

Finally, to address criticisms that performance funding indicators are not tailored to college missions, states should consider defining somewhat different indicators for community colleges, broad-access public universities, and research universities (as Tennessee and Ohio do now). For community colleges and broad-access public universities, states should consider analyzing graduation measures only for students who intend to earn a degree (Bailey 2011; Committee on Measures of Student Success 2011). Moreover, to properly capture completions by community college entrants, states should pair indicators of graduation and transfer, given the many community college students who transfer to a four-year college without first getting a community college degree (Committee on Measures of Student Success 2011).

Reducing unintended impacts

Policy-makers must also make sure that any positive effect of performance funding programs on student outcomes does not come with sizable unintended impacts. They particularly need to protect academic standards and reduce the temptation to restrict admissions of less-prepared (and often less-advantaged) students, as these actions strike at the heart of the mission of higher education (Dougherty and Reddy 2013).

To ensure that academic standards are not lowered, states and institutions can use a number of policy tools. They can survey faculty members to identify instances of pressure to weaken academic standards. They can collect statewide data on degree requirements and course grade distributions to determine whether these have eroded since the adoption of performance funding. Assessments of general student learning can further protect against a weakening of curricular requirements and grading standards. However, these student-learning assessments should be developed in cooperation with faculty, whose participation can help to ensure that the assessments are viewed as instructionally valid and institutionally legitimate (Dougherty and Reddy 2013).

Because restricting student admissions is a tempting way to raise performance on state metrics, states should take steps to avoid artificial comparisons between institutions that differ considerably in their student composition and therefore in their ability to produce high retention and graduation rates. States could compare each college to peer institutions. Another option would be to compare a college's current performance to its past performance, as the Student Achievement Initiative in Washington does (Bailey 2011). In addition, states should consider incentivizing colleges to admit less-advantaged students who are less likely to persist in and graduate from college. Ohio and Tennessee give greater weight in their funding formulas to completions by low-income students and others at risk of low attainment (Ohio Board of Regents 2013; Tennessee Higher Education Commission 2011).
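To make the logic of such weighting concrete, consider a stylized allocation formula; this is our illustration, not the actual Ohio or Tennessee model, and all symbols are hypothetical. A state might allocate performance funds to college i as

\[
F_i = B \sum_{o} w_o \left( n_{i,o} + p \, a_{i,o} \right),
\]

where \(n_{i,o}\) is the college's count on outcome \(o\) (e.g., degrees awarded or transfers), \(a_{i,o}\) is the subset of those outcomes achieved by at-risk students, \(w_o\) is the weight assigned to outcome \(o\), \(p\) is the at-risk premium, and \(B\) converts weighted outcomes into dollars. With \(p = 0.5\), for example, each completion by a low-income or academically underprepared student would count 50 percent more than an otherwise identical completion, blunting the incentive to improve metrics by turning such students away.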


Conclusion

Although research on performance funding origins and impacts is accumulating, more research is needed. We need to learn more about the origins and possible discontinuation of the wave 2 performance funding programs. While we have analyses of the origins of wave 2 programs in Indiana, Ohio, and Tennessee, we also need to learn about the social, economic, and political sources of performance funding in other states that have adopted or re-adopted it in the past seven years. Eventually some of those programs may be discontinued. Will they be discontinued for the same reasons as the wave 1 programs studied by Dougherty, Natow, and Vega (2012)?

We also need to know more about the impact of performance funding on institutional and student outcomes. Unlike the wave 1 programs, do the wave 2 programs, particularly those tying a high proportion of state funding to performance indicators (e.g., Ohio and Tennessee), produce improvements in student outcomes? To what degree are wave 2 programs able to overcome the obstacles to implementing performance funding and to minimize its unintended impacts (Dougherty and Reddy 2013; Lahr et al. 2014; Pheatt et al. 2014)? The impact of performance funding programs apparently varies across states (Tandberg, Hillman, and Barakat, forthcoming), but we know little about the sources of that variation. What is the impact of differences in program design, state socioeconomic and political conditions, or the mix of higher education institutions on the intended and unintended impacts of performance funding?

While we wait for more research, much can be done now. We already know a great deal about the origins, impacts, and implementation vicissitudes of performance funding. If policy-makers do decide to pursue performance funding, they should use that knowledge to maximize the intended benefits and minimize the unintended impacts of their programs.

Notes

1. McLendon, Hearn, and Deaton (2006) attribute the negative association between the presence of a consolidated governing board and performance funding adoption to the tendency of such boards, unlike coordinating boards, to be captured by higher education actors and therefore to prefer less intrusive forms of performance accountability (such as performance budgeting and reporting) over performance funding.
2. These are defined as "mechanisms that translate substantive policy goals … into concrete actions" (McDonnell and Elmore 1987, p. 134).
3. The study will eventually include data on nine universities in three states, but data collection and analysis have not been completed as of this writing.
4. Authors' interview TN CC1 #2.
5. Authors' interview TN CC2 #1b.
6. Authors' interview IN CO #3.
7. Authors' interview TN CC1 #4.
8. Authors' interview TN CC2 #6.
9. Authors' interview OH CC2 #1.
10. Authors' interview OH CC3 #1.


References

Anderson, James. 2011. Public policymaking. 7th ed. Boston, MA: Wadsworth.
Argyris, Chris, and Donald A. Schön. 1996. Organizational learning II: Theory, methods, and practice. Reading, MA: Addison-Wesley.
Bailey, Thomas. 2009. Challenge and opportunity: Rethinking the role and function of developmental education in community college. New Directions for Community Colleges 145:11–30.
Bailey, Thomas. 2011. Developing input adjusted metrics for community college performance. Paper presented at the Context for Success Conference, HCM Strategists LLC, Washington, DC, December 9.
Balla, Steven J. 2001. Interstate professional associations and the diffusion of policy innovations. American Politics Research 29 (3): 221–45.
Berry, Frances S., and William D. Berry. 2007. Innovation and diffusion models in policy research. In Theories of the policy process, 2d ed., ed. Paul A. Sabatier, 223–60. Boulder, CO: Westview Press.
Burke, Joseph C., ed. 2002a. Funding public colleges and universities: Popularity, problems, and prospects. Albany, NY: Rockefeller Institute Press.
Burke, Joseph C. 2002b. Performance funding: Easier to start than sustain. In Funding public colleges and universities: Popularity, problems, and prospects, ed. Joseph C. Burke, 219–42. Albany, NY: SUNY Press.
Burke, Joseph C. 2002c. Performance funding: Assessing program stability. In Funding public colleges and universities: Popularity, problems, and prospects, ed. Joseph C. Burke, 243–64. Albany, NY: SUNY Press.
Burke, Joseph C., ed. 2005. Achieving accountability in higher education. San Francisco, CA: Jossey-Bass.
Committee on Measures of Student Success. 2011. A report to Secretary of Education Arne Duncan. Washington, DC: U.S. Department of Education. Available from http://www2.ed.gov/about/bdscomm/list/cmss-committee-report-final.pdf.
Dougherty, Kevin J. 1994. The contradictory college. Albany, NY: SUNY Press.
Dougherty, Kevin J., Sosanya M. Jones, Hana Lahr, Rebecca S. Natow, Lara Pheatt, and Vikash Reddy. 2014. Envisioning performance funding impacts: The espoused theories of action for state higher education performance funding in three states. New York, NY: Community College Research Center, Teachers College, Columbia University. Available from http://ccrc.tc.columbia.edu/PerformanceFunding.html.
Dougherty, Kevin J., and Rebecca S. Natow. Forthcoming. Making Minerva perform: The politics of state performance funding for higher education. Baltimore, MD: Johns Hopkins University Press.
Dougherty, Kevin J., Rebecca S. Natow, Rachel H. Bork, Sosanya M. Jones, and Blanca E. Vega. 2013. Accounting for higher education accountability. Teachers College Record 115 (1): 1–50.
Dougherty, Kevin J., Rebecca S. Natow, Sosanya M. Jones, Hana Lahr, Lara Pheatt, and Vikash Reddy. 2014. The political origins of "performance funding 2.0" in Indiana, Ohio, and Tennessee: Theoretical perspectives and comparisons to performance funding 1.0. New York, NY: Community College Research Center, Teachers College, Columbia University.
Dougherty, Kevin J., Rebecca S. Natow, and Blanca E. Vega. 2012. Popular but unstable: Explaining why state performance funding systems in the United States often do not persist. Teachers College Record 114 (3): 1–41.
Dougherty, Kevin J., and Vikash Reddy. 2013. Performance funding for higher education: What are the mechanisms? What are the impacts? ASHE Higher Education Report. San Francisco, CA: Jossey-Bass.
Dowd, Alicia C., and Vincent P. Tong. 2007. Accountability, assessment, and the scholarship of "best practice." In Higher education: Handbook of theory and research, vol. 22, ed. John C. Smart, 57–120. Dordrecht, the Netherlands: Springer.
Fingerhut, Eric. 2012. Ohio's new performance-based funding system. In Tying funding to community college outcomes: Models, tools, and recommendations for states, eds. David Alstadt, Eric Fingerhut, and Richard Kazis, 5–16. Boston, MA, and Washington, DC: Jobs for the Future. Available from http://www.jff.org/sites/default/files/TyingFunding2CommColleges-042312.pdf.
Friedel, Janice N., Zoe M. Thornton, Mark D'Amico, and Stephen G. Katsinas. 2013. Performance-based funding: The national landscape. Tuscaloosa, AL: University of Alabama, Education Policy Center. Available from http://www.uaedpolicy.ua.edu/uploads/2/1/3/2/21326282/pbf_9-17_web.pdf.
Fryar, Alisa H. 2011. The disparate impacts of accountability—Searching for causal mechanisms. Paper presented at the Public Management Research Conference, Syracuse, NY, June 2–4.
Gorbunov, Alexander. 2013. Performance funding in higher education: Determinants of policy shifts. PhD diss., Vanderbilt University.
Hillman, Nicholas W., David A. Tandberg, and Jacob P. K. Gross. Forthcoming. Performance funding in higher education: Do financial incentives impact college completions? Journal of Higher Education.
Huang, Ying. 2010. Performance funding and retention rates. Paper presented at the annual meeting of the Association for the Study of Higher Education, Indianapolis, IN, November 17–20.
Jenkins, Davis. 2011. Redesigning community colleges for completion: Lessons from research on high-performance organizations. CCRC Working Paper No. 24, Assessment of Evidence Series. New York, NY: Community College Research Center, Teachers College, Columbia University.
Kerrigan, Monica R. 2010. Data-driven decision making in community colleges: New technical requirements for institutional organizations. EdD diss., Teachers College, Columbia University.
Kingdon, John W. 1995. Agendas, alternatives, and public policies. 2d ed. New York, NY: Longman.
Lahr, Hana, Lara Pheatt, Kevin J. Dougherty, Sosanya M. Jones, Rebecca S. Natow, and Vikash Reddy. 2014. Unintended impacts of performance funding on community colleges in three states. New York, NY: Community College Research Center, Teachers College, Columbia University.
Larocca, Roger, and Douglas Carr. 2012. Higher education performance funding: Identifying impacts of formula characteristics on graduation and retention rates. Paper presented at the Western Social Science Association Annual Conference, Houston, TX, April 11–14.
Lingenfelter, Paul. 2008. The financing of public colleges and universities in the United States. In Handbook of research in education finance and policy, eds. Helen F. Ladd and Edward B. Fiske, 651–70. New York, NY: Routledge.
Lipshitz, Raanan, Micha Popper, and Victor J. Friedman. 2002. A multifacet model of organizational learning. Journal of Applied Behavioral Science 38 (1): 78–98.
Lumina Foundation. 2011. College productivity: Four steps to finishing first. Indianapolis, IN: Lumina Foundation. Available from http://www.luminafoundation.org/publications/Four_Steps_to_Finishing_First_in_Higher_Education.pdf.
McDonnell, Lorraine M., and Richard F. Elmore. 1987. Getting the job done: Alternative policy instruments. Educational Evaluation and Policy Analysis 9 (2): 133–52.
McLendon, Michael K., James C. Hearn, and Russ Deaton. 2006. Called to account: Analyzing the origins and spread of state performance-accountability policies for higher education. Educational Evaluation and Policy Analysis 28 (1): 1–24.
Merton, Robert K. 1968. Social theory and social structure. Revised and enlarged edition. New York, NY: Free Press.
Mica, Adriana, Arkadiusz Peisert, and Jan Winczorek, eds. 2012. Sociology and the unintended. New York, NY: Peter Lang.
Mintrom, Michael, and Phillipa Norman. 2009. Policy entrepreneurship and policy change. Policy Studies Journal 37 (4): 649–67.
Moden, Gary O., and A. Michael Williford. 2002. Ohio's challenge: A clash of performance funding and base budgeting. In Funding public colleges and universities for performance, ed. Joseph C. Burke, 169–94. Albany, NY: SUNY Press.
Moynihan, Donald P. 2008. The dynamics of performance management: Constructing information and reform. Washington, DC: Georgetown University Press.
National Conference of State Legislatures. 2014. Performance funding for higher education. Denver, CO: National Conference of State Legislatures. Available from http://www.ncsl.org.
Natow, Rebecca S., Sosanya M. Jones, Kevin J. Dougherty, Hana Lahr, Lara Pheatt, and Vikash Reddy. 2014. Changes in institutional policies, programs, and practices in response to performance funding in three states. New York, NY: Community College Research Center, Teachers College, Columbia University.
Ohio Board of Regents. 2013. State share of instruction handbook: Providing the methodology for allocating state share of instruction funds for fiscal year 2014 for use by: University regional and main campuses. Columbus, OH: Ohio Board of Regents. Available from https://www.ohiohighered.org.
Parmley, Kelli, Allison Bell, Hans L'Orange, and Paul Lingenfelter. 2009. State budgeting for higher education in the United States as reported for fiscal year 2007. Boulder, CO: State Higher Education Executive Officers. Available from http://www.sheeo.org/sites/default/files/publications/Budgeting_For_Higher_Ed.pdf.
Pheatt, Lara, Hana Lahr, Kevin J. Dougherty, Sosanya M. Jones, Rebecca S. Natow, and Vikash Reddy. 2014. Obstacles to the effective implementation of performance funding in community colleges in three states. New York, NY: Community College Research Center, Teachers College, Columbia University.
Postsecondary Analytics. 2013. What's working? Outcomes-based funding in Tennessee. Tallahassee, FL: Postsecondary Analytics.
Reddy, Vikash, Hana Lahr, Kevin J. Dougherty, Sosanya M. Jones, Rebecca S. Natow, and Lara Pheatt. 2014. Policy instruments in service of performance funding: A study of community colleges in three states. New York, NY: Community College Research Center, Teachers College, Columbia University.
Rutherford, Amanda, and Thomas Rabovsky. 2014. Evaluating impacts of performance funding policies on student outcomes in higher education. The ANNALS of the American Academy of Political and Social Science (this volume).
Sabatier, Paul A., and Christopher M. Weible. 2007. The advocacy coalition framework: Innovations and clarifications. In Theories of the policy process, 2d ed., ed. Paul A. Sabatier, 189–222. Boulder, CO: Westview Press.
Sanford, Thomas, and James M. Hunter. 2011. Impact of performance funding on retention and graduation rates. Education Policy Analysis Archives 19 (33): 1–27.
Shin, Jung Cheol. 2010. Impacts of performance-based accountability on institutional performance in the U.S. Higher Education 60 (1): 47–68.
Shin, Jung Cheol, and Sande Milton. 2004. The effects of performance budgeting and funding programs on graduation rate in public four-year colleges and universities. Education Policy Analysis Archives 12 (22): 1–26.
SPEC Associates. 2012a. National evaluation of Lumina Foundation's productivity work: Interim report for Indiana. Detroit, MI: SPEC Associates.
SPEC Associates. 2012b. National evaluation of Lumina Foundation's productivity work: Interim report for Tennessee. Detroit, MI: SPEC Associates.
Tandberg, David A., and Nicholas W. Hillman. 2014. State higher education performance funding: Data, outcomes, and policy implications. Journal of Education Finance 39 (3): 222–43.
Tandberg, David A., Nicholas W. Hillman, and Mohamed Barakat. Forthcoming. State higher education performance funding for community colleges: Diverse effects and policy implications. Teachers College Record.
Tennessee Higher Education Commission. 2008. Making opportunity affordable – Tennessee. Planning grant proposal. Nashville, TN: Tennessee Higher Education Commission.
Tennessee Higher Education Commission. 2011. Outcomes formula technical details. Nashville, TN: Tennessee Higher Education Commission.
Tennessee Higher Education Commission. 2012. 2012–13 outcomes formula model. Nashville, TN: Tennessee Higher Education Commission. Available from http://www.tn.gov.
Witham, Keith A., and Estela M. Bensimon. 2012. Creating a culture of inquiry around equity and student success. In Creating campus cultures, eds. Samuel D. Museus and Uma M. Jayakumar, 146–67. New York, NY: Routledge.