Performance Accountability, Evidence, and Improvement: Bi-Partisan Reflections and Recommendations to the Next Administration

By Shelley H. Metzenbaum and Robert J. Shea, along with fellow NAPA Transition 16 members

In the last few decades, we have learned a lot about what works and what does not in the quest to improve government performance. We have learned not only from the experience of the U.S. federal government, but also from that of state and local governments, the private sector, and foreign governments. Based on those lessons and our experience as two former Associate Directors at the U.S. Office of Management and Budget (OMB) responsible for federal performance measurement and management policy, one during the Obama Administration and one during the George W. Bush Administration, we offer here a roadmap for the next administration.[1] Rather than starting anew, we suggest building on the solid foundation that exists. At the same time, we urge avoidance of past missteps.

 

Overview

Government can and should benefit people’s lives. About this, we hope there is little debate.

The question is: does it? Does government advance the beneficial impacts it pursues and does it do so with minimal unwanted side effects? Beyond that, does it do so in ways that are not only effective but also efficient, fair, understandable, reasonably predictable, courteous, honest, and trusted? Moreover, does it apply the lessons of experience to find ways to improve?

Every government organization should strive to be effective and to improve, continually, on multiple dimensions. Toward that end, government should employ a common set of practices that, when used wisely, have worked remarkably well:

·      Setting outcomes-focused goals;

·      Collecting and analyzing performance information, both quantitative and qualitative;

·      Using data-rich reviews to identify what is working well and what needs attention, and to decide on strategies, actions, and knowledge gaps to fill;

·      Complementing routinely collected data with independent, rigorous evaluations and other studies; and

·      Using effective communication strategies for a wide variety of purposes aimed at a wide variety of stakeholders.

Common sense, backed by a robust body of evidence, calls for widespread government adoption of these performance improvement and evidence-based management practices. Failure to use these five practices leads to aimless operations. It leaves government and its partners carrying out activities they hope will work without knowing whether they, in fact, do. Moreover, without these practices government lacks the means to inform and encourage continual improvement once effective practices are identified.

Consider the alternative: government unclear about what it wants to accomplish; lacking objective means to gauge progress; failing to look for increasingly effective practices and emerging problems; introducing new programs, practices, and technologies without assessing whether they work better than past ones; and failing to communicate government priorities, strategies, progress, problems, and trade-offs in easy-to-find, easy-to-understand ways.

Experience and research make clear that unless government pairs these five practices with effective accountability and motivational mechanisms, their use can easily lead to a culture of compliance, fear, or, even worse, falsification. Government leaders (in both the executive and legislative branches) often call for linking measures to rewards or penalties (monetary or otherwise), despite experience suggesting that great caution be exercised before embracing explicit (and sometimes implicit) pay-for-performance regimes, whether for individuals or for organizations. Too often, ill-structured incentive systems backfire, triggering dysfunctional responses such as measurement manipulation; adoption of timid targets that impede discovery and undermine trust; fear of testing and assessing new practices lest they fail; implosion of measurement systems; and a compliance culture in which the “scaffolding” of the performance and evidence-informed management framework impedes rather than encourages innovation and adaptation.[2] Government can avoid many of these problems, experience shows, when it embraces a sixth practice along with the first five:

·      Adopting carefully structured, evidence-based motivational mechanisms that encourage a culture of learning, experimentation, and improvement.

This memo briefly reviews recent U.S. federal government experience using these practices and offers recommendations for the next Administration. Our bottom line recommendation is:

Aggressively accelerate wide adoption of a performance and evidence-informed management agenda across and at every level of government.

More specifically, we recommend:

·      Pushing more aggressively for adoption of the current outcomes-focused performance improvement framework across government;

·      Expanding and enhancing the collection, analysis, visualization, and dissemination of performance information to make it more useful to more people;

·      Strengthening capacity and understanding;

·      Developing, testing, and adopting effective accountability mechanisms; and

·      Keeping it simple to support use, communication, and improvement of performance.

 

Experience and Lessons Learned

Goal-setting and measurement are hardly controversial. Many parts of government do both remarkably well, especially when Congress authorizes, requires, and funds measurement and analysis in policy-specific laws. Too many parts of government, however, do not.

To spur greater adoption of effective performance management practices, the federal government in 1993 adopted the Government Performance and Results Act (GPRA), requiring federal agencies to set goals, measure and report progress, and conduct and use evaluations. Agencies were required to publish strategic plans, annual performance plans, and annual performance reports. Strategic plans were expected to include information about strategies being used, resources needed to accomplish the strategies, key factors external to an agency that could significantly affect achievement of the goals, evaluations used to set goals and objectives, and a schedule of future evaluations.

As the Clinton Administration entered its second term, most federal agencies had begun producing five-year strategic plans, annual performance plans, and annual performance reports. Few, however, used goals to communicate priorities, coordinate across organizations, or tap the inspirational value of a specific, challenging goal. While most agencies included measures in their annual performance reports, few analyzed the data to find ways to improve. Nor did many use evaluation findings to set or revise goals as the law required, commission other studies to inform priority-setting and treatment design, or lay out a schedule for future evaluations and other studies.

The Bush Administration attempted to drive greater use of performance information – goals, measurement, and evaluations – in decision-making. In addition to focusing agency leadership on a regular review of a limited set of management objectives and trying to integrate performance with personnel management, the Bush management agenda incorporated a tool to produce program performance information with the intent that it be used in budget decision-making.[3] Using this tool, called the Program Assessment Rating Tool (PART), agency officials and OMB budget examiners assessed whether government programs were working, examining the quality of program design and changes in outcomes. The results – the Bush Administration scorecard tracking adoption of mandated management practices as well as the PART ratings and the evidence on which they were based – were made available on the first government-wide website facilitating access to federal agency performance information, ExpectMore.gov. Site visitors could sort PART reviews by agency and program type, such as regulatory, credit, research and development, and grant programs, enabling programs of similar types to benchmark and learn from each other. The Bush Administration also issued an Executive Order requiring every agency to name a senior executive as its Performance Improvement Officer (PIO). PIOs were charged with coordinating the agency’s performance management activities and served on the newly created Performance Improvement Council (PIC).[4]

PART asked many of the right questions, but disagreements invariably arose from the reviews. Programs were sometimes scored poorly for problems beyond an agency’s control, while no mechanism existed to motivate high-scoring programs to continue to improve. Sometimes, emphasis was improperly placed on individual programs when program objectives required cross-program attention. And a five-year review cycle for all but low-rated programs did not exactly motivate action. Perhaps the biggest problem was that agencies paid more attention to getting a good PART score or meeting a higher percentage of their targets than to making meaningful performance improvements.[5]

In short, while progress was made, a strong compliance culture persisted. Agencies’ attention was directed to whether their programs were rated as successful or unsuccessful and to getting to green on the management scorecards, while a proliferation of goals and measures in many agencies often rendered them meaningless. Exacerbating the problem, PIOs assumed most of the responsibility for satisfying the letter of the law, while program managers too often failed to engage, viewing measurement and evaluation as irritating burdens rather than helpful tools.

The Obama Administration sought to address shortcomings in the Clinton and Bush initiatives, increasing attention to using performance information to find ways to improve on multiple dimensions, including effectiveness and efficiency.[6] It also sought to communicate goals, trends, strategies, and planned actions to the public and other parts of government in ways that made them easier to find and understand, that supported collaboration and learning across organizational boundaries, and that motivated continual improvement.

Congress codified many of the best elements of Bush and Obama performance management practices in the GPRA Modernization Act of 2010 (Modernization Act). In addition to codifying the position of PIO and the role of the PIC, the Modernization Act required several new practices. Agencies were required to set a small number of ambitious priority goals they would try to accomplish within two years. These goals do not replace the fuller set of departments’ and agencies’ longer-term strategic goals and annual performance objectives; rather, they complement them and underscore the need for priority setting and immediate and continuing action. In addition, the law instructs the OMB Director, coordinating across government, to set a small number of cross-agency priority goals, some mission-focused and some for significant management issues.

Simultaneously, OMB directed agencies to increase the volume of high-quality evaluations, both retrospective and prospective, to ferret out whether measured changes in outcomes would likely have been different in the absence of government action and whether future adjustments to program design would likely accomplish more with the same or a lower budget.

In addition to changing some practices, the law introduced slight shifts in timing to bring the goals, measurements, and evaluations to life. The timing for setting strategic goals and objectives was aligned to Presidential elections, giving new administrations a chance to set new priorities.

Another significant change is the designation of deputy secretaries or their equivalent as chief operating officers (COOs), charged with running progress reviews on agency priority goals at least every quarter. These reviews are intended to stimulate analysis and discussion of performance information and other evidence to gauge progress, inform priorities and action design, and encourage discovery of increasingly effective, cost-effective actions. A number of deputy secretaries expanded the scope of these quarterly reviews beyond Cabinet-level priority goals to discuss and brainstorm progress on component and cross-component goals, as well. Performance Improvement Officers are given expanded roles and responsibilities, including supporting the COO in preparation for and follow up on the data-rich quarterly progress and annual strategic reviews.

Goal leaders accountable for managing progress on each priority goal, including cross-agency goals, are publicly identified on a new central performance reporting website, Performance.gov. Goal leaders are required to report progress on their priority goals every quarter on the site and explain to the public not only how well they are doing but also what adjustments are being made to previously announced planned actions, whether because of problems or higher-than-expected rates of progress.

The Modernization Act and the Obama Administration in its implementation of the Act also increased emphasis on building the capacity of and using the Performance Improvement Council, PIC sub-groups, and evaluation offices to function as continuous learning and improvement networks. OMB and the PIC designed and provided training on evolving practices, reaching across government to help agencies, for example, with effective goal-setting, strategic reviews, and evaluation methods. A behavioral insights office was established to help interested parts of government design, test, assess and adjust iterative, measured trials to find increasingly effective, cost-effective government practices.[7] To build capacity and more fully engage people in program offices and other parts of agencies, the Obama Administration created two additional learning-and-improvement networks during its second term: the Leaders Delivery Network and the White House Leadership Development Fellows.[8] In FY2016, Congress authorized appropriated funds to be reallocated from across government to support work on cross-agency priority goals.[9]

The Obama Administration adjusted accountability expectations to recognize that, by definition, stretch targets that stimulate innovation cannot all be met and that the innovation process – testing, assessing, and adjusting to discover better practices – necessarily involves failed trials. The Administration therefore encouraged the application of accountability expectations attributed to William Bratton, the New York City Police Commissioner who established CompStat, a regimen of frequent, data-rich meetings to find better ways to reduce crime in New York City. “No one got in trouble if the crime rate went up,” Bratton’s right-hand man, Jack Maple, explained. “Trouble arose only if the commanders didn’t know why the numbers were up or didn’t have a plan to address the problems.”[10]

To reinforce the notion that accountability was not about meeting targets but about making progress at a good pace based on available knowledge, and to inform goal setting, strategy selection, agency action, and budget decisions, agencies were required to conduct annual strategic reviews of progress related to every agency strategic objective. These reviews, and subsequent OMB review, identify for the public and Congress which objectives show noteworthy progress, which face significant challenges, and what the agency plans to do in response.[11] Agencies report their strategies, progress, problems, and adjustment plans for every agency strategic objective annually on Performance.gov.

So, how well are these changes working? According to a survey conducted by the U.S. Government Accountability Office in late 2012 and early 2013, as well as other evaluations, great progress has been made using agency and cross-agency priority goals (see the illustrative examples below), especially in agencies that embraced established principles of well-run, data-rich reviews.[12]

 

Examples: Performance Trends for Selected Priority Goals and Strategic Objectives

 

Improving patient safety. The U.S. Department of Health and Human Services (HHS) chose improving patient safety as a priority goal to address a costly problem: more than 1 million healthcare-associated infections (HAIs) were occurring every year across the U.S., affecting one in 25 hospitalized patients, costing tens of thousands of lives, and adding large costs to the healthcare system. HHS identified catheter-associated urinary tract infections (CAUTI) as among the most common and preventable HAIs. With leadership by the Agency for Healthcare Research and Quality, HHS launched the Partnership for Patients to recruit 1,000 hospitals and 1,600 hospital units across the United States willing to test a comprehensive unit-based safety program. Preliminary results are good. Fourteen months after more than 700 participating organizations initiated recommended safety practices, CAUTI rates fell 13.5%, with a 23.4% relative reduction in non-ICUs and a 5.9% reduction in ICUs.[13]

Reducing patent processing times. Patents advance economic prosperity, so processing them in a timely, high-quality manner directly affects the nation’s economic health. To reduce the patent backlog, the U.S. Department of Commerce made patent timeliness and quality a priority goal. The Patent and Trademark Office has reduced the patent application backlog from a high of over 764,000 in January 2009 to just over 558,000 in October 2015, a 27 percent reduction. Time for first action on an application (first-action pendency) decreased by 8.8 months, and total pendency, the time from filing until an application is either issued as a patent or abandoned, fell by eight months between the end of fiscal year 2009 and the end of October 2015. Also, from an all-time high near 112,000 in February 2013, the backlog of Requests for Continued Examination has dropped 68.2 percent. These improvements occurred despite unexpected growth in the number of filings, projected to be about 1 percent but actually exceeding 5 percent.[14]

Improving the accuracy and timeliness of Veterans’ disability benefit claims. The Department of Veterans Affairs (VA) aims to provide disability benefits to eligible Veterans in a timely, accurate, and compassionate manner. From a peak of over 610,000 backlogged claims in March 2013, the claims backlog (defined as claims pending over 125 days) declined to under 72,000, an 88.3 percent drop. Total claims inventory dropped 58.9 percent from the peak of 883,930 in July 2012 to 363,034 on September 30, 2015, with claim-based accuracy at 89.0 percent and issue-based accuracy at 96.0 percent as of September 2015.[15]

Reducing greenhouse gas emissions by increasing federal agency energy efficiency and renewable energy consumption. To reduce federal greenhouse gas (GHG) emissions by at least 40 percent from a 2008 baseline, a cross-agency priority goal was set to increase federal government consumption of electricity from renewable sources to 30% by 2025 and to improve energy efficiency at federal facilities, including $4 billion in energy performance contracts awarded by the end of 2016. By the end of FY 2015, direct GHG emissions had declined 17.6% and estimated indirect GHG emissions had decreased 17.5% from the FY2008 baseline; renewable electricity had reached 8.3% of total electricity use; and, as of June 2016, agencies had awarded performance contracts valued at $3.171 billion, with agency-identified projects (awarded + pipeline) totaling $6.39 billion.[16]

Progress was made on many other goals, as well. Agricultural exports climbed, as did federal technology transfer; over 100,000 miles of broadband infrastructure were installed in previously underserved areas, serving more than 700,000 new households and businesses; 4.4 million borrowers/subscribers in rural areas received new or improved electric service; and 2.2 million more rural residents gained access to clean drinking water and better wastewater disposal, some for the first time. Homelessness, too, is down from 2010: 36 percent for Veterans, 22 percent for individuals, and 19 percent for families.[17] As expected, progress on a few priority goals has been more challenging, as, for example, on the goal to reduce foodborne Salmonella illnesses.[18]

At the same time, a compliance attitude and mindless measurement clearly persist in many places. GAO’s 2013 survey found that federal managers not working on priority goals did not report an increase in their use of performance information to make decisions. This inattention may be due, in part, to the absence of constructive external drivers similar to the pressure of competition that keeps most private companies continually looking for better ways to do business. Few in Congress or the media, for example, pay attention to agency goals, strategies, and progress. Whether the annual strategic reviews that began in 2015 increase the span and depth of agency interest in and use of performance information beyond priority goals has not yet been assessed.

So, based on what we’ve learned in the last several decades, where do we go from here?

 

Where to From Here?

Setting goals, measuring progress, and using data and evidence to figure out how to do better! It sounds like motherhood and apple pie. These practices, in truth, are easier said than done. Hard decisions about what the goals should be – informed by data, values, politics, and competing objectives – need to be made. Skill must then be exercised to frame goals in ways that are resonant, relevant, motivating, and actionable. Developing meaningful, practical, and affordable measurements that not only capture progress on objectives but also warn of unwanted side effects can be hard, too, as can objective evaluation. When goals are poorly framed or otherwise mis-specified, when measurements do not make sense, or when evaluations are poorly designed or naively applied, enormous frustration arises. Yet without good performance information, government runs a high risk of acting without knowing what its actions accomplish or having the means to learn, objectively, how to do better.

Our bottom line recommendation therefore is: aggressively accelerate wide adoption of an outcomes-emphasizing, data-informed, evidence-based management agenda across and at every level of government. The worst thing the next Administration could do is start from scratch. To make even more progress and address known gaps, we offer the following five recommendations:

1.     Push more aggressively for adoption of the current outcomes-focused performance improvement framework across government.

 

·      Continue and expand uptake of the six practices listed above (outcomes-focused, priority-based goal-setting; routine measurement and analysis; occasional evaluations and other studies; data-rich reviews; well-designed communication; and well-structured incentives) across every aspect of government and with stakeholders. Expand Cabinet-level quarterly reviews on priority goals to include discussion of progress on component and cross-component goals, and require major components across the federal government – agencies, bureaus, large field offices – to begin using these six practices.

·      Better integrate efforts across program managers, performance improvement offices, program evaluators, strategic planners, futures forecasters, budget shops, grant and contract managers, data scientists, and IT offices to set goals, measure relevant indicators, and find ways to improve.

·      Continue agency and OMB annual strategic reviews to accelerate progress on all strategic objectives.

·      Increase use of rigorous, independent, and relevant evaluations and other studies to improve the effectiveness and cost-effectiveness of government programs and practices. Encourage more rapid testing, assessing, and adjusting using sufficiently rigorous evaluation methods to allow practice to evolve as experience is gained and to adapt to different circumstances.

·      Build a continuous learning and improvement culture in federal grant programs, with the federal government working with state and local governments, non-profit organizations, and other partners and stakeholders to discover and adopt increasingly effective, cost-effective, and fair practices, supported by ready access to easily understood data, multi-stakeholder collaborations, and well-structured incentives.

·      Establish a performance management knowledge exchange network that enables the federal government, state and local governments, non-profit organizations, and other partners and stakeholders to adopt the most effective outcomes-focused performance and evidence-based management practices to address shared problems and pursue opportunities.

 

2.     Expand and enhance the collection, analysis, visualization, and dissemination of performance information to make it more useful to more people.

·      Improve the accessibility, transparency, and usefulness of Performance.gov as a learning, benchmarking, coordination, motivational, improvement, and accountability tool. Post data in structured formats and make it easier to find relevant data and evaluations, as well as promising practices worth testing in other locations that, if successful, warrant promoting for broader adoption.

·      Make it easier to discern performance trends with “spark lines” and other visualization tools, especially in the context of social indicators (currently posted in the Analytical Perspectives of the President’s Budget) and agency and cross-agency goals; an illustrative sketch of a simple spark line appears after this list. Create links to relevant data sets, evaluations, and other studies. Create and share “learning agendas” for agencies and operating units indicating plans for future evaluations, studies, and data improvement.

·      Enable sorting across goals by program type (e.g., credit, competitive grants, benefit processing, regulatory), geographic area, and demographic characteristics to facilitate cross-agency learning and collaboration, tapping evolving digital technologies.

·      Test, assess, and adjust to find better ways to communicate results and strengthen accountability, inform decision-making, stimulate discovery, and encourage innovation. Test the use of online crowdsourcing via Performance.gov and other platforms to get constructive feedback on goals, measures, evidence, and strategies. Test the use of Performance.gov and complementary online platforms to identify and support collaboration with others working to advance the same goals and to learn from others’ experience. Test ways to present and share, in a timely manner, the information that aids decisions by individuals and delivery partners.

·      Strengthen the credibility of federal performance information and the ability to learn from experience by showing trend lines for longer periods on Performance.gov, and by re-posting and linking to archived information, including earlier rounds of priority goals and information from ExpectMore.gov (with PART scores for 1000 programs).

·      Tap mobile and other technologies that make it less costly and more feasible to collect, analyze, disseminate, and visualize information to make data and other information (e.g., photographs) more useful to more people across the policy-making and delivery chain.
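To make the spark-line suggestion above concrete, the short sketch below shows one way a compact, unlabeled trend line could be generated for a quarterly indicator and embedded beside a goal’s summary on Performance.gov. It is a minimal illustration in Python using matplotlib, and the quarterly figures are hypothetical placeholders rather than actual agency data.

```python
# Minimal spark-line sketch for a quarterly performance indicator.
# The numbers below are hypothetical placeholders, not actual agency data.
import matplotlib.pyplot as plt

backlog = [610, 545, 480, 420, 350, 260, 180, 72]  # e.g., thousands of pending claims, oldest to newest quarter

fig, ax = plt.subplots(figsize=(3, 0.6))  # small footprint so the line can sit beside a goal's summary text
ax.plot(range(len(backlog)), backlog, linewidth=1, color="black")
ax.plot(len(backlog) - 1, backlog[-1], marker="o", markersize=3, color="red")  # flag the most recent value
ax.axis("off")  # spark lines omit axes; the surrounding goal narrative supplies the context
fig.savefig("goal_trend_sparkline.png", dpi=150, bbox_inches="tight")
```

The design choice is to keep each trend small, unlabeled, and adjacent to the measure it summarizes, so readers scanning many goals can quickly see the direction and pace of change.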

 

3.     Strengthen capacity and understanding.

·      Appoint agency deputies/Chief Operating Officers and other political appointees with a strong capacity and commitment to use data and other evidence to improve performance.

·      Give Performance Improvement Officers adequate resources to support Deputies/Chief Operating Officers and increase resources to enable the PIC to provide more support to agencies; where Performance Improvement Officers have other duties, ensure there is a strong Deputy Performance Improvement Officer and team devoted to analyzing data and other evidence and structuring reviews and other conversations that drive continual improvement.

·      Strengthen capacity to conduct studies that inform priority setting and program design, including futures analysis, scenario testing, role-playing, epidemiology-like incident analyses, simulations, and surveys. Build appreciation that strategic planning is not just about continuing current approaches, but also looking for, and analyzing, alternatives. Integrate and strengthen government capacity to tap the vast array of analytic tools (e.g., outlier and anomaly identification, pattern and relationship detection, quality control) that can be used to inform priorities, design of agency practices, and identification of causal factors government may be able to influence.

·      Ensure every department has a robust evaluation and data analytics capacity that works with agency leadership and program offices to implement a strategic, rigorous retrospective and prospective evaluation program. Ensure that evaluation and analytics teams work with the PIO team to conduct successful quarterly performance and annual strategic reviews and to conduct ad hoc “deep dives” to find root causes of performance shortfalls or choose among competing problems and opportunities.

·      Regularly get feedback from those on agency front lines and those working for delivery partners, and develop and test ideas with them to improve outcomes, cost-effectiveness, fairness, and understanding.

·      Especially in light of the transparency requirements under the DATA Act, work with IT, contract, and grant offices to structure data systems and reporting requirements that will enable analytics more useful to a wider variety of people, including the central office, the field, delivery partners, and researchers. Incorporate user-centered design principles into agency programs, practices, and information management.

·      Build or support continuous learning networks across the delivery chain that share and analyze data to find and apply lessons learned and that collaborate on iterative testing and assessment to find better practices.

·      Expand knowledge of proven performance and evidence-based management practices by offering agency officials and others in the delivery chain relevant courses and other learning materials.

 

4.     Develop, test, and adopt effective accountability mechanisms.

·      Embrace and promote the Bratton accountability principle, while making sure to measure and manage not only primary objectives but also unwanted side effects. Communicate the expectation that failed trials and missed stretch targets are expected, not a problem, provided the trials are well-designed, targets ambitious, and progress quickly assessed and adjusted as needed.[19] Strengthen and publicize guidance language that conveys this message across government, preferably with a similar message coming from Congress.

·      Continually test, assess, adjust, and adopt increasingly effective motivational mechanisms such as peer benchmarking, transparency, constructive feedback, contests, challenges, and well-structured incentives that encourage continuous improvement. In addition, identify ineffective practices. Broadly communicate and encourage uptake of evidence about effective and ineffective motivational mechanisms.

·      Appoint leaders to the Office of Management and Budget committed to driving the development, adoption, and implementation of cross-agency priority goals; and identify a lead person in each of the White House policy councils and the White House Chief of Staff’s office to work on agency and cross-agency priority goals. Test designating each OMB Resource Management Office Program Associate Director as a goal leader responsible for managing progress on a mission-focused cross-agency priority goal.

·      Collaborate with Congress (i.e., authorizers, appropriators, and overseers) more closely at every stage of the performance management process to facilitate more debate about the performance of programs and successful adoption of the performance management framework. Urge agencies to collaborate with their Congressional authorizing and appropriation committees and incorporate their feedback on current agency goals and objectives, strategies, why they were chosen, progress to date, and challenges.

·      Create a culture that encourages employees to raise and focus on problems and pain points experienced by people interacting with the federal government without fear of punishment.

 

5.     Keep it simple to support use, communication, and improvement of performance.

·      Implement these ideas with easily understood tools, not as a framework checklist. In this spirit, at the end of this memo, we offer a set of suggested questions to share with all new appointees and career officials, urging them to use these questions as they approach their work to accelerate adoption of the six practices and, ultimately, improve government’s performance.

 

Conclusion

In the last several decades, we’ve learned a lot about what works and what doesn’t in the quest to improve government performance. Not only do we have the experience of the federal government, but also that of state and local governments, the private sector, and foreign governments. The insights here offer a roadmap a new administration can use to build on the lessons of the past rather than start anew. If our new leaders, both appointees and career officials, take our advice, they will accelerate adoption of outcomes-improving, data-informed, evidence-based management practices across every level of government and in multiple dimensions. Results on the ground should improve, too.

**********************************************************************************************************************************************************************************************************************

Eight Questions to Drive Performance Improvement

1. What problem are we trying to solve?

      Why? How important is this problem or opportunity compared to others we need to pursue?

2. What strategies and tactics should we use and why?

      What have we or others done in the past and how well did it work? What are the relevant past performance, past evaluations, peer performance benchmarks?

      What are the key drivers/causal factors we can influence?

      What cultural constraints do we need to consider?

3. How will we know if we are making progress and making it fast enough?

      What are we measuring regularly and is it meaningful, measurable, and moveable?

      Are we using that information and how is it helping us make better decisions?

      Are there other measures we should be collecting and any we could drop?

      Who is analyzing the data, who gets the analysis, and what are we learning?

      Is it complete and accurate enough to be reliable?

      Can we identify the strongest performers and the weakest ones so we can learn from the former and help the latter?

4. What other information do we have that should inform our priorities and program design and what should we start to gather?

      What additional data or studies are needed?

      What does our data and evaluation plan look like and does it need updating?

      What does it cost to implement our programs and achieve our goals? If we don’t know, how can we better estimate the cost?

      Are there new approaches we can test to try to reduce costs without compromising impact?

5. Do we have the right people in the discussions about the data and other evidence to find ways to improve?

6. How are we helping the field and our delivery partners use data and evaluations to find ways to improve?

7. What training is needed and for whom? Where should our priorities be?

8. How do we motivate people to want to look for and find ways to improve, and hold them accountable for doing that, rather than leaving them fearful or merely compliant with planning, evaluation, and reporting requirements?



**********************************************************************************************************************************************************************************************************************



[1] This memo is informed by conversations with the bi-partisan “Transition 16” group of the National Academy of Public Administration, but the observations and recommendations are those of the co-authors. We would also like to thank Seth Harris, Ted McMann, Sharon Kershbaum, Kate Josephs, Jeff Porter, Matt Faulkner, John Kamensky, Harry Hatry, Joe Wholey, Hal Steinberg, Josh Gotbaum, Ned Holland, Steve Redburn, and OMB officials for their suggestions.

[2] Pink, Daniel (2011) Drive. New York: Riverhead. This book provides a good overview for the non-researcher on research findings about motivation and incentives affecting individuals. See also the chapter on Building Block 5 in Metzenbaum, Shelley H. (2006) “Performance Accountability Expectations: The Five Building Blocks and Six Essential Practices,” Washington, DC: IBM Center for the Business of Government. (http://www.businessofgovernment.org/sites/default/files/Performance%20Accountability.pdf) Historically, those feeling threatened by measurements or pressured by ill-designed incentives have organized to dismantle the measurement system. See Gormley, William T. and David L. Weimer (1999) Organizational Report Cards. Cambridge, MA: Harvard University Press. Arguably, what Gormley and Weimer document occurred again with the No Child Left Behind law. Incentive problems arise both in private sector companies (consider Wells Fargo and Enron) and in the public sector, as with the doctor scheduling problems for veterans that came to light in 2014.

[3] The annual President’s Budgets, including the Analytical Perspectives, provide a good overview of the Bush and Obama Administration’s evolution in performance and evidence-based management practices for each budget year starting in FY2003. These can be found at: http://www.gpo.gov/fdsys/browse/collectionGPO.action?collectionCode=BUDGET. The Bush Administration introduced its budget and performance integration approach in Chapter III of the FY2002 President’s Budget (https://www.gpo.gov/fdsys/pkg/BUDGET-2002-BUD/pdf/BUDGET-2002-BUD.pdf ) and laid out the key elements of the PART in the President’s FY2004 budget (https://www.gpo.gov/fdsys/pkg/BUDGET-2004-PMA/pdf/BUDGET-2004-PMA.pdf ).

[4] Executive Order 13450. https://www.whitehouse.gov/sites/default/files/omb/assets/performance_pdfs/eo13450.pdf; for more detail on how the PART worked, see FY2009 Analytical Perspectives of the Budget Chapter 2. (https://www.gpo.gov/fdsys/pkg/BUDGET-2009-PER/pdf/BUDGET-2009-PER-3-1.pdf ).

[5] Metzenbaum, Shelley H. (2009) “Performance Management Recommendations for the New Administration,” Washington, D.C.: IBM Center for the Business of Government. http://www.businessofgovernment.org/sites/default/files/PerformanceManagement.pdf

[6] The Obama Administration introduced its performance improvement approach in Chapter 2 in the Analytical Perspectives of the President’s FY2010 budget (https://www.gpo.gov/fdsys/pkg/BUDGET-2010-PER/pdf/BUDGET-2010-PER.pdf ) and laid out the key elements of its performance improvement strategy in Chapters 7 to 9 of the Analytical Perspectives of the President’s FY2011 budget (https://www.gpo.gov/fdsys/pkg/BUDGET-2011-PER/pdf/BUDGET-2011-PER.pdf ). The FY 2011 budget also introduced a new Social Indicators chapter, AP Chapter 31; subsequent President’s budgets appropriately moved it to the front of the performance improvement discussion.

[7] The Social and Behavioral Sciences Team (SBST) is a subcommittee of the National Science and Technology Council (NSTC), which coordinates science and technology policy across the diverse entities that make up the Federal research and development (R&D) enterprise. SBST coordinates the application of social and behavioral science research to help Federal agencies advance their policy and program goals and better serve the nation. The SBST annual report https://sbst.gov/download/2016%20SBST%20Annual%20Report.pdf describes some of the evaluations it has helped agencies do.

[8] The Leaders Delivery Network is a subset of the agency priority goal leaders who come together regularly to learn from and brainstorm with each other and outside experts how to drive progress on their priority goals. The White House Leadership Development Fellows is a group of individuals competitively selected from across government to support implementation of cross-agency priority goals and other cross-agency initiatives.

[9] P.L. 114-113, Consolidated Appropriations Act of 2016, Division E, Title VII, Section 721 (p.129 Stat 2478) https://www.congress.gov/bill/114th-congress/house-bill/2029/text.

[10] Maple, Jack and Chris Mitchell (1999) The Crime Fighter: Putting the Bad Guys out of Business. New York: Doubleday, 33.

[11] The law requires agencies and OMB to identify goals where targets have not been met and describe plans and the senior officials responsible for managing progress on unmet goals. (31 U.S.C. § 1116; GPRA Modernization Act, Section 4.) OMB guidance on strategic reviews can be found at https://www.whitehouse.gov/sites/default/files/omb/assets/a11_current_year/s270.pdf. The full set of OMB guidance on the GPRA Modernization Act can be found at https://www.whitehouse.gov/sites/default/files/omb/assets/a11_current_year/s200.pdf. For sections 210 through 290 of the guidance, replace “200” in the URL with the desired section number.

[12] Moynihan, Donald P. and Alexander Kroll (March/April 2016) “Performance Management Routines That Work? An Early Assessment of the GPRA Modernization Act,” Public Administration Review, Volume 76, Issue 2, 314–323. U.S. Government Accountability Office (2013) Managing for Results: Executive Branch Should More Fully Implement the GPRA Modernization Act to Address Pressing Governance Challenges, GAO-13-518, June 26, 2013. http://www.gao.gov/products/GAO-13-518.

[19] To avoid agency temptation to “game” the system by picking timid targets that can be easily met but that fail to encourage innovation, current OMB guidance makes clear that OMB will consider it a problem when an agency meets all of its stretch targets. OMB guidance states, “Agencies are expected to set ambitious goals in a limited number of areas that push them to achieve significant performance improvements beyond current levels…. OMB generally expects agencies to make progress on all of their ambitious goals and achieve most of them, but at the same time will work with an agency that consistently meets a very high percentage of its ambitious goals to assure it is setting sufficiently ambitious goals.” Section 200.5 of OMB Circular A-11. (https://www.whitehouse.gov/sites/default/files/omb/assets/a11_current_year/s200.pdf) See also FN 11.