The Minnesota Historical Society’s Local History Services helps Minnesotans preserve and share their history. This blog is a resource of best practices on the wide variety of museum, preservation, conservation, funding, and non-profit management topics. We’re here to help.
Hanging Chads of Performance Measurement
Commission on Private Philanthropy and Public Needs
This federal commission conducted its work from 1973 to 1975 and is often better known as the Filer Commission, after its chair, John Filer. Its purpose was to investigate nonprofit charitable grantmaking and the Internal Revenue Service’s oversight of its taxation. One of its recommendations was a federal advisory committee on private philanthropy and public needs that would oversee nonprofits and ensure they worked for the public good. Although briefly established, this committee was disbanded in 1978.
The Filer Commission authored the landmark report Giving in America: Toward a Stronger Voluntary Sector (1975). The 248-page report is considered one of the most comprehensive studies of nonprofits, and it has informed most subsequent investigative initiatives.
The notion of studying nonprofits then sparked a discussion over who should study them. Should it be the government, or nonprofits themselves? Both options have their problems. On the one hand, heavy regulation comes at a high cost, both in money and in flexibility for innovation. On the other, as noted in Data for Dollars, if nonprofits diagnose themselves, how valid, objective, or reliable is that self-study? With the federal advisory committee unsupported, a new nonprofit emerged called Independent Sector.
In the 1970s and 1980s, Independent Sector set out to maintain distance between funding and study, to avoid the appearance that money bought opinions. How well it succeeded has naturally been debated ever since. All the same, the research available on its website underscores both the kind of study the nonprofit sector needs and the sector’s impact.
Integrity and Confusion
This debate over who should collect data about nonprofits, and who should interpret the meaning of that data, has never really been settled.
The positive outcome of the debate is that attention has remained squarely focused on the integrity of the data and methods. Transparency is often quite high, and if one takes the time to consider the conclusions drawn, a great deal of clarity can emerge.
However, the negative outcome is that the field constantly receives requests to fill out yet another survey. One report after the next offers new recommendations about what to do, and distinguishing the value of each report can be difficult. Confusion can reign.
Beryl Radin’s 2006 book, Challenging the Performance Movement: Accountability, Complexity, and Democratic Values, examines the federal government’s use of measurable outcomes in the 1990s. She wrote, “Increasingly, citizens both within the United States and across the globe are unwilling to blindly accept the level of work of a range of institutions within their societies. These include not only government institutions but also foundations and organizations in the health sector, education, and other areas.”
And for good reason – how many of us are troubled to read in the media about ‘wasteful’ spending? Thus many ‘silver bullet’ efforts to quantify performance have been put in place as safeguards, despite the complexity and nuance of programs that are often unquantifiable in any meaningful way.
Radin comments on this paradox, and then notes four problems with the way performance was measured when she worked for the Department of Health and Human Services.
- First, “the agency officials who had the most difficulty complying with GPRA [Government Performance and Results Act of 1993] requirements were the very people who were most concerned about achieving effective programs.”
- Second, performance measurement “tended to be insensitive to differences.”
- Third, performance measurement “often bypassed the judgments of professional staff members who were essential to program implementation success.”
- And finally, performance measurement “rarely acknowledged the complex goals of public action, and instead, focused only on efficiency outcomes.”
Outcome Based Evaluation
If all things eventually reach a well-intentioned but ultimately impractical state, then the drive toward performance measurement reached it with Outcome Based Evaluation (OBE). OBE sets performance goals at the beginning of a program; if those goals are not fully satisfied, the program’s future priority can suffer an adverse effect. That naturally raises fears in project managers about what happens if an effort is successful yet just misses the projected target.
The problem with OBE is that it seeks to simplify complex solutions and focuses on efficiency. While the finite resources available to nonprofits should never be squandered, determining measurable outcomes should recognize the complexity of what nonprofits seek to achieve and focus on effectiveness.
Two Great Questions
Therefore, in terms of measuring the value of nonprofit historical organizations, two questions remain open for debate.
First, who should collect and interpret the data? Should it be the government, another party, or the nonprofit sector itself? And no matter who collects and interprets, how will those charged with the task gain appropriate training?
Second, how can nonprofits balance good stewardship for the finite resources available to them with the need for demonstrably effective (rather than purely efficient) programs? In other words, how can nonprofits show donors that their gifts have been responsibly used to positive result?