Evaluating charities, part I

I’ve been wanting to post for a while about the different sites out there that evaluate charities and help donors decide how to best spend their money.  I’ve finally done the research, and I think I’m actually going to split this into at least two posts — possibly a longer series.  In Part I, I’m going to discuss three of the oldest and best known charity evaluators, all of which grade organizations in large part according to financial metrics.  I’m also going to discuss the controversy over using such metrics, and the pros and cons of these sites.  In Part II, I’m going to discuss some newer sites that are finding other ways to evaluate nonprofits.

I’m going to start with the controversy involved in using financial metrics, because it’s interesting and worth keeping in mind when reading about the sites below.  All of the evaluators I’m about to discuss measure, among other things, the share of an organization’s income that goes toward administrative overhead.  The idea is that organizations should be efficient, spending their money on the cause rather than on large salaries, unnecessarily expensive resources, and general bloat.  While this idea seems reasonable at first glance, it has a number of critics.

The blog Good Intentions Are Not Enough (which I’ve been enjoying a lot in general) has an excellent post on why you should not donate based on administrative overhead. The arguments include the following points:

  • the amount spent on administrative costs does not correspond to the quality of the work done by the organization
  • charities can game the system to make the numbers look better
  • the evaluation system may hurt charities by causing them to cut back on staff or resources, or to prioritize their projects in ways that help their ratings but are not best in terms of the organizations’ results

Tim Ogden presents additional arguments at Philanthropy in Action:

  • The rules for determining overhead costs are vague and every charity interprets them differently
  • Accounting experts estimate that 75% of charities calculate their overhead ratio incorrectly
  • It discourages charities from investing in tools and expertise that would make them more effective

GiveWell (which I’ll be talking about in detail next post) has a funny/scary illustration of why this kind of thinking is so damaging in evaluating nonprofits.

Additionally, a study by Lowell, Trelstad, and Meehan done in Summer 2005 (note that that was 5 years ago; some of the sites’ metrics have changed) drew the following troubling conclusions:

We conducted a detailed study of the agencies to determine how useful a service they provide. The results were sobering: Our review of their methodologies indicates that these sites individually and collectively fall well short of providing meaningful guidance for donors who want to support more efficient and effective nonprofits…. They rely too heavily on simple analysis and ratios derived from poor-quality financial data; they overemphasize financial efficiency while ignoring the question of program effectiveness; and they generally do a poor job of conducting analysis in important qualitative areas such as management strength, governance quality, or organizational transparency.

In part, the study tried to use the three leading financial-based charity evaluators to choose between 7 of the largest recipients of tsunami aid (this paper was written in the aftermath of the December 2004 tsunami).  They found that all the organizations essentially received excellent ratings from all the sites, meaning that the sites did not provide a basis for choosing between the organizations.   While this is less than ideal, it’s hard to tell whether this is really an error on the part of the sites; what if all 7 organizations really were quite efficient and effective?  A more extensive analysis of how the charities’ ratings correlate with performance would have been more useful.  The study also had more specific critiques of each site, as well as specific suggestions, all of which I’ll cover below.

And now, on to the charity rating sites:

Charity Navigator is one of the best known websites evaluating charities.  They have a complex but fairly transparent system of ratings.  The ratings are all financial, and break down into the following measurements:

  • organizational efficiency
    • percent of budget spent on programming expenses
    • percent spent on admin expenses
    • percent spent on fundraising expenses
    • fundraising efficiency — amount of money that is spent to raise $1 in donations
  • organizational capacity
    • primary revenue growth — are they outpacing inflation?
    • program expenses growth — are they supporting more and bigger programs over time?
    • working capital ratio — how much does the organization have immediately available in liquid assets?

Each submeasure is translated into a separate rating, and these are then combined into an overall rating of 0 to 4 stars.
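To make the arithmetic concrete, here’s a minimal sketch in Python of how ratios like these could be computed and rolled up into a star rating.  Every number, weight, and threshold here is made up for illustration; Charity Navigator’s actual conversion tables are more involved and, as noted below, vary by subsector.

```python
# A hypothetical sketch of financial-ratio scoring in the spirit of
# Charity Navigator's metrics. All figures, weights, and thresholds
# below are invented for illustration.

def efficiency_ratios(program, admin, fundraising, contributions):
    """Compute the organizational-efficiency measures from annual figures."""
    total = program + admin + fundraising
    return {
        "program_pct": program / total,          # share spent on programs
        "admin_pct": admin / total,              # share spent on administration
        "fundraising_pct": fundraising / total,  # share spent on fundraising
        # fundraising efficiency: cost to raise $1 of donations
        "fundraising_efficiency": fundraising / contributions,
    }

def working_capital_ratio(liquid_assets, total_expenses):
    """Years the organization could operate on its liquid assets alone."""
    return liquid_assets / total_expenses

def stars(r):
    """Toy roll-up: scale each submeasure to [0, 1], average, map to 0-4."""
    # Full credit for >= 75% program spending, scaled linearly below that.
    program_score = min(r["program_pct"] / 0.75, 1.0)
    # Full credit for spending <= $0.10 to raise each dollar.
    efficiency_score = min(0.10 / r["fundraising_efficiency"], 1.0)
    return round(4 * (program_score + efficiency_score) / 2, 1)

r = efficiency_ratios(program=800_000, admin=120_000, fundraising=80_000,
                      contributions=900_000)
print(r["fundraising_efficiency"])                # ~0.089 spent per $1 raised
print(working_capital_ratio(350_000, 1_000_000))  # 0.35 years of reserves
print(stars(r))                                   # 4.0 (toy rating out of 4)
```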

Lowell et al. note that Charity Navigator only uses the past year of financial records to make their assessment (this may have changed somewhat since 2005, but I still find several places on the site that refer to using only the most recently filed tax document).  This can lead to an incomplete picture and large fluctuations in ratings from year to year.  On the plus side, they note that Charity Navigator recognizes (appropriately) that not all types of charity work should be measured by the same financial standards:

The site also uses a range of financial ratios and peer benchmarks, taking into account that cost structures (and thus financial metrics) may vary by subsector. For example, food banks, because of their reliance on donated goods, may have less need for a certain level of cash reserves as a percentage of their revenues, or that public broadcasting, because it uses expensive airtime for fundraising, may have slightly higher fundraising costs.

In the end, by looking only at the financial situation, Charity Navigator does nothing to evaluate how effective organizations are at meeting their programming goals. The second organization we’ll look at, the American Institute of Philanthropy (AIP), doesn’t improve on this. Instead of a 4-star rating system, AIP hands out letter grades.  AIP’s method is mostly a combination of variants on the Programming Expenses and Fundraising Efficiency measures used by Charity Navigator.  They additionally give demerits to organizations that hold a large amount of assets in reserve — the idea is that organizations with enough money to stay afloat for some time without donations don’t urgently need more. (This differs from Charity Navigator’s rating system, though Charity Navigator does have a cutoff in its Working Capital Ratio measure beyond which an organization gets no further ratings benefit from holding more liquid assets; they also recognize that having too much in reserve can be counterproductive.)
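To illustrate the reserve demerit, here’s a tiny sketch: compute how many years the charity could run on its available assets, and knock the grade down past some cutoff.  The three-year cutoff and the one-step penalty are my assumptions for the example, not AIP’s published rule.

```python
# Hypothetical sketch of a reserve demerit in the spirit of AIP's
# letter grades. The 3-year cutoff and one-grade penalty are assumptions.

GRADES = ["A", "B", "C", "D", "F"]

def demerit_for_reserves(grade, available_assets, annual_budget,
                         max_years=3.0):
    """Dock the grade one step if reserves exceed max_years of spending."""
    years_of_reserves = available_assets / annual_budget
    if years_of_reserves > max_years:
        idx = min(GRADES.index(grade) + 1, len(GRADES) - 1)
        return GRADES[idx]
    return grade

print(demerit_for_reserves("A", 4_000_000, 1_000_000))  # "B": 4 years banked
print(demerit_for_reserves("A", 1_500_000, 1_000_000))  # "A": 1.5 years is fine
```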

Unlike Charity Navigator, AIP does not work just from last year’s tax forms.  Instead, they analyze charities’ audited financial statements. Lowell et al. praise this, but criticize AIP for often lacking transparency in its system of ratings and ratings adjustments.  (Again, I haven’t extensively checked these statements to make sure they’re up to date, partly because Lowell et al. did not provide copies of the analyses they received at the time, so it’s hard to compare them with current analyses.)

The final charity evaluator that Lowell et al. discussed was the Better Business Bureau’s Wise Giving site, which goes somewhat beyond financial metrics.  This site tries to offer a more in-depth analysis of charities according to 20 Charity Accountability Standards — each charity is issued a pass/fail on each of the 20.  The standards try to ensure that the organization:

  • has active oversight by an independent board
  • follows procedures to regularly assess their own effectiveness (NB: this does not mean that the BBB measures their effectiveness — merely that the organization needs to have regular evaluation and oversight in this area)
  • has high programmatic spending/low admin overhead (similar to AIP’s and Charity Navigator’s measures, but a binary cutoff)
  • has efficient fundraising (similar to AIP’s and Charity Navigator’s measures, but a binary cutoff)
  • does not accumulate too many assets in reserve (similar to AIP’s penalty)
  • has full transparency about finances and expenditures
  • provides accurate and complete information about the organization
  • provides accurate and fully informative fundraising materials
  • has a process for addressing concerns of donors and complaints through the BBB

(Yeah, I know that’s fewer than 20, but that’s what they boil down to.)
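Since the standards are pass/fail, a report effectively reduces to a checklist.  Here’s a minimal sketch of that shape; the standard names, cutoffs, and data below are invented for illustration and are not the BBB’s actual 20 standards.

```python
# Hypothetical sketch of a pass/fail checklist in the spirit of the BBB's
# Charity Accountability Standards. Names, cutoffs, and data are invented.

CHECKS = {
    "independent_board_oversight": lambda c: c["independent_board"],
    "program_spending_at_least_65pct": lambda c: c["program_pct"] >= 0.65,
    "fundraising_cost_at_most_35pct": lambda c: c["fundraising_cost_ratio"] <= 0.35,
    "reserves_under_3x_budget": lambda c: c["reserve_years"] < 3.0,
}

def evaluate(charity):
    """Return the list of standards the charity fails; empty means all pass."""
    return [name for name, check in CHECKS.items() if not check(charity)]

failed = evaluate({
    "independent_board": True,
    "program_pct": 0.72,
    "fundraising_cost_ratio": 0.40,  # spends $0.40 to raise $1
    "reserve_years": 1.2,
})
print(failed)  # ['fundraising_cost_at_most_35pct']
```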

In some ways, the Wise Giving site provides a fuller picture than either of the previous two sites — though because its measures are binary, there’s little room for differentiation among organizations that pass.  This is a good site for spotting shady behavior, although sometimes organizations fail due to what could be vagueness or laziness rather than malicious intent.  For example, the Lambda Legal Defense and Education Fund failed to meet one of the standards because

The organization participated in the sale of consumer goods that indicates the organization will benefit from these purchases but does not specify the portion of the purchase price that will go to the charity. Specifically, the promotion states that, “10% of net proceeds” will benefit LLDEF.

It seems likely that not all organizations that fail to meet some of the standards are bad or unworthy of donations.  However, if you don’t know much about a nonprofit and are trying to make sure they’re on the level, this is certainly a good place to look for information.  Unfortunately, because their reports are so in-depth, the BBB has a limited directory of charities that it has reviewed — and many of the charities that they tried to analyze didn’t provide the necessary information.  Still, many organizations are on the list, and it’s worth checking out.

Overall, it’s very important to remember when using the above sites that administrative costs do not tell the whole story about a charity.  Still, these sites all offer some useful information, and they’re probably worth taking into consideration when figuring out where to donate money (especially if you’re trying to make sure that a particular charity isn’t a scam and doesn’t show any seriously alarming financial patterns).  All the sites also have useful tips on topics such as researching charities, donating online, getting off of charity mailing lists, and so on.

Next post, I’ll look at some additional resources that offer some alternative evaluation metrics — including sites that rely on expert evaluators, sites that try to empirically evaluate organizations, and the Yelp of nonprofits.
