evaluating charities, part II
[Note: I started this post a number of months ago; some details might now be out of date.]
In a past post, I discussed sites that evaluate charities based primarily on financial metrics. Before leaving that topic entirely, I wanted to also point out that Guidestar has a bunch of information about charities, their structures, and their finances. You can access the tax records from the past several years for many charities. The site helps you verify an organization’s nonprofit status and do other research into organizations. It seems to be mostly aimed at people working in the nonprofit sector, large philanthropic organizations, businesses, and academic researchers. It’s kind of clunky and ugly and harder to use than some tools, sometimes data is sparse or missing, and they charge money for some services. But if you want access to a lot more information directly from an organization about its structure and finances, I’d check here first. This may also be useful in using The Charity Rater, described below.
Moving on to other ways to evaluate charities aside from financial metrics, in the rest of this post I will be exploring GiveWell (based on empirical evaluation of results!), GreatNonprofits (kind of like Yelp), Philanthropedia (expert-based), and The Charity Rater. If you know of other charity-evaluating organizations or metrics that I haven’t discussed in either post, let me know, and I’ll cover them in a follow-up.
GiveWell is a site that gives in-depth reports about charities, and empirical evidence relating to their actual effectiveness in the past. They also look at other issues, such as cost effectiveness, transparency, and whether it appears that the charity can productively use further funds. I’m really excited about the apparent empiricism of this approach. They seem to ask a lot of good questions about program effectiveness, provide whatever relevant empirical data they can find addressing these questions, and also note unanswered questions. They show charts and graphs as appropriate and have a bibliography at the end of their report. This seems fabulous! However, I haven’t had a lot of opportunity to investigate this site in detail for more than a few charities. I really want to know more about how they choose charities, how they collect and evaluate studies, whether they look for conflicts of interest and other possible issues with studies, and how they translate findings into rankings.
A few limitations: GiveWell has only evaluated around 400 charities (presumably due to the large amount of effort they put into each organization). And they have a limited number of areas that they currently seem to address: health, early childhood care, education, employment assistance, and economic empowerment. These are great causes, but if you’re interested in evaluating charities addressing other topics, you’ll probably have to look elsewhere. Also, I’m guessing that lots of small organizations’ programs have not been studied in enough detail to get ranked on this site. That doesn’t necessarily make them unworthy, but you’ll have to look for other metrics to try to make guesses about how effective and worthwhile they are.
GreatNonprofits is a site for finding and sharing reviews of nonprofits. As a reviewer, you list your role (e.g., donor, board member, parent of a child helped by the organization), give your review, and answer questions about what you thought worked well, along with suggestions for improvement. Ratings are out of 5 stars. Some of the organizations have hundreds of reviews, and the reviews are often detailed and informative; you can learn a lot about people’s personal experiences with organizations through this site. There are also sometimes discussion threads in the reviews — especially when someone has something negative to say about a charity, someone from the organization will often respond.
I like this site idea, and a lot about it appears to work well. Unfortunately, there are a few drawbacks. There’s sparse data on a lot of organizations right now, though hopefully that will change as the site grows. Additionally, of the roughly 500 organizations that have been rated, about 450 have nothing but 5-star ratings. That makes the ratings not very useful as a comparative metric between different organizations. Another problem is that the ratings show up as soon as there is a single reviewer — meaning that a single really good or really bad review can put an organization at the top or bottom of the list when you sort by ratings. However, you can also sort by number of reviews and avoid the ones where there’s insufficient data. Other drawbacks: When people fill out the “Ways that this organization could be improved” section, they often say things like “Nothing could be improved!” or “If everyone in the world donated to this nonprofit!” So you visually get the appearance of reviews that are nuanced and balanced, but the reviews may really be one-sided. As long as you actually read the content in detail, you can see what’s really going on, but you can’t easily scan the page and get a sense of whether people are really listing more pros or cons.
Many of those problems are things that GreatNonprofits can fix with some tweaking, or things that will hopefully improve over time as more people use the site. But then there’s the fact that it’s relatively easy for an organization to stack the deck in its own favor and get people within the organization or donors to give multiple positive reviews under multiple personas. That’s the hardest problem for a review site to fix, as you can’t easily tell when it’s happening. Apparently Yelp tries to detect such patterns; I don’t know if GreatNonprofits is also trying to do something of the sort.
Philanthropedia takes a very different tack. They choose particular problems to focus on and select panels of experts with experience in that area. The experts then decide which organizations addressing a particular issue are most effective, and they create “mutual funds” that you can donate to, where the money is divided between those organizations. Their philosophy on evaluating nonprofits is as follows:
We believe that evaluating nonprofit effectiveness is very challenging. However, there are professionals who are well-suited to assess nonprofit impact. Experts, such as foundation professionals, academics, and nonprofit executives have access to unique and non-public data about nonprofit performance. In addition, these experts are best-suited to interpret these data through advanced models for nonprofit effectiveness. Therefore, we rely on experts to identify which organizations they think are strongest.
The experts they choose include people who work in nonprofits, academics, philanthropic grant managers, government workers in sustainability and environmental planning, and more.
This is a fascinatingly different approach to the problem of donating effectively. I’d love to see an evaluation of how well these funds actually “perform”. I don’t know exactly how one would measure that, but I’d like to know if donating money to one of these funds is more useful than donating all my money to a single organization, for instance, since we’re often told to give money in fewer places for maximum effectiveness. Or if you donated the same amount to a random group of organizations that do education work, how much worse would that be than the mutual fund? I worry about this because experts often can’t predict things as well as we think they can in complex systems — but on the other hand, I’m not sure how to measure performance here. Certainly, at the very least, by investing in one of these funds, you can rest assured that a bunch of smart, knowledgeable people have done their due diligence on the organizations and that significant research has gone into the allocation of money. As a lazy activist, I like the idea that this saves me work!
One thing of note is that Philanthropedia currently has a limited number of mutual fund topics — climate change, education, homelessness, and microfinance. They say more are coming soon.
The Charity Rater was created by the author of Good Intentions are Not Enough, an excellent effectivist blog dealing with aid in the developing world. It is supposed to give you a method of evaluating aid organizations in particular, based on information found in the organization’s annual report or website. (Here’s a sample result.) I attempted to evaluate a few organizations and found it hard to find some of the necessary information (e.g., a detailed breakdown of the previous year’s expenses, and a copy of their financial audit/review). The creator acknowledges that lack of transparency can make the process frustrating, and that you may have to contact an organization to get some information. Other strengths and weaknesses are discussed on the site. An additional guideline from the site: “This system is not designed to be used with foundations or with umbrella organizations because they do not lead aid projects but instead oversee other organizations.”
I am a bit surprised that there isn’t a list of ratings for well-known aid organizations already on the site. I can see value in educating people about how to evaluate charities during the process of doing so, which means making them go get the charity information and answer the questions themselves. However, I also think it would be great to have pre-calculated ratings, and a discussion of what exactly the ratings reveal, available for some major aid organizations that many people donate to.