Measuring Meaningful Differences: College Rankings and Identity

by cv harquail on September 16, 2010

Here’s a mini-exam for you.

College ranking systems are:

A. A great way to sell magazines and get your publication’s name in the news
B. A scam that preys on the social and economic insecurities of educational organizations
C. A somewhat-helpful guide to prospective students
D. A process that is entirely gamed by the organizations that are being ranked
E. A problematic way to assess the meaningful distinctiveness of any institution
F. All of the above

If you chose F, go straight to the next question.

Everyone from Forbes to Business Week to US News & World Report to The Economist to the Princeton Review has been ranking colleges and graduate programs. Every year when these rankings come out, we hear all about the ways in which they are flawed.

When I step back and look at the whole picture, I wonder whether the distinctions these rankings make as they compare schools are very meaningful. Are they just a way to display assorted differences among schools? Or are they authentic distinctions along meaningful criteria?

Do these rankings tell us anything meaningful about the organizations that are ranked?

Rankings vs. Meaningful Differences

These ranking systems tend to emphasize the financial assets of the institution, the academic potential of the student body, the school’s popularity among 17- and 18-year-olds, and the perceived prestige and/or elitism of the institution. Every year they seem to add more and different measures, as though the sheer amount of data in the survey can make the distinctions among schools more meaningful.

Increasing the number of different measures makes the rankings more useful to some potential students (and their parents), to the degree that the rankings incorporate components that are important to the student. Some students do want to compare the number of varsity sports teams from one school to another.

But in terms of telling us what those colleges are like, what defines them, what makes them significant, these long rows of numbers don’t tell us much at all.

As Jo Ellen Parker, President of Sweet Briar College, explains:

Rankings lists can produce strange conjunctions. On the Forbes.com list this year #87 is Sweet Briar, and #88 is Johns Hopkins University. While I have no doubt that Forbes’ methodology genuinely produced these results, it strikes me that these two excellent institutions are so different in nature and situation that their appearance side by side is almost startling. (emphasis mine)

Comparing Johns Hopkins and Sweet Briar overall is rather like comparing, well, a laptop and an autoclave. Both might be highly rated, but they’re far from interchangeable.

As Parker points out, these schools are ranked close together and rated as being pretty similar, when in fact they are not really alike at all.

Metrics about inputs (e.g., % applicants from USA) and metrics about component parts (e.g., university endowments) don’t necessarily convey information about an organization’s important qualities. The schools’ average SAT scores and number of varsity sports teams do not help us understand either school’s core identity, what defines that school and its community, and what values that school holds dear.

Meaningful Differences: Identity and Core Values

In contrast to mainstream ranking strategies (e.g., those employed by Forbes, Business Week, US News & World Report, etc.), and in contrast to the ethos behind these rankings, is the approach taken by Washington Monthly. Washington Monthly focuses not on the prestige or elitism of the institution, but on how well these schools serve the public interest.

Washington Monthly ranks schools … “based on their contribution to the public good in three broad categories:

  • Social Mobility (recruiting and graduating low-income students),
  • Research (producing cutting-edge scholarship and PhDs), and
  • Service (encouraging students to give something back to their country).”

Why measure these outcomes?

By measuring these outcomes, Washington Monthly is sharing data about how committed these schools are to the values that underlie these outcomes. That, in turn, tells us something about the qualities of the school itself.

For example, knowing that Bryn Mawr College ranks 7th this year in the percentage of undergraduates who go on to get PhDs tells us more about the intellectual caliber of the experience at this college than its 49% acceptance rate (USNWR).

But even more important, the Washington Monthly rankings measure what kinds of transformations these institutions are able to create for the students who join them and graduate from them.

Organizational Identity and Transformations

You can get terrific insights about an organization by looking at what happens with/to the people who join it.  Washington Monthly’s rankings tell us something about the organization’s positive approach to diversity, its orientation towards learning, and its orientation towards contributing to the world. These rankings give us a sense of what values inform the community, and what values will be emphasized by the members.

Of course, not every school values diversity, learning, and contributing to the world, not every school makes these values part of their core identity, and not every high school senior wants their college experience to emphasize these values. But Washington Monthly’s survey offers meaningful information to these schools and students, by showing them (and others) which schools choose not to emphasize these values.

It’s always difficult to compare lots of organizations simultaneously on criteria as idiosyncratic as their identities. Each institution’s identity is unique, and so you’re always comparing apples to oranges and to strawberries. Still, there are some comparisons that are meaningful: comparisons based on transformations (outputs) that reflect values, and metrics that demonstrate values.
