Joe Nocera has a nice piece in Friday’s Times about college rankings. He argues that the popular magazine rankings of colleges are crude, at best: “U.S. News likes to claim that it uses rigorous methodology, but, honestly, it’s just a list put together by magazine editors.” He’ll get no disagreement from most academics, who know well how sloppy most magazine methodologies are, but is it possible we’re missing the point?
The larger point, I think, is that there’s enormous value in measurement systems — “metrics,” as they’re sometimes called. Even crude measurements can be better than no measurement at all. There are, however, two inherent difficulties with doing measurement well: (i) many of the things we care about are difficult to measure and (ii) any specific measurement system can be gamed or even corrupted. You need to take both into account when you use metrics.
There’s nothing special about colleges here; you see the same issue all over. Profitability of firms: there’s a reason accounting is a profession. Performance evaluation of individuals: there’s almost never a mechanical system that does this well. Bond ratings: ditto. College football: how do we know who the best teams are when they don’t play each other? What does “best” even mean? I’m sure you have your own examples.
So what’s the solution? First, we need to work constantly toward better measurement. At the same time, we need to use some judgment about any measurements that come our way. It would be nice if there were an easier way, but there’s not. Meanwhile, we’re working hard here to give value to our students. We think they’ll notice, and that’s good enough for us.