U.S. News & World Report (USN&WR) has offered such rankings for many years, as they have proved of interest to college-bound students (and, most likely, their parents).
Over time, colleges took note of their rankings and, as has been observed, many began to focus on the metrics (possibly to the detriment of their primary mission) in order to improve their standing. Could millions of dollars in funding and alumni donations be at stake? Could the quality of applications change based upon a school's ranking? Perhaps.
Forbes relies on the work of the Center for College Affordability and Productivity (CCAP) to prepare its ranking. CCAP shares reports publicly that cover a range of interesting topics centred on American colleges, and I gather they take their work seriously and earnestly.
With regard to the ranking they prepare for Forbes, they provide a fairly detailed summary of their criteria, their sources and their methodology, and they provide a summary ranking for each of the criteria by school.
CCAP uses five general categories:
1. Student Satisfaction - student evaluations, retention rates (27.5%).
2. Post-Graduate Success - Who's Who, American Leaders, and salaries (32.5%).
3. Student Debt - debt loads, default rates, debt percentage (17.5%).
4. Four-Year Graduation Rate - four-year graduation rate (11.25%).
5. Academic Success - student awards and alumni with PhDs (11.25%).
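CCAP does not publish the formula it uses to combine these pieces, but the most natural reading of the weights is a weighted sum of category scores. Here is a minimal sketch of that idea, assuming (and it is only an assumption) that each category score has already been put on a common 0-100 scale; the per-school numbers are hypothetical.

```python
# A minimal sketch of how a weighted composite *might* be computed from the
# five CCAP categories. The weights come from the list above; the per-school
# category scores are hypothetical (CCAP does not publish the raw data), and
# the common 0-100 scale is an assumption, not CCAP's documented method.

WEIGHTS = {
    "student_satisfaction": 0.275,
    "post_grad_success":    0.325,
    "student_debt":         0.175,
    "four_year_grad_rate":  0.1125,
    "academic_success":     0.1125,
}

def composite_score(category_scores: dict) -> float:
    """Weighted sum of category scores, assumed to share a common scale."""
    return sum(WEIGHTS[cat] * score for cat, score in category_scores.items())

# A hypothetical school:
example = {
    "student_satisfaction": 82.0,
    "post_grad_success":    74.5,
    "student_debt":         68.0,
    "four_year_grad_rate":  90.0,
    "academic_success":     61.0,
}
print(round(composite_score(example), 2))  # 75.65
```

Whether CCAP actually combines normalized scores, raw scores, or ranks in this final step is precisely what is unclear from the published tables, as discussed below.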
Remember that Forbes' tagline is "The Capitalist Tool", so they have a focus on capitalism and, in general, a penchant for monetary returns and professional acumen as measures of success.
So, it is not surprising that they weight salaries, graduation rates, and academic achievement highly. Curiously, however, the individual measure with the highest weight (at 17.5%) is student evaluations from RateMyProfessor.com (RMP).
Now, the good folks at CCAP have done their homework in assessing the validity of the RMP data: they have performed their own analyses (and point to corroborating work by others) to establish the value of these data.
I have read a fair number of blog posts that question the CCAP data, but I'll not quibble with the detractors for now. Still, why should a subjective measure from students (the most confounding of all consumers, as the less you give them the happier they are!) on a populist website carry the highest individual weight in the overall ranking?
CCAP generates a score for each element of the five categories and, in some fashion that remains a mystery to me, combines them into a total score.
As stated above, they do weight the scores, but it's not clear to me if (or how) they normalize or scale them so that they can be meaningfully combined.
I do know that in the tabular summary for each category, CCAP provides information for each school by its rank order.
Is the total score based upon a school's rank within each sub-category? I hope not! But I can't say it's obvious to me how the final score is determined.
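To make the concern concrete, here is a small sketch, with entirely hypothetical numbers, of why rank order alone can mislead: a rank table makes tightly clustered schools look evenly spaced, while the underlying scores show how close they really are. The min-max normalization is my own choice for illustration, not necessarily CCAP's.

```python
# Hypothetical raw scores for one category, four schools (higher is better).
# Rank order suggests evenly spaced quality; normalized scores reveal that
# three schools are essentially indistinguishable and one is far behind.

def minmax_normalize(scores):
    """Scale raw scores to [0, 1] so the size of the gaps is visible."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

schools = ["A", "B", "C", "D"]
raw = [90.0, 89.9, 89.8, 60.0]

# The rank table: 1, 2, 3, 4 -- every gap looks the same size.
for position, (score, name) in enumerate(sorted(zip(raw, schools), reverse=True), start=1):
    print(position, name, score)

# The normalized scores: A, B, and C are nearly identical; only D trails.
for name, norm in zip(schools, minmax_normalize(raw)):
    print(name, round(norm, 3))
```

If the published tables report only the rank order, a reader has no way to tell which of these two pictures is closer to the truth.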
I wrote to CCAP (they provide an address on their website) seeking more information on the actual scores for each school. I received a timely reply that stated, "Unfortunately, we are contractually obligated not to make the raw RMP scores publicly available."
Fair enough; if Forbes paid for this work to be done, they can do with it as they wish. But I do not believe keeping the data private is a good idea for anyone: not for the schools, their professors, students and alumni, nor for Forbes or CCAP.
Here's a simple anecdote by way of explanation: I ran into a similar situation reviewing wages for teachers in several districts where I live.
One local union was upset that their wages ranked among the lowest in the county while all measures of success (test scores, enrolment and much more) ranked at or near the top.
It is not difficult to see why such a situation could turn into a major issue with placards and protests! Alas, pay rates are not a complete measure of compensation (medical benefits, retirement, days worked, etc.), and even if they were, the variations in pay rates were not egregious.
Sure, if I'm making $500 less than someone else doing the same job, I do care and I would want the situation rectified; I get that point. But for union leaders to fan the flames with incomplete statements about pay-scale rankings does more harm than good.
So I ask CCAP again: What are the actual scores, particularly the RMP scores, for these colleges? How significantly do these scores vary by category? Are the top-ranked colleges really that much better than those at the bottom by your quantitative measures?
It simply is not sufficient for a magazine of Forbes' stature and readership or for CCAP to present information in a manner that cannot be substantiated.