Recent developments in university rankings have led many to conclude that the criteria used to rank universities are erratic and largely arbitrary.
Aspects of the rankings can either be readily "gamed" by institutions or make little to no sense. How, for example, can the same formula compare the financial viability of a major public university and a tiny private one? And how do the rankings actually gauge class size?
Are these rankings flawed by design? Here is why that might be the case.
1. Inconsistency in Rankings
Columbia University fell from second to eighteenth after questions were raised about the reliability of its data. The episode has led many to wonder whether the rating system is really this susceptible to simple manipulation.
In a blog post from February, Michael Thaddeus, a Columbia mathematics professor, charged the school with reporting figures that were "inaccurate, questionable, or very misleading." Notably, the institution later acknowledged in a statement that it had computed some numbers incorrectly.
Any school that can fall from No. 2 to No. 18 in a single year discredits the entire ranking procedure.
2. False Status Fixation
U.S. News is not the only outlet that thrives on people's obsession with social standing; The Wall Street Journal, Forbes, and Washington Monthly all publish college rankings.
That obsession is predicated on the somewhat illogical notion that attending a highly ranked school makes you more likely not only to get jobs, but also to get noticed and to build strong contacts. As the saying goes, "you'll have a pedigree," and in the United States a bit of that pedigree is granted by family, but most of it is conferred by education.
3. False Statistical Objectivity
Even with the 17 factors and sub-criteria U.S. News uses, such as reputation (20 percent), student selectivity (7 percent, with SAT and ACT scores weighted at 5 percent), and debt borne by graduates (5 percent), colleges are still far too complex to be meaningfully reduced to a single figure.
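The weighting scheme above can be sketched as a simple weighted sum. This is an illustrative sketch only: the weights loosely mirror the percentages mentioned in the text, the remaining criteria are omitted, and both schools' metric scores are invented.

```python
# Hypothetical sketch of how a composite ranking collapses many metrics
# into one number. Weights and school data are illustrative, not real.

WEIGHTS = {
    "reputation": 0.20,     # peer assessment
    "selectivity": 0.07,    # includes SAT/ACT sub-weight
    "graduate_debt": 0.05,
    # ...the remaining criteria would bring the total weight to 1.0
}

def composite_score(metrics: dict) -> float:
    """Weighted sum of metric scores on a 0-100 scale."""
    return sum(WEIGHTS[name] * metrics[name] for name in WEIGHTS)

school_a = {"reputation": 90.0, "selectivity": 95.0, "graduate_debt": 60.0}
school_b = {"reputation": 85.0, "selectivity": 99.0, "graduate_debt": 80.0}

print(round(composite_score(school_a), 2))
print(round(composite_score(school_b), 2))
```

The point of the sketch is that once everything is flattened into one number, the choice of weights, not the underlying data, largely decides the ordering.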
Although no ranking is flawless, it is possible to conduct longitudinal studies that compare how different schools perform over time using the same criteria for assessment.
To meet these criteria, some colleges fraudulently cook their figures, submitting outdated or incorrect data.
In last year's rankings, the school asserted that 83% of Columbia's classes had fewer than 20 students. A Columbia University press release that followed later stated that the university's fall 2021 undergraduate classes averaged 5.8 students.
A year ago, Columbia also boasted that all of its tenure-track professors held "terminal degrees," the highest degree available in their fields. It later revised that figure down to around 95%.
4. Fear and Apprehension
The peer assessment, a survey sent to the presidents and deans of other schools, has caused the most concern among critics of these rankings. U.S. News' reputation poll accounts for 20% of the total score, but critics say it is difficult for anybody to know enough about hundreds of schools to evaluate them fairly.
Nonetheless, schools continue to supply U.S. News with data because they fear that if they don't, the publication will find data elsewhere that could be detrimental to their reputation. The same could be the case for any other publication.
5. Scale and Separation Illusions
Reading the U.S. News rankings, people get the impression that the gap between No. 1 and No. 10, or No. 30, is huge. In reality, schools tend to be "clustered" together: the differences among the top 15 are almost non-existent, and the differences across the top 50 are far smaller than most people realize. If you can succeed at UCLA or Princeton, you can succeed at Harvard or MIT.
Any ranking system is only as good as the criteria it uses. What factors influence the ratings? When "top" and "best" are used interchangeably, how is either defined? What exactly does "best" mean?
Because institutions know the criteria, they can easily manipulate their stats. Attracting more applicants than the previous year while admitting no more of them makes a school appear far more selective than it actually is.
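The selectivity trick is plain arithmetic. A minimal sketch with invented numbers: the same count of admitted students looks twice as "selective" once the applicant pool doubles.

```python
# Illustrative arithmetic (invented numbers): admitting the same 2,000
# students looks far more selective once applications double.

def acceptance_rate(applications: int, admits: int) -> float:
    """Acceptance rate as a percentage of applications."""
    return admits / applications * 100

last_year = acceptance_rate(20_000, 2_000)  # 10.0% acceptance rate
this_year = acceptance_rate(40_000, 2_000)  # 5.0% acceptance rate

print(f"{last_year:.1f}% -> {this_year:.1f}%")
```

Nothing about the admitted class changed; only the denominator did, which is exactly why application-volume campaigns are an easy way to move this metric.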
A school can also waive application fees for students with SAT scores above 1400, which makes its numbers look better. The assertion that U.S. university rankings are flawed by design is therefore defensible.