
Beyond the Grid

April 11, 2023

About the Author

Nita is a first-year MSc Biomedical Sciences student at the UvA. Interested in pursuing a career in psychopathological research, she is passionate about transparent scientific communication.

Many of us chose to study at a particular school at least partially due to its position in (inter)national university rankings. The appeal of highly ranked schools is typically tied to their reputation for excellence – how the establishment may provide us with better networking and career opportunities as well as higher-quality facilities and research platforms. “The alumni network of Erasmus University Rotterdam (EUR) is very strong and, even before graduating, I have already made valuable connections with successful people who have aided me in jump-starting my professional career”, shared a Master’s student at the EUR’s Faculty of Economics. “I don’t think I could have been employed by the firm I work for without the EUR stamp on my CV.”

While the facilitating effect of a network of influential people is undeniable in terms of future career opportunities, it is worth taking a closer look at the methodology used by the analytics companies that publish university rankings. The QS World University Rankings, for example, are compiled by a British analytics company that conducts global surveys on institutions’ academic and employer reputations, receives data submissions from universities on various ratios (e.g. the proportion of international students), and uses publicly available citation counts to rank universities internationally. Universities that are internationalized, produce high-quality graduates, have relatively few students per academic staff member, and have a high citation count per research paper are considered more prestigious and receive a higher position in the rankings. Although these indicators and their weightings are reviewed periodically to ensure that they are robust and up to date, it could be argued that they do not fully capture the complexity of what makes a ‘good’ university. For instance, relying on citation counts to assess excellence is inherently biased. Although influential papers that present groundbreaking findings are – for obvious reasons – cited more, citation counts are also shaped by other factors, such as the field of research. Fields such as biology have a higher publishing rate, inevitably leading to more references per article and a higher average citation count. This pushes universities with a life-sciences focus up the ranks merely because the metric rewards fast production rather than quality.
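
To make the mechanics concrete, here is a rough sketch (in Python) of how such a composite score could be computed. The indicator values and weights below are invented purely for illustration; QS’s actual indicators and weightings differ and are revised periodically.

# Purely illustrative: a toy weighted-sum score in the spirit of ranking
# methodologies. The values and weights are made up and do not reproduce
# QS's actual, periodically revised formula.
indicators = {                    # hypothetical normalised scores (0-100) for one university
    "academic_reputation": 82,
    "employer_reputation": 74,
    "students_per_staff": 65,
    "citations_per_paper": 90,
    "internationalisation": 70,
}
weights = {                       # hypothetical weights, summing to 1
    "academic_reputation": 0.40,
    "employer_reputation": 0.10,
    "students_per_staff": 0.20,
    "citations_per_paper": 0.20,
    "internationalisation": 0.10,
}
composite = sum(indicators[k] * weights[k] for k in indicators)
print(f"Composite score: {composite:.1f}")   # institutions are then sorted by this number

In a scheme like this, a heavily weighted indicator such as citations per paper can pull a university up or down the list on its own, which is exactly why field-dependent citation habits matter so much.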

Although the QS World University Rankings have faced plenty of methodological criticism over the years, another analytics company, U.S. News & World Report (USNWR), has come under even more scrutiny. USNWR is a Washington, D.C.-based company that generates American university rankings directly from data that universities send in, including student graduation rates and standardized test scores. Arguably, if all students graduate on time and perform well in exams and assignments, it could be concluded that the university has provided a valuable and supportive learning environment. However, the analysis is not based solely on student performance, but also on subjective ratings of a university’s competence submitted by academics around the country. Perhaps most interestingly, the methodology also takes into account estimates of how much money alumni donate to their alma mater.

This has led many universities to withdraw from the rankings, Stanford School of Medicine being one of the most recent. The dean of Stanford School of Medicine, Lloyd Minor, MD, noted that “the methodology, as it stands, does not capture the full extent of what makes for an exceptional learning environment”. Soon afterwards, the University of Pennsylvania’s Perelman School of Medicine followed suit when its dean, J. L. Jameson, MD, PhD, announced that the institution would no longer submit data for the rankings: although “the USNWR measures encourage the acceptance of students based upon the highest grades and test scores”, the Perelman school “strive[s] to identify and attract students with a wide array of characteristics that predict promise … creativity, passion, resilience, and empathy”.

These decisions have generated a lot of discussion, especially within the open science community. Although both the QS World University Rankings’ and USNWR’s methodologies and algorithms are publicly available, and therefore satisfy the transparency aspect of the open science framework, they indirectly discourage other open science practices. For example, they may motivate researchers to prioritize publishing in prestigious – or even predatory – journals to accumulate citations and, ultimately, status. Moreover, universities pay high fees to educational consultants to scrutinize the USNWR algorithm and recommend changes that could improve the university’s ranking. While this could be framed as a devoted attempt to better the learning environment, that is not where the money actually goes.

Importantly, both the QS World University Rankings and USNWR receive much of their data directly from the universities, leaving room for falsification and even outright fabrication. In the U.S., for example, responsibility for data submission to USNWR often rests with individual university staff members rather than specialized committees. Not surprisingly, USNWR has stripped schools such as Claremont McKenna College, Baylor University, and Emory University of their rankings after finding that they had submitted inflated SAT scores to boost their positions. These universities faced other consequences, too, such as resignations and firings of officials as well as changes to their admission processes.

While the ranking business is notoriously biased, the earlier statement by the EUR Economics student still stands. Certain institutions may be able to provide us with better career opportunities, sometimes purely because of the network they allow us to form. However, our innate urge to rank things by quality runs up against the fact that we have no reliably quantifiable variables to put into the formula, nor can we strip ourselves of our natural biases. Rationally, we know that two universities can coexist while pursuing different objectives: some can be exceptional research universities, while others may contribute more to the practical application of knowledge.

Student Initiative for Open Science

This article has been written as part of an ongoing collaborative project with the Student Initiative for Open Science (SIOS). The Amsterdam-based initiative is focused on educating undergraduate- and graduate-level students about good research practices.
