Press "Enter" to skip to content

Med School Rankings: Do They Mean Anything?

If you read my curriculum vitae, you might assume that I hold a high opinion of the U.S. News & World Report higher education rankings. I earned my bachelor’s degree from Harvard University, #2 behind Princeton in the “Best National Universities” category. My Master of Public Health degree is from Johns Hopkins, the #1 public health school. And my medical degree is from NYU, tied with Cornell and the Mayo Clinic as the 9th-ranked research medical school, and likely to move up due to its decision to go tuition-free last fall (though whether NYU will improve its middling primary care ranking is uncertain at best). To top it all off, I even wrote a blog for U.S. News for a year called “Healthcare Headaches.”

I admit that when I applied to college and medical school, I placed a great deal of stock — far too much — in these rankings. (I ended up at Johns Hopkins for public health because it was local, offered a part-time/online option, and I already had connections there.) But as my formal education recedes into the rearview mirror of my career, I find, instead, that I agree with Northwestern University professor William C. McGaghie’s renewed critique of the U.S. News rankings published recently in Academic Medicine.

Dr. McGaghie observed that “the methods used by U.S. News & World Report to rank medical schools are based on factors that can be measured easily but do not reflect the quality of a medical school from either a student or patient perspective.” For example, 20% of the research and primary care ranking reflects “student selectivity,” a combination of incoming students’ mean undergraduate grade point averages (GPAs), Medical College Admission Test (MCAT) scores, and acceptance rates. These criteria may modestly predict academic performance in preclinical courses, but they bear little relation to the quality of the doctors a school ultimately produces. They also have real downsides. As Dr. Arthur Kellermann, dean of the F. Edward Hébert School of Medicine at the Uniformed Services University, wrote in explaining his school’s 2016 decision to stop participating in the U.S. News medical school rankings:

“Schools have a perverse incentive to boost their rank at the expense of applicants and the public. Based on the methodology used by U.S. News, a medical school that wants to boost its rank should heavily favor applicants with super-high MCAT scores and grade point averages and ignore important attributes such as character, grit, and life experiences that predict that a student will become a wonderful doctor. A school might also encourage applications from large numbers of people with little or no chance of acceptance simply to boost its ‘selectivity’ score.”

This isn’t to say that prospective medical students can’t have stellar test scores and GPAs and great character and life experiences — I interview several every year. But I wonder how many outstanding future physicians we also prematurely weed out through our slavish devotion to those metrics. I write from personal experience: my overall undergraduate GPA was 3.4, and my GPA in the science prerequisite courses was closer to 3.2, which got my application automatically rejected at several medical schools — including the one where I’m now a full professor.

From the perspective of a patient or a community, the outcome that matters most for a medical school is how well it fulfills its social mission: to produce physicians who improve the health of the communities it serves, including an optimal mix of generalists and subspecialists; urban, suburban, and rural physicians; practicing physicians, teachers, and researchers. In 2010, Dr. Fitzhugh Mullan and colleagues published the first ranking of medical schools based on social mission, which eventually evolved into the Robert Wood Johnson Foundation-supported Social Mission Metrics Initiative, a national survey that enables dental, medical, and nursing school deans to receive confidential feedback on their performance in 18 social mission areas.

As Dr. Eric Topol wrote in Deep Medicine, the forthcoming integration of artificial intelligence (AI) into medical care over the next few decades is another good reason to change the way we evaluate medical school applicants:

“Are we selecting future doctors on a basis that can be simulated or exceeded by an AI bot? … Knowledge, about medicine and individual patients, can and will be outsourced to machine algorithms. What will define and differentiate doctors from their machine apprentices is being human, developing the relationship, witnessing and alleviating suffering. Yes, there will be a need for oversight of the algorithmic output, and that will require science and math reasoning skills. But emotional intelligence needs to take precedence in the selection of future doctors over qualities that are going to be of progressively diminished utility.”

In another Academic Medicine commentary, Dr. Melanie Raffoul (a former Georgetown health policy fellow) and colleagues offered a starting point for medical and other health professions schools to “meet the needs of tomorrow’s health care system.” Among other things, they proposed (1) incorporating emotional intelligence testing into admissions criteria; (2) recruiting specifically from rural and underserved settings; (3) “consciously reaching out to disadvantaged and underrepresented students at the primary and secondary education levels”; (4) establishing community partnerships to develop pools of eligible trainees; (5) bridging gaps between health care and public health; and (6) supporting health professions education research. Ironically, the most effective way to motivate schools to make these wide-ranging changes might be for U.S. News to weigh these factors heavily in next year’s rankings. If that happened, my current dim view of the rankings would change dramatically.

Kenneth Lin, MD, is a family physician who blogs at Common Sense Family Doctor.

This post appeared on KevinMD.

Last updated: May 6, 2019
