
An EHR as Good as an Abacus: Is That Too Much to Ask?

Among the many things one of my sisters collects are abaci.

(Yes, that is the plural for abacus, and although abacuses is also acceptable, abaci is much more fun to say).

She has lived and worked and traveled all over the world, and picks these up in markets wherever she goes, whenever she sees one.

They are quite beautiful, and the underlying math and logic is brilliant in a centuries-old technology, used for tracking and tallying everything from bushels of rice to acres of land.

Just this week I had the thought that maybe this device could turn out to be a better way to help take care of our patients than the electronic reporting system we have in place at the moment.

Getting data out of the electronic medical record can be as hard as, if not harder than, clicking data into it.

We have recently been requested by the state to provide data on the number of patients with certain medical conditions that we see at our practice, apparently as part of a mandate to show that we are appropriately tracking these patients and getting them in for the care and follow-up they need.

We put in a request for a report from the data and analytics team within our electronic medical record, listing the medical conditions and an additional inclusion criterion: that the patients be followed in our practice for primary care, even if not for this particular condition.

Turns out that the state wants to know not just whether we care for them for this condition, but also if they just happen to have this condition but we see them for something else entirely.

Regardless of whether we think the logic behind this is correct, we knew we had to provide them this information, so the request for the report went in.

When we got the report back, it included a large Excel spreadsheet with several pages of results, listing names and medical record numbers, along with the name of the person the system listed as the PCP for each patient.

In the email that accompanied this report, the folks who generated it said it had produced multiple answers, differing by up to an order of magnitude, and asked us, “Which of these is right?”

I thought that’s what we were asking you?

I took a stab at one of the spreadsheets, looking up the medical record numbers in our electronic medical record, then checking to see if they truly matched what we needed.

I went through the first dozen patients on the list. Several of them had never been seen in our practice, several had been admitted to our hospital but never followed up in our practice, there were several duplicates, several who did not have the diagnosis in question, and, finally, one person who fit the search criteria to a T.
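The checks I was doing by hand amount to a simple filter: drop duplicate record numbers, drop patients never seen in the practice, and drop anyone without the diagnosis in question. A minimal sketch of that logic, with entirely hypothetical field names (no real EHR export looks like this), might be:

```python
def validate_report(rows, diagnosis):
    """Keep only unique patients seen in our practice who carry the diagnosis."""
    seen_mrns = set()
    valid = []
    for row in rows:
        mrn = row["mrn"]
        if mrn in seen_mrns:
            continue  # duplicate entry for the same patient
        seen_mrns.add(mrn)
        if not row["seen_in_practice"]:
            continue  # admitted to the hospital but never followed up with us
        if diagnosis not in row["diagnoses"]:
            continue  # on the list without the condition we asked about
        valid.append(mrn)
    return valid

# Toy data mirroring the problems found in the first dozen patients:
rows = [
    {"mrn": "001", "seen_in_practice": True,  "diagnoses": ["diabetes"]},
    {"mrn": "001", "seen_in_practice": True,  "diagnoses": ["diabetes"]},      # duplicate
    {"mrn": "002", "seen_in_practice": False, "diagnoses": ["diabetes"]},      # never seen here
    {"mrn": "003", "seen_in_practice": True,  "diagnoses": ["hypertension"]},  # wrong diagnosis
]
print(validate_report(rows, "diabetes"))  # → ['001']
```

Of course, the point is that filtering like this should happen inside the reporting system before the spreadsheet ever reaches a clinician.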

Clearly, if we are going to use these reports to help us study patients and take better care of them, we need good data.

We need accurate recording, good data in to give us good data out, and we need to know that we can trust the numbers. We may end up making clinical decisions, about things like allocation of resources and group interventions, or even doing important research, based on the accuracy of what we get from these kinds of reports.

They are used to do population health interventions, such as helping close gaps in care, or recognizing patterns of healthcare inequity or implicit bias.

While I have no desire to become a coder and build these reports, I’m hoping that we can improve the relationship between those who write these reports and those of us trying to take care of patients.

When we request these reports, we think we know what we want, and those doing the programming think they know what we want, yet we often go through multiple iterations, email chains bouncing back and forth, as we try to figure out what we need and they try to figure out how to build the right report.

I’m not surprised that it’s a complex process; we are in fact expecting a lot from these systems to give us the information we need.

But if we’re going to use this complicated electronic medical record to effectively improve the care of our patients, we need to have confidence that it’s going to give us the right answers when we need them.

I’m not suggesting we go back to the days of paper patient charts, which would require someone manually sifting through hundreds of pages of hundreds of charts to find out if a patient had a certain diagnosis and had been seen within a certain timeframe.

But we all hope that we can be confident in the output of a system that we are putting so much into.

If not, I’m all for going back to sliding beads around on one of those ancient counting devices, clickety clack, clickety clack.