In a word:  nope.  I've led many healthcare quality improvement projects, and, over time, commonalities emerge across systems.  I'm clearly painting with a broad brush here, yet many hospitals and healthcare systems in the United States have no data, little data, or poor-quality data on which to base decisions regarding quality.

Seems incredible, yes...I know.  Let me share a story:

Once upon a time (more than eight years ago now) a hospital wanted to know if it was discharging patients in a timely fashion.  It wanted to decrease the amount of time patients spent in the hospital wherever it could safely do so.  It had recently transitioned to an electronic medical record and was able (with a click!) to print reports that shared Average Length of Stay and other impressive, typed, official-looking figures.

Well, if it's typed by a computer, it must be useful, right?  The organization struggled (for years) to meaningfully decrease that average length of stay.  Eventually, it ran a Lean Six Sigma quality improvement project and decreased length of stay by 14 hours...that's a big deal.  Here's what happened and what we learned about the data:

  • the data did NOT represent the system
    • Amazingly, that typed printout did not show how the hospital was typically performing.  Why not?  When we later graphed the data, we found that length of stay was non-normally distributed...a tail of patients who stayed MUCH longer in the hospital pulled the average WAY upward (see the first sketch after this list).  Until those patients were addressed, interventions aimed at decreasing the Average Length of Stay were often misguided and wasteful.  (By the way, want to know what to do with non-normal data and what they mean?  Look here.)
  • the data were NOT timely
    • If you're in healthcare, I bet you aren't surprised when I tell you that the data we were looking at were months old.  The problem with that?  The group would make an intervention and then have to wait months for data to show whether it had worked.  This led to interventions that routinely had us chasing our tails.
  • the data were NOT representative of the actual process performance
    • Interestingly, the group had never settled on an operational definition of length of stay.  Was length of stay the time from when the admission order was entered into the computer until the time a discharge order was entered?  (That's what the printout was really telling us.)  Or was it the time from when a patient physically arrived in a bed until the patient left and the bed was ready for the next one?  Whatever the definition, the group needed to pick one and keep it constant throughout the interventions (see the second sketch after this list).  In the end, it made more sense to accept a definition based on the patient actually occupying a bed (that is, the time until the bed was ready for the next patient) rather than the endpoint the computer happened to make easy to measure.  The computer didn't capture what we cared about!
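
To make the first point concrete, here's a minimal sketch in Python (with made-up length-of-stay numbers, not data from this project) of how a small tail of very long stays drags the average upward while the median stays close to the typical patient's experience.

```python
# Hypothetical length-of-stay values in hours -- illustrative only,
# not data from the hospital described above.
typical_stays = [52, 60, 66, 70, 72, 75, 80, 84, 90, 96]  # most patients
long_tail = [300, 450, 720]                                # a few very long stays

all_stays = typical_stays + long_tail


def mean(values):
    return sum(values) / len(values)


def median(values):
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2


print(f"Average LOS: {mean(all_stays):.1f} hours")   # pulled way up by the tail
print(f"Median LOS:  {median(all_stays):.1f} hours") # closer to the typical patient
```

On these invented numbers the average comes out around 170 hours while the median sits at 80 hours; that kind of gap is exactly what sends an improvement team chasing the wrong problem.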
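
And here's a second small sketch, again with invented timestamps, showing how the same hypothetical patient encounter yields two different length-of-stay numbers depending on which start and stop events you decide to measure.

```python
from datetime import datetime

# Invented timestamps for a single hypothetical encounter.
admit_order_entered = datetime(2015, 3, 1, 9, 15)      # admission order in the computer
patient_in_bed = datetime(2015, 3, 1, 14, 30)          # patient physically in a bed
discharge_order_entered = datetime(2015, 3, 4, 10, 5)  # discharge order in the computer
bed_ready_for_next = datetime(2015, 3, 4, 18, 45)      # bed turned over for the next patient

# Definition 1: what the EMR report made easy to measure (order to order).
los_order_to_order = discharge_order_entered - admit_order_entered

# Definition 2: what actually consumes a bed (bed occupied until bed ready again).
los_bed_to_bed_ready = bed_ready_for_next - patient_in_bed

print(f"Order-to-order LOS:   {los_order_to_order.total_seconds() / 3600:.1f} hours")
print(f"Bed-to-bed-ready LOS: {los_bed_to_bed_ready.total_seconds() / 3600:.1f} hours")
```

Neither number is wrong on its own; the point is that a team has to pick one definition, write it down, and hold it constant across interventions, or the before-and-after comparisons tell you nothing.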

I'm one of many who call for improved data for decision-making in healthcare...after all, these data deal with people and we are talking about their health!  Don't we, as patients, deserve hospitals armed with good data to improve their processes?  Fortunately, the tools of Lean and Six Sigma include built-in, often prospective, data collection directly from the process at hand.  That lets us avoid the pitfalls we typically see when decisions are based on data that are months old and unrepresentative of actual performance.

In the end, bad data affect your bottom line.