When reimbursement is tied to performance (as it currently is in the Physician Quality Reporting System (PQRS) 2016 reporting season), how do we navigate between what is "right" and what will get us paid?
Registries are in high gear now, reviewing boatloads of data as providers finalize their 2016 PQRS submissions. What is the best way to honor the supposed Centers for Medicare & Medicaid Services (CMS) "spirit" and intention of the program (to improve quality and patient care) and still receive the maximum reimbursement to which the provider believes s/he is entitled for services rendered? As a qualified registry vetted by CMS, we walk a fine line every day in providing a balanced answer to this question.
CMS encourages registries NOT to advise providers to "cherry-pick" their best patients or use just the "easiest" measures. Yet everything about the structure of the PQRS program (especially now, with the Quality Payment Program (QPP)) encourages this very behavior. It is a challenged system that attempts to encourage the best of ideals but fails tragically, simply because when money is tied to performance, "spirit and good intentions" are almost always lost.
Time and time again, CMS asks registries to compel providers to submit measures and encounters that are a "true representative sample" of their data (as if that were an achievable feat)... And yet, this is what we see:
- Measures groups establish a reporting mechanism whereby the provider only needs to submit 20 encounters for the whole year. Well then, riddle me THIS: who WOULDN'T pick their 20 best patients???
- For those deciding to submit nine individual PQRS measures, the mantra from CMS is to "submit early and often." This, too, seems contradictory, if only because CMS waits until January of the following year to open its EIDM submission portal: a timeline that encourages last-minute submissions and providers looking back simply to find their best data that meets the requirements.
This is a challenged system with inconsistent messages.
There are countless other illustrations of what is wrong with the program, so what is the answer in terms of learning how to play the game with as much integrity and honesty as possible? Make no mistake: it is a game. It's just a matter of how to play it with the best intentions. Any CMS provider must accept that s/he is in the game and make the best of it.
Here is a practical solution to those still struggling with 2016 measure selection.
Measures groups are a gift: it is easy to report on only 20 encounters. But instead of finding 20 all-"performance met" encounters cherry-picked from one or two months of data, how about going back to the beginning of the year and randomly picking five patients per month? The provider ends up with 60 encounters and data that is probably more representative of true performance. It's more than 20, and it spans the whole year. Again, fair and balanced. If your performance is truly high, it will be consistently high throughout the year on a more random sample.
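The monthly sampling idea above can be sketched in a few lines of Python. The record shape here (a dict with a "month" field) is purely illustrative, not a real registry schema:

```python
import random
from collections import defaultdict

def sample_per_month(encounters, per_month=5, seed=None):
    """Randomly pick up to `per_month` encounters from each calendar month.

    `encounters` is a list of dicts, each with a "month" key (1-12).
    Over a full year this yields 12 x 5 = 60 encounters, spread across
    all twelve months rather than cherry-picked from one or two.
    """
    rng = random.Random(seed)  # seed only for reproducibility in testing
    by_month = defaultdict(list)
    for enc in encounters:
        by_month[enc["month"]].append(enc)
    sample = []
    for month in sorted(by_month):
        pool = by_month[month]
        sample.extend(rng.sample(pool, min(per_month, len(pool))))
    return sample
```

Sampling within each month (rather than from the year as a whole) guarantees every month is represented, which is the point of the "fair and balanced" approach.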
Pick measures that actually relate to your specialty. Pick them early, and consistently upload encounter data so you can make improvements throughout the year. The idea is that if provider performance from January to March is substandard (i.e., lots of encounters indicating "performance NOT met"), a shift in clinical quality action CAN move outcomes to meet performance on these measures for the rest of the year. This means behavior and clinical quality actions actually have to change.
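One simple way to spot a substandard quarter in uploaded encounter data is to compute the "performance met" rate per quarter. This is a minimal sketch; the "month" and "met" field names are illustrative assumptions, not part of any registry format:

```python
def quarterly_rates(encounters):
    """Fraction of 'performance met' encounters per calendar quarter.

    Each encounter is a dict with "month" (1-12) and "met" (bool).
    Quarters with no encounters report None rather than a rate.
    """
    totals = {q: [0, 0] for q in (1, 2, 3, 4)}  # quarter -> [met, total]
    for enc in encounters:
        q = (enc["month"] - 1) // 3 + 1
        totals[q][1] += 1
        if enc["met"]:
            totals[q][0] += 1
    return {q: (met / total if total else None)
            for q, (met, total) in totals.items()}
```

A low rate in Q1 is the early-warning signal: it leaves three quarters in which changed clinical behavior can actually move the measure.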
There are no perfect answers to this dilemma, but as you finalize your 2016 data, please know this: Registries and CMS are mandated to conduct both random and targeted audits. If you are a high performer, expect that at some point you will be audited to ensure what you say you did is what you really did. Make certain your high performing data is a true reflection of your performance and your patients’ positive health outcomes.