CDI Practice Has Evolved, but Our Metrics Haven’t: Why Is CDI Different from Other Departments?

Original story posted on: August 31, 2020

In my first article of this series, I outlined how most clinical documentation integrity (CDI) programs began in response to either the creation of the DRG reimbursement system under the Inpatient Prospective Payment System (IPPS) or implementation of the MS-DRG system, which expanded the impact of documentation and coding practices on hospital reimbursement. In this article, I’ll discuss why CDI programs should become CDI departments, and why such departments need to create metrics that reflect the current work of CDI professionals. 

When I started my career in this field as a CDI manager in 2008, I joined a recently established program with five specialists for a 709-bed hospital. We were woefully understaffed, to say the least, but the organization was not sure how dedicated it wanted to be to CDI (which used to be called “clinical documentation improvement,” rather than “integrity”). So we were classified as a “program” rather than a “department.”

Like most CDI programs started in response to the implementation of the MS-DRG system, we were under the umbrella of health information management (HIM), also known as “medical records.” In those days, CDI was usually considered an extension of coding, so such programs were often managed by those with an HIM or coding background. Unlike every other hospital function, CDI was not a requirement. CDI was and still is a supplemental business function. 

Yes, at one time CDI programs were considered expendable, and now most hospitals would never dream of eliminating them. You see, most hospital functions, such as medical record services, utilization review, and quality assurance, are mandated by the Medicare Conditions of Participation. This is why there may be only one person dedicated to monitoring quality at a small hospital, but it is still a “department,” compared to a CDI group that may have a staff of 10 and is still considered a “program.” Not only are hospitals required to have departments that perform these functions, but there is also federal guidance outlining what is required of each of these departments.

The same is not true of CDI. Although we heavily lean on areas like HIM, and more recently, quality assurance, the Centers for Medicare & Medicaid Services (CMS) has no defined expectations for CDI, so much of how we measure performance is borrowed from coding. I’ll speak more to this point later in the article, but the key point for the moment is that not too long ago, CDI programs were thought to be a temporary solution during the transition to MS-DRGs, until coding practices caught up to revenue opportunities, as they did with the DRG reimbursement system.

In fact, it was very clear when I accepted my manager role that the CDI program was something of an experiment, and that we were to demonstrate sustainability through a return on investment (ROI) – or I would be out of a job, as would the rest of the team. Now, it wasn’t clear how much ROI was required, but it needed to be at least enough to cover the expenses of the program. Obtaining proof of ROI required me to create elaborate spreadsheets in which I tracked the impact of every query so I could show how much incremental revenue (“found money”) could be attributed to the program.
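The spreadsheet arithmetic behind that kind of per-query ROI tracking can be sketched in a few lines. Under MS-DRGs, inpatient payment is roughly the DRG’s relative weight multiplied by the hospital’s blended base rate, so the “found money” from a query that shifts the DRG is the difference in relative weights times that base rate. The base rate, account numbers, and weights below are illustrative assumptions, not real hospital data:

```python
# Hypothetical sketch of per-query ROI tracking. The base rate and
# DRG relative weights are made-up illustrative values.

BASE_RATE = 6000.00  # assumed blended hospital base rate, in dollars

def incremental_revenue(working_weight: float, final_weight: float,
                        base_rate: float = BASE_RATE) -> float:
    """Revenue attributable to a query that moved the DRG weight."""
    return round((final_weight - working_weight) * base_rate, 2)

# Example: one query adds an MCC, moving the claim from a relative
# weight of 0.9441 to 1.3416; a second query has no DRG impact.
queries = [
    {"account": "A001", "working": 0.9441, "final": 1.3416},
    {"account": "A002", "working": 1.0902, "final": 1.0902},
]

program_roi = sum(
    incremental_revenue(q["working"], q["final"]) for q in queries
)
print(program_roi)  # 2385.0
```

In practice the spreadsheet would also have to net out query denials and cases where later documentation, not the query, drove the DRG change, which is exactly the attribution problem discussed below.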

Looking back, what I find ironic is that most CDI programs were governed by the same metrics applied to coding, with a focus on productivity (e.g., number of records reviewed); coding accuracy was translated into a reconciliation process, comparing the working DRG to the final coded DRG. But coding departments were not historically required (and some would argue, nor should they ever be, due to billing compliance regulations) to demonstrate an ROI. Coding is a necessary business function: bills need to be “dropped” in order for the hospital to be paid. “Clean” bills, those without technical errors that are paid upon first submission, are the goal, to expedite receipt of payment. Coders are required to meet productivity standards and demonstrate coding proficiency through accuracy, as determined by routine audits by their supervisor or a third-party vendor. The success of the coding department is often measured by the length of bill hold, or how many days post-discharge, on average, pass before a claim is billed. These are straightforward metrics, and the work of coders clearly aligns with them.

So, what coding tasks often derailed productivity and increased the length of bill holds? Querying. Consequently, coders did not usually have query volume metrics (e.g., the expectation for them to query a particular number of cases). Querying slowed down the coding process. Coding queries increase bill hold times, because the query isn’t even submitted until a few days post-discharge, and then the provider needs time to respond (if they do respond). 

Enter the CDI professional, who became responsible for finding query opportunities concurrently, so the coder has an unambiguous health record that can quickly be translated into ICD-10-CM and ICD-10-PCS codes, thereby reducing bill hold times. Oh, and wouldn’t it be great if CDI could also focus on capturing complications and comorbidities (CCs) and major CCs (MCCs) to appropriately enhance hospital revenue? CDI programs do, after all, need to demonstrate an ROI. Fast forward to 2020, and most CDI programs are still expected to demonstrate an ROI, which is often measured by CC/MCC capture and/or case mix index (CMI). A 2018 article found that “87 percent of hospital financial officers reported that case mix index improvement was the largest motivator for CDI adoption, because of its potential to increase healthcare revenue.” Even though most CDI leaders are trying to move away from our revenue enhancement origins, most hospital administrators still want CDI to “show them the money!”

Showing the impact of CDI efforts often results in tedious workflows, wherein either the professional who initially reviewed the case or a dedicated position, like a DRG validator, reconciles the working DRG established by CDI against the final billed MS-DRG to see if a query increased the reimbursement associated with the particular claim. Where this can get a little crazy, for lack of a better word, is when a query adds CCs or MCCs to the claim, but subsequent documentation also results in additional CCs or MCCs. Some CDI leaders will consider the initial CC or MCC nullified by the subsequent CC or MCC, while others continue to attribute the increased revenue to the CDI team.

Why do I think this approach is problematic? Well, it doesn’t consider the maturity of the CDI department and the impact of physician education. In the early days of CDI, it was much easier to demonstrate the impact of such efforts because the orchard was rich with low-hanging fruit. But much of that low-hanging fruit has been picked by mature departments. How do I know? Because we are starting to experience more and more shifts across CCs and MCCs. Yes, some of this is attributable to changes in ICD-10-CM, which brought more specificity to the code set, but I’ve been in CDI long enough to remember when acute renal failure was an MCC. Remember in the last article, when I discussed why CMS implemented the MS-DRG system: because the DRG system was “maxed out” due to the prevalence of CCs among the Medicare population? Better documentation and coding are also increasing the prevalence of other conditions, like acute renal failure. As a condition becomes more prevalent, it starts to represent the “average” Medicare patient, and the associated cost is absorbed into the relative weight. Instead of the hospital getting increased revenue when treating a sicker-than-average Medicare patient, all Medicare patients are becoming “sicker” as we increase their acuity through better documentation and coding – and that becomes the new average patient. 

In my opinion, when we focus too much on whether a query increased revenue, we are missing all the intangible impacts of CDI, including the value of changing a provider’s documentation habits. As a CDI manager, I considered the time spent educating a provider more valuable than the time spent conducting one more initial record review. Consider this: if I change a provider’s documentation behavior, I am influencing many records, but I won’t be able to prove the ROI impact of education through current CDI metrics that rely on query volume and capturing the impact of each query.

Let’s look at this another way. If you have a CDI professional who is asking the same query of the same provider for six months or more, is that professional really effective? Wouldn’t I see a greater ROI if I change the documentation behavior of that provider, rather than making them query-dependent (e.g., they only change the documentation after being queried to do so)? Doesn’t it seem more efficient and effective to impact the provider’s future documentation on every record rather than one record at a time? And if you follow my thought process here, why would we measure ROI on such a granular level as per claim? Wouldn’t it be more beneficial to measure how the reporting of unspecified diagnoses has changed over time? For example, maybe a better demonstration of ROI is the prevalence of specific heart failure codes within the I50 category, compared to I50.9, heart failure, unspecified. Or better yet, more impactful change may come from leveraging your quarterly Program for Evaluating Payment Patterns Electronic Report (PEPPER) data to monitor the prevalence of simple pneumonia, compared to the prevalence of respiratory infections. Isn’t the goal to improve documentation? So wouldn’t we want to measure the outcome of documentation (e.g., the codes reported in claims data) to see if we are reporting more diagnoses classified as CCs and MCCs over time? Or more diagnoses classified within the Hierarchical Condition Categories (HCCs), if our focus includes risk adjustment?
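The unspecified-code metric suggested above is simple to compute from claims data: for a given code category, track what share of reported codes are the unspecified code from one period to the next. A minimal sketch, using made-up claims for two quarters of heart failure coding (I50.9 versus more specific I50.- codes):

```python
# Illustrative sketch of an outcome-based documentation metric:
# the share of heart failure codes reported as unspecified (I50.9).
# The claims lists are fabricated for demonstration.
from collections import Counter

def unspecified_rate(codes: list[str], category: str = "I50",
                     unspecified: str = "I50.9") -> float:
    """Share of codes in a category reported as the unspecified code."""
    in_category = [c for c in codes if c.startswith(category)]
    if not in_category:
        return 0.0
    counts = Counter(in_category)
    return counts[unspecified] / len(in_category)

q1 = ["I50.9", "I50.9", "I50.22", "I50.9", "I50.23"]    # before education
q2 = ["I50.22", "I50.32", "I50.9", "I50.23", "I50.42"]  # after education

print(unspecified_rate(q1))  # 0.6
print(unspecified_rate(q2))  # 0.2
```

A falling unspecified rate across quarters is evidence that provider education changed documentation behavior on every record, not just the queried ones, which is exactly the impact that per-query metrics cannot capture.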

The role of CDI has evolved beyond its humble beginnings, and continues to expand into new areas with new responsibilities, yet we are using the same metrics that were popularized more than 13 years ago, when the industry was mostly focused on capturing CCs/MCCs and productivity. Even with the introduction of technology into coding and CDI, many of us still have to manually calculate performance metrics like ROI. Why? As the profession has matured and evolved, why are there so many CDI “programs,” and why are many such “programs” still required to measure their ROI? The value of CDI has been established by its prevalence within the short-term, acute-care setting and its expansion into other healthcare settings, like outpatient and post-acute care.

I believe that CDI is here to stay, as long as CMS continues to refine its payment methodologies; with the rising cost of healthcare, the industry seems safe for now. Yes, we may have suffered a few setbacks with COVID-19, but I think we will rebound. The same set of circumstances that led to the success of “CDI programs,” the ability to adapt and evolve, must translate into how we measure the success of “CDI departments.” Let’s move away from a focus on ROI, because today’s CDI departments do so much more than just capture CCs and MCCs. As an industry, let’s be really honest about the case for CDI, which may be revenue enhancement, and may or may not include quality assurance. Whatever route your department takes, be sure to “measure what matters” and develop key performance indicators that reflect the day-to-day role of the staff.

If we think about it, is it really important to measure how many reviews staff perform each day? Yes, staff need to be held accountable, but I have also engaged in so many discussions with other CDI leaders about the need to balance quality with quantity. When we emphasize the quantity of reviews, does the quality of those reviews suffer, especially when it is becoming harder and harder for mature CDI departments to find documentation improvement opportunities related to CCs and MCCs? As it should be, the CDI department is educating providers and changing documentation habits. Today’s reality isn’t growing CMI through capturing more CCs and MCCs; it is maintaining a hospital’s operating margin (profitability). Furthermore, as we expand our focus into areas of quality, shouldn’t we consider incorporating some of their metrics? When a CDI department has a focus on quality, shouldn’t it be measuring the impact on expected outcomes, while quality focuses on actual outcomes, since most risk-adjusted quality metrics use a formula of actual (observed) outcomes over expected outcomes?
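The division of labor suggested above follows from how risk-adjusted quality metrics are built: an observed-to-expected (O/E) ratio divides actual outcomes by the outcomes predicted from documented patient severity. Documentation drives the denominator, so more complete capture of acuity raises the expected value and lowers the ratio even when actual outcomes are unchanged. A minimal sketch with illustrative numbers:

```python
# Minimal sketch of an observed-to-expected (O/E) quality ratio.
# The mortality counts below are illustrative, not real data.

def oe_ratio(observed_events: int, expected_events: float) -> float:
    """Observed-to-expected ratio; below 1.0 is better than predicted."""
    return round(observed_events / expected_events, 3)

# Same 12 observed mortalities, but fuller documentation of patient
# acuity raises the risk-adjusted expectation from 10 to 15 deaths.
print(oe_ratio(12, 10.0))  # 1.2 - looks worse than predicted
print(oe_ratio(12, 15.0))  # 0.8 - looks better than predicted
```

This is why a quality-focused CDI department can reasonably be measured on movement in expected outcomes, while the quality department owns the observed side of the ratio.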

The bottom line is that the CDI industry is diverse. We may have started in the same place, but we have all grown in different ways and fulfill different missions within our organizations. Yes, we focus on how documentation impacts the coding of the record, but that is about all we have in common. As an industry, we need to move away from outdated metrics with a “one size fits all” approach, so we can adopt standards that reflect our diversity. 

The next article will discuss the pitfalls of using CMI as a key performance indicator (KPI). As many experienced CDI managers know, CMI is an imprecise measure of performance, and it is becoming even less accurate as CMS further integrates value-based reimbursement strategies and trials a variety of innovative payment mechanisms that further erode the Medicare fee-for-service population. Additionally, the article will examine how the COVID-19 pandemic highlights many of the weaknesses associated with CMI as a KPI.

Cheryl Ericson, RN, MS, CCDS, CDIP

Cheryl Ericson, RN, MS, CCDS, CDIP is a clinical program manager with Iodine Software. Ericson is recognized as a CDI subject matter expert for her body of work, which includes many speaking engagements and publications for a variety of industry associations. She has helped establish industry guidance through contributions to white papers and practice briefs, including several American Health Information Management Association (AHIMA) Practice Briefs in the areas of Clinical Documentation Improvement (CDI) and Querying. Ericson is a current member of the AHIMA CDI, Quality and Revenue Cycle Practice Council and ACDIS CCDS Credentialing Committee. She is a past member of the AHIMA CDI Practice Council and ACDIS Advisory Board. She was a contributor to the initial AHIMA CDIP exam and continues to contribute to the ACDIS CCDS exam.

