TABLE OF CONTENTS
Guest speaker
Introduction
Transcript
Retrospective risk adjustment involves several administratively heavy processes, from chart retrieval to coding to supplemental data and submissions. The ever-changing regulatory environment requires continual updates in processes and technology.
Join expert Greg Pastor to discover ways to streamline retrospective risk adjustment processes and develop a strategic, multi-faceted approach to addressing industry changes.
Guest speaker
Greg Pastor
Managing Director of Risk Adjustment Operations
Greg is the Managing Director of Risk Adjustment Operations, leading a team of over 350 risk adjustment professionals to drive client execution, customer value, and plan revenue optimization.
Host: Today, we’re talking with Greg Pastor about why retrospective risk adjustment is administratively heavy. Greg is the Managing Director of Risk Adjustment Operations, leading a team of over 350 risk adjustment professionals to drive client execution, customer value, and plan revenue optimization. Welcome, Greg.
Greg: Hey. I’m excited to be here.
Host: Greg, there are a lot of factors that contribute to the administrative load inherent to retrospective risk adjustment. Diving in, what’s new in retrospective risk adjustment that may be adding to this load?
Greg: Yes, retro risk adjustment is administratively heavy; there’s no question. One of the primary reasons is the regulatory compliance piece, which requires meticulous documentation and adherence to specific methodologies, and it’s always evolving. This year, in 2024, there’s a change in the payment models the government uses for Medicare Advantage risk adjustment. CMS is keeping the old model, V24, and introducing a new one, V28. Over time, it will phase out the old model. For the 2024 payment year, the two models are blended, so you have to track the status of gaps in both models and ensure they are captured in each one.
Host: In trying to understand how this will affect plans, what are the fundamental differences between the two models?
Greg: The most notable differences are an overall reduction in the number of diagnoses targeted, changes in how those diagnoses map to the remaining conditions, and then some new conditions.
Host: What implications does that have for retrospective programs?
Greg: It could diminish the return on investment in retrospective risk adjustment if you don’t understand how the new model works and how to target the chase list using that new model.
Host: How are the chase lists different between the two models?
Greg: From a condition standpoint, there are fewer diagnoses; it went from around 9,000 to about 7,000. However, those diagnoses are now spread across more condition categories. The new V28 requires a lot more specificity to capture a member’s condition. Whereas you used to be able to get that information from a primary care physician, now you might have to get it from a specialist. The targeting strategy has changed. It’s more complicated because it’s a new model, and then you have to consider the complexity of running two models at the same time.
Host: How do you determine which conditions to capture with one model versus the other?
Greg: That’s where a strong strategy comes into play. For example, in the 2024 payment year, a plan bases two-thirds of the payment on the Version 24 model and one-third on the Version 28 model.
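To make that blending concrete, here is a minimal sketch of the two-thirds/one-third weighting Greg describes. The member risk scores used below are hypothetical illustrations, not actual CMS outputs.

```python
# Minimal illustration of the 2024 blended payment-year calculation.
# The member scores below are hypothetical; the weights are the
# two-thirds V24 / one-third V28 split described above.

V24_WEIGHT = 2 / 3
V28_WEIGHT = 1 / 3

def blended_risk_score(score_v24: float, score_v28: float) -> float:
    """Blend the two model scores using the 2024 payment-year weights."""
    return V24_WEIGHT * score_v24 + V28_WEIGHT * score_v28

# Example: a member whose condition mix scores differently under each model.
print(blended_risk_score(score_v24=1.25, score_v28=1.10))  # ~1.20
```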
Host: Is the strategy unique to each plan based on their objectives and member population?
Greg: Yes, exactly. For example, plans might adjust the volume of retrospective chases they target as part of their strategy, as well as pursue a different mix of providers than under V24. In addition, plans might emphasize prospective risk adjustment programs more than retro to ensure that clinicians are managing populations differently than they have in the past.
Host: What does this mean for the technology? Do you have to reprogram the AI to capture the new model?
Greg: Yes, we are constantly making investments to support industry changes. We have trained the algorithms and adapted the processes to identify conditions appropriate to the V28 model, and we’ve ensured the algorithms are attuned to discovering the best source of evidence, because some of the sources for V28 have changed given the increased specificity required.
For example, the V28 model now includes only Major Depressive Disorder; Depression Not Otherwise Specified no longer risk adjusts. Another example is the new condition of Anorexia Nervosa; a non-specific Anorexia diagnosis does not risk adjust. This might result in a focus on psychiatrists, who are more likely than a primary care physician to evaluate and document these conditions at the appropriate level of specificity.
So there’s new complexity in the model, and that’s where expert guidance is going to help plans identify that level of specificity and support it with the correct evidence. And, of course, plans are going to need guidance in executing the strategy of running two models simultaneously.
On the retrieval side, there are still a lot of post-COVID challenges that span provider health systems. While patient care is back to pre-COVID levels, the economics on the provider side still have not bounced back. A lot of offices had cutbacks, so there’s not always a point person to provide access to medical records for retrieval. And in a lot of cases, providers are outsourcing their records management to Release of Information vendors. It’s important that plans make retrospective risk adjustment as easy as possible on providers.
Host: How do plans do that?
Greg: By offering providers different ways to communicate and exchange medical records, whether that’s through a remote EMR, web portal uploads, fax, mail, or an in-office representative to do an on-site collection.
Host: I can see how different providers’ offices would prefer different routes for medical record retrieval depending on the operational structure of their office. It’s great to offer multiple options to meet providers’ diverse needs. Going back, we talked about the administrative challenges associated with continual regulatory changes, the ripple effect that has on developing chase lists, and the challenges associated with retrieval. What other administrative burdens do plans face with retrospective review?
Greg: Yeah, the next piece is coding: managing updates, ensuring accuracy and completeness standards, and proactively preparing for audits. Last February, the government promulgated the final rule on the Risk Adjustment Data Validation process, or RADV. The key change is that it eliminated the fee-for-service adjuster and finalized an extrapolation methodology. Ultimately, the RADV rule creates increased pressure on plans to ensure their data is accurate and, therefore, reimbursement is accurate. Most diagnosis data comes out of claims, so it gets a diagnosis code and a billing code, and they submit it; it hasn’t gone through a thorough review at that point. CMS says, “We understand that there is an inherent error rate in the underlying data, but it doesn’t matter. We’re going to measure a health plan’s performance assuming there’s a 0% error rate, and then we’ll extrapolate those errors across all of the plan’s payment.” From a plan standpoint, this creates liability and pressure. I’m anticipating that RADV audits will ramp up in this upcoming year.
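To illustrate why extrapolation raises the stakes, here is a simplified, hypothetical sketch; the sample size, error amount, and membership figures are invented and do not reflect CMS’s actual RADV methodology.

```python
# Simplified, hypothetical illustration of extrapolation pressure under RADV.
# All figures are invented for illustration only.

audited_members = 200
unsupported_payment_in_sample = 50_000.0   # overpayments found in the audit sample
total_plan_members = 40_000

# Per-member overpayment observed in the audit sample...
per_member_error = unsupported_payment_in_sample / audited_members

# ...applied across the plan's full population instead of being
# limited to the audited members.
extrapolated_recovery = per_member_error * total_plan_members

print(f"Sample finding:      ${unsupported_payment_in_sample:,.0f}")
print(f"Extrapolated demand: ${extrapolated_recovery:,.0f}")  # $10,000,000
```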
From a retrospective medical record review standpoint, it increases the complexity for the plans to have to monitor their vendor's performance to make sure the results are accurate and complete so that when they get submitted, it doesn’t create additional liability for them in the future.
As a vendor, this is something we’re really sensitive to. That’s why we have very tight processes around quality assurance and the accuracy of the codes being identified. We have a multi-step process that combines NLP and suspecting logic to catch any missed opportunities, plus at least two medical coder reviews. This is how we’re able to maintain a 95% accuracy and completeness standard and reduce the liability risk for our health plan clients.
Ultimately, it’s the plan that is liable to CMS, so it’s important that there’s a good vendor relationship to support them.
Host: How do market dynamics and competitive pressures influence risk adjustment strategies?
Greg: This is where each plan has to reconcile its tolerance for risk and choose its preferred coding model based on compliance considerations and input from legal staff, the finance department, medical management, and a variety of other stakeholders. Are they going to structure their strategy around blind coding, claims-match coding, or open HCC coding? We have our views about it, but we help each plan customize its strategy so all stakeholders are comfortable with the model.
There’s a very competitive nature to this business. Plans can make decisions about their coding staff, whether it’s going to be domestic, offshore, or some combination of both. There are benefits to each. Offshoring has a lower cost model, and CMS has different requirements when the work is done offshore, so it’s important to make sure vendors are compliant with both sets of requirements.
Host: Can you elaborate on the requirements for offshore models?
Greg: Yeah. First, plans have to provide an attestation for offshore contracts to ensure there’s appropriate oversight and protection for beneficiary information. CMS wants to know whether the data is physically leaving the U.S. or just viewable remotely, the vetting process for employees who access the data, and the frameworks in place to protect the data, with continuous monitoring of performance and data usage.
Host: What kind of technology are you currently using to reduce the administrative burden on risk adjustment processes?
Greg: Again, the chase lists are critical here, as is the technology that supports them. We used to use rules-based analytics, but now we’ve moved to machine learning models to assist with identifying conditions and the probability of finding them. We want to quickly identify certain types of charts and train the algorithm to know the best location to retrieve the chart from. You know, it can be complicated, especially with provider consolidation. If it’s a large health system with satellite offices all over the place, the record might not be in Dr. Smith’s office; it might be in a central office.
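As a rough illustration of the shift from rules-based analytics to model-driven chase lists, here is a minimal sketch of ranking charts by a predicted probability. The features, training data, and model choice are hypothetical and stand in for whatever a production system would actually use.

```python
# Minimal sketch of ML-driven chase-list prioritization. The features,
# training data, and model are hypothetical; they only illustrate ranking
# charts by the predicted probability of finding a documented condition,
# rather than applying fixed rules.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical chases: [prior_gap_count, specialist_visit, large_health_system]
X_train = np.array([
    [3, 1, 0],
    [0, 0, 1],
    [2, 1, 1],
    [1, 0, 0],
    [4, 1, 1],
    [0, 0, 0],
])
y_train = np.array([1, 0, 1, 0, 1, 0])  # 1 = chart yielded a documented condition

model = LogisticRegression().fit(X_train, y_train)

# Score this year's candidate chases and work the highest-probability charts first.
candidates = np.array([[2, 1, 0], [0, 0, 1], [3, 0, 1]])
probabilities = model.predict_proba(candidates)[:, 1]
priority_order = np.argsort(probabilities)[::-1]
print(priority_order, probabilities.round(2))
```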
Host: So, the AI will skip to the chase. [laughs]
Greg: Exactly. Another area where there’s a lot of administrative burden is EDPS, the Encounter Data Processing System. Let’s say we’re working on behalf of a plan: we’ve successfully retrieved the records, we’ve coded them, and we’ve gotten the results into a supplemental file that we either send to the plan for CMS submission or submit to CMS on the plan’s behalf.
There’s a lot of admin burden in the management of supplemental data and downstream processes.
It’s always prudent to stay ahead of CMS and OIG regulations so you don’t get blindsided when the compliance standards change. We’re already looking to the future in anticipation that CMS may eventually require supplemental data files to be linked to the original claim that was representative of that activity. We suspect this is going to happen because they’re encouraging it.
Under the current rule, it is completely compliant not to link the data. But we feel the industry and regulations are moving in that direction, so we’ve already made enhancements to our claims-linking process so we’re ready to meet future obligations.
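For a sense of what claims linking involves, here is a minimal sketch of matching a supplemental diagnosis record back to its original claim. The matching keys (member ID, provider NPI, date of service) and record layouts are hypothetical illustrations, not a CMS specification.

```python
# Minimal sketch of linking a supplemental diagnosis record to the original
# claim it represents. Field names and matching logic are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    claim_id: str
    member_id: str
    provider_npi: str
    date_of_service: str

def link_supplemental(record: dict, claims: list[Claim]) -> str | None:
    """Return the claim_id whose keys match the supplemental record, if any."""
    for claim in claims:
        if (claim.member_id == record["member_id"]
                and claim.provider_npi == record["provider_npi"]
                and claim.date_of_service == record["date_of_service"]):
            return claim.claim_id
    return None

claims = [Claim("CLM001", "M123", "1234567890", "2024-03-15")]
record = {"member_id": "M123", "provider_npi": "1234567890",
          "date_of_service": "2024-03-15", "diagnosis": "E11.22"}
print(link_supplemental(record, claims))  # CLM001
```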
Host: I think the take-home message for our listeners today is that retrospective risk adjustment involves a number of administratively heavy processes, from chart retrieval to coding to supplemental data and submissions. And then all of those systems get a wrench thrown in when there are regulatory changes: new processes need to be developed, technology needs to be updated, algorithms reprogrammed. It sounds like the best strategy is to partner with experts to stay ahead, anticipate the changes, prepare, and maintain an adaptable mindset.
Greg: Yes, you summarized that well.
Host: Greg, thank you for the great conversation, loaded with insight.
Greg: You bet.
Host: If you enjoyed this episode, follow on Apple or Spotify, and share it on LinkedIn with your colleagues.