TABLE OF CONTENTS
Introduction
Guest speaker
Transcript
Introduction
As stewards of healthcare, health plans are responsible for managing the care of their members. This includes working with providers to capture member conditions accurately and comprehensively via medical charts and coding. Accurate capture improves member outcomes and optimizes the plan's risk adjustment revenue, which ultimately reduces member costs.
The scope of a prospective risk adjustment program is to account for historical member conditions and to identify and close gaps on suspected member conditions. Many plans attempt to close as many prospective gaps as they can in a year; whatever they cannot close that year is sent to retrospective programs. This is an unsophisticated, costly approach that tends to over-suspect and send providers weak evidence, which diminishes provider trust and engagement.
Based on CMS guidelines, the prospective format has very specific language requirements for how providers document member conditions. Plans cannot go back in time and change how their providers coded and documented a condition, which makes retrospective programs administratively heavy.
AI machine learning models offer a higher level of sophistication by scanning the clinical evidence and assigning a probability score to each piece of evidence in support of a suspected member condition. This saves administrative time and gives providers a high level of confidence that the data sent via CDI alerts is compelling and indicative of a condition. When providers trust the data, their participation in prospective programs increases, leading to more gaps closed.
Tune in to this episode to learn more about AI suspecting program logic and prospective programs.
Guest speaker
Elizabeth Burreson
Risk adjustment analytics technology expert
Elizabeth Burreson is an expert in risk adjustment analytics technology and has 20 years of IT data management experience, managing product portfolios and backlogs.
Transcript
Host: Today we are talking with Elizabeth Burreson about The Science of Predicting Member Conditions. Elizabeth works in risk adjustment analytics technology and has 20 years of IT data management experience, managing product portfolios and backlogs. Welcome, Elizabeth.
Elizabeth: Thank you. I’m looking forward to the conversation.
Host: Elizabeth, let’s start by discussing what intrigues you most about your work in risk adjustment.
Elizabeth: Well, I would have to say it’s the process of identifying member conditions and how that affects member outcomes, efficiencies, and programs that ultimately improve member risk scores and a health plan’s risk adjustment revenue.
Host: It’s a complicated topic with so many data inputs. Tell us about what variables go into identifying member condition gaps.
Elizabeth: There are historical member condition gaps: conditions we know the member has. The evidence has been submitted to and accepted by CMS on a response file. The health plan will get reimbursed for the cost of this care, and the member and the member’s provider will receive follow-ups on the condition. What won’t get reimbursed or followed up on are conditions that weren’t coded or documented properly or, for whatever reason, didn’t make it from the chart into a claim, even though we can see indications that a member condition exists. These are called suspected member conditions.
Host: So, what’s the process for identifying and documenting the suspected member conditions?
Elizabeth: Yeah, so ideally, you want member conditions, whether historical or suspected, addressed in a prospective program. That means if the gap is open this year, let’s say 2023, we have to get it documented with a 2023 date of service. To do that, we include the members and their conditions in prospective programs by presenting the information to providers. For example, “John is going to be in your office next Tuesday. Can you please make sure you address the diabetes, which we know John has? We also see some evidence that suggests John may have COPD.” We need the provider to confirm this so we can capture the member conditions accurately and comprehensively. This improves the member’s outcomes and the plan’s risk adjustment revenue, which ultimately reduces cost to the member.
Host: How do you get the provider’s full participation in this process as a partner with the health plan?
Elizabeth: The first thing is curating the data and AI programming so that it’s detecting good indicators of suspected conditions. The providers need to trust and have confidence in the evidence presented from the plan level. The data has to indicate a real need. You know, “We can’t say we think Jane has diabetes. That’s not sufficient. We have to say why we think Jane has diabetes and provide supporting data.”
The next part is the delivery of this information via CDI alerts. It has to be in a format that suits the provider and the way their practice is set up. So, I think the plans that offer multiple options for providers are going to have higher levels of success with provider engagement. Some providers are going to prefer traditional CDI alerts that are printed PDFs delivered via field representatives, while others will prefer electronic CDI alerts that piggyback onto their EMR workflows. In any event, meeting the provider where they are in their practice is going to improve their willingness to participate and partner with the plan.
Host: What are some examples of evidence for suspected conditions?
Elizabeth: Our suspecting program logic scans for dates of service in pharmacy claims, medical claims, and lab results. Then, it identifies LOINC codes, which are codes for labs and clinical test results.
Maybe that’s a recent A1C lab value. Or, maybe we had a claim with an ICD code that maps to diabetes with a recent date of service. Of course, not all evidence is allowed on alerts. You can’t use an ICD code from a co-morbidity and ask the provider to evaluate the member for diabetes. But you could use a GPI, the code of a prescription, and the date it was filled as evidence for a suspected condition.
We can use other forms of evidence for suspected conditions, but they’re not always allowed on CDI alerts. For example, if a member joins a plan and fills out a health risk assessment, they might be asked to identify their conditions. We can’t use this information as clinical data for suspected conditions. But, we can use it to target a member for an outreach initiative like enrollment in an IHA program. We can also use it to request a retrospective chart review. Retro is more expensive, but the value is that once you confirm the condition and code it from the chart, that suspected condition becomes a historical condition. And historical conditions are always allowed in prospective programs.
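To make those rules concrete, here is a minimal Python sketch of the kind of “allowed on a CDI alert” logic described above. The field names, code systems, and condition mapping are illustrative assumptions, not an actual plan or vendor specification.

from dataclasses import dataclass

@dataclass
class Evidence:
    source: str         # e.g., "lab", "medical_claim", "pharmacy_claim", "hra"
    code_system: str    # e.g., "LOINC", "ICD", "GPI"
    code: str
    date_of_service: str

def maps_to_condition(icd_code: str, condition: str) -> bool:
    # Placeholder for a real ICD-to-condition mapping
    # (e.g., E11.* maps to diabetes).
    return condition == "diabetes" and icd_code.startswith("E11")

def allowed_on_cdi_alert(evidence: Evidence, suspected_condition: str) -> bool:
    # Health risk assessment answers are not clinical data for suspecting.
    if evidence.source == "hra":
        return False
    # An ICD code only counts if it maps to the suspected condition itself,
    # never an ICD code from a co-morbidity.
    if evidence.code_system == "ICD":
        return maps_to_condition(evidence.code, suspected_condition)
    # Lab results (LOINC) and filled prescriptions (GPI) are usable evidence.
    return evidence.code_system in ("LOINC", "GPI")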
Host: So this validates the additional expense of the retrospective review because, moving forward, you’ll have it in a prospective program. So, what makes a retrospective review so much more expensive than a prospective one?
Elizabeth: The prospective format has very specific language for presenting the member’s condition, and the providers have to follow a specific format for their response. This is based on CMS guidelines on what can and cannot be submitted. Retrospective is administratively heavy because we can’t ask the provider to document in any specific way; it’s already happened, and what’s in the chart is in the chart. We don’t have any control after the fact. I don’t know how Dr. Smith coded the chart for John when he saw him in the office last September, and I can’t go back and change how he did that. Once we retrieve the chart, the medical coder who’s been trained to interpret these things can say, “It looks like X, Y, or Z, but there’s nothing here that I’m allowed to capture.”
But for prospective CDI alerts, we can control the language and the structure of the response, so it forces the providers to document in a submission-compatible way.
Host: Let me ask a question, if I may. If you have a lab and a prescription to match the lab, then why wouldn’t you have a diagnosis?
Elizabeth: Yes, logically this would make sense. But not all providers understand the value of coding all ICDs on a claim, and other providers submit the claim in a way that CMS won’t accept.
I was at RISE National this year, and one of the speakers used a great metaphor to explain this situation: the ‘leaky hose’. Think about risk adjustment and your initiatives to identify member conditions down at the nozzle of the hose. If your data is falling out along the way because the hose is leaking, how effective can your programs be? An example of a leak is a provider that only sends in one ICD code on a claim. This can be rectified with provider outreach or education; or maybe the provider doesn’t realize that their claims processing or billing service is dropping the other ICD codes. This fallout of data along the way makes risk adjustment more difficult.
Host: Can we expect any upcoming changes from CMS in regard to risk adjustment?
Elizabeth: CMS has come out with their Advance Notice, and there’s going to be a lot more focus on obtaining available clinical data that supports a suspected member condition. Let’s use depression as an example. According to the notice, depression is no longer a risk-adjustable condition. The reason is that they’re seeing an uptick in members going to their primary care physician and receiving a diagnosis of depression and a prescription, but with no evidence of treatment, like going to a mental health professional to get better. It would be like saying, “I suspect this member has cancer,” but if there’s no claim or treatment from an oncologist as supporting evidence, then this can’t be flagged. So that’s going to be a big focus: getting supporting evidence.
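As a rough illustration of that corroboration requirement, a check like the following could gate a suspect on supporting treatment evidence; the condition-to-specialty table here is hypothetical and only suggests how such a rule might be expressed, not CMS’s actual rule set.

# Hypothetical mapping from a suspected condition to the specialty whose
# claims would corroborate it (illustrative values only).
REQUIRED_TREATMENT_SPECIALTY = {
    "cancer": "oncology",
    "depression": "behavioral_health",
}

def can_flag_suspect(condition: str, member_claims: list) -> bool:
    specialty = REQUIRED_TREATMENT_SPECIALTY.get(condition)
    if specialty is None:
        return True  # no extra corroboration required for this condition
    # Only flag the suspect if supporting treatment evidence exists.
    return any(claim.get("provider_specialty") == specialty
               for claim in member_claims)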
Another big issue is closing as many prospective gaps as possible without overdoing it. You know, you don’t want to leave anything on the floor, but it has to be reasonable. A bird in the hand is better than two in the bush, so to speak. We know the member has these conditions, and we have some suspected conditions with evidence strong enough to include them in prospective programs, so we’re going to focus on those. It used to be, let’s go full-bore after all the gaps prospectively and then follow up the year after, but that’s expensive.
Host: So the initial strategy was to get as many prospective gaps as you could in a year, and what you couldn’t get that year, you went after in retrospective programs the next year. What would be a better strategy?
Elizabeth: I think Advantasure was already on this, but it has been a newer concept for others in risk adjustment. For some time now, we’ve been working with a data science team building member condition models using machine learning. And this isn’t a one-and-done; the models need continual development, and they’re constantly evolving. For example, the models pick up trends like changes in treatment protocols, because the program is churning through medical claims, lab results, pharmacy claims, and so on. Along with this, we’ve also identified that even when the evidence behind a machine learning gap indicates a high probability of association with a particular member condition, the CDI alerts program has very structured language in terms of filters and what can be presented to a provider. So, bringing these two things together, we’ve built business rules and a file spec that say, “Give me machine learning conditions, but I don’t want just any evidence for that machine learning condition; I want the best evidence available: lab, pharmacy, or DME, something that will comply with a prospective program.”
Host: You mentioned that the machine learning models are sifting through the data and selecting the most compelling evidence. Can you dive deeper into how this works?
Elizabeth: Yeah, we have filters that can home in and make sure evidence includes specific values with probability scores. Again, we want to ensure the providers can trust our evidence on CDI alerts. We’re not going to send out something with a .01 probability score. No, we want to make targeted decisions: “We don’t just want all the conditions with lab evidence; we want the conditions with lab evidence that is accompanied by a probability score of .45 and higher.” This is going to increase the provider’s trust, and it ensures that the gaps we’re suggesting have validity and that we’re not over-suspecting and diluting the results.
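A simple sketch of that threshold filter might look like the following; the .45 cutoff and the allowed evidence types come from the example above, while the record fields are assumptions.

def filter_suspects(suspects: list,
                    min_probability: float = 0.45,
                    allowed_types: tuple = ("lab", "pharmacy", "dme")) -> list:
    # Keep only suspected conditions whose evidence is of an allowed type
    # and whose model probability meets the threshold.
    return [s for s in suspects
            if s["evidence_type"] in allowed_types
            and s["probability"] >= min_probability]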
Host: Going back, you were talking about the machine learning filters…
Elizabeth: Yeah, the other thing that we do really well is not just generating the machine learning results and probability scores; we’ve also got a cutting-edge way of including them in prospective programs. We have a file layout of values that we need returned within the machine learning results, and we have business rules applied to those values. So if we have a machine learning model that’s running for diabetes, we’d input into the model, “Please give us the lab, drug, or DME that has the highest probability within the model.” As an example, if a member has a diabetic shoe, the probability of the member having diabetes associated with the diabetic shoe might be .35, whereas a borderline lab result, let’s say an A1C of 6.8, might have a probability of .2. Out of those two, give me the DME. If none of those are available, neither lab, pharmacy, nor DME, then you could give me the loosely associated hypertension ICD code. Even CMS says if a member has hypertension, chances are they have diabetes. But this has caused too many problems in the past, so it’s only used if the preferred evidence is unavailable. We don’t want to miss the gap altogether, so at least send the gap; we can make decisions later.
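A minimal sketch of that best-evidence rule, using the diabetic-shoe example, might look like this; the record layout and type labels are illustrative assumptions rather than the actual file spec.

PREFERRED_TYPES = ("lab", "pharmacy", "dme")

def select_best_evidence(evidence_list: list):
    # Prefer lab, pharmacy, or DME evidence with the highest probability.
    preferred = [e for e in evidence_list if e["type"] in PREFERRED_TYPES]
    if preferred:
        return max(preferred, key=lambda e: e["probability"])
    # Fall back to a loosely associated ICD code (e.g., hypertension for
    # diabetes) only when no preferred evidence exists, so the gap still
    # gets sent rather than missed altogether.
    fallback = [e for e in evidence_list if e["type"] == "icd_association"]
    if fallback:
        return max(fallback, key=lambda e: e["probability"])
    return None

# The diabetic-shoe example: DME at .35 beats a borderline lab at .2.
evidence = [
    {"type": "dme", "code": "diabetic_shoe", "probability": 0.35},
    {"type": "lab", "code": "hba1c_6.8", "probability": 0.20},
]
assert select_best_evidence(evidence)["type"] == "dme"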
Host: Yes, that makes sense. While machine learning models are the norm now, it’s still incredible to watch the sifting of information in action, with things like probability scores and the selection of the most compelling evidence. Elizabeth, it’s been great having you on the podcast today. Thanks for all the expertise you bring to the industry.
Elizabeth: Anytime. I enjoyed talking today.
Host: Thanks to our listeners. If you’re enjoying the industry education, share it with your colleagues on LinkedIn, and sign up to receive notifications when new episodes drop by following us on Apple Podcasts, Spotify, and other major podcast apps.