TABLE OF CONTENTS
About Our Guest
Introduction
Transcript
As payers adopt artificial intelligence (AI) technologies in different aspects of healthcare operations, there is a need for AI governance and the careful vetting of vendor AI practices to safeguard patient welfare.
AI solutions can offer valuable decision support, creating efficiencies at scale while improving timeliness and accuracy. However, AI solutions should not run autonomously, nor should their final results go unquestioned. It is essential that all stakeholders understand how AI solutions draw their conclusions, what data sources inform the models, and where bias can creep in. This level of critical thinking via human oversight is the crux of responsible AI principles: transparency, accountability, and safety.
Tune in to this episode to hear the latest on:
Current challenges using AI for decision support
Responsible AI principles
The vital information needed for all stakeholders
Ways to implement best practice processes for AI oversight
The AI algorithm lawsuit that's shaking up the payer space
Guest speaker
Sam Keith
Data science, marketing, and analytics
Sam Keith is an expert in data science, marketing, and analytics. He has over 18 years of experience working in the technology product space, leading product development teams and initiatives to support consumer engagement, user experience, digital experience, and operations. Sam has worked in healthcare, higher education, pharmaceutical, and network security industries and is particularly interested in digital accessibility practices.
Host: Today, we’re talking about Responsible AI For Payers with Sam Keith. Sam is an expert in data science, marketing, and analytics. He has over 18 years of experience working in the technology product space, leading product development teams and initiatives to support consumer engagement, user experience, digital experience, and operations. He’s worked in healthcare, higher education, pharmaceutical, and network security industries and is particularly interested in digital accessibility practices. Welcome, Sam.
Sam: Thanks for inviting me to the discussion!
Host: Sam, with the release of ChatGPT in November of 2022, there’s been a significant technological shift that we’re seeing across all industries. The use of artificial intelligence, or AI, has quickly become widespread for automating tasks—from simple commands like setting a timer via Alexa or Siri to automating financial investments to self-driving cars. It’s beyond amazing. For healthcare payers, there are a host of opportunities to create efficiencies at scale in day-to-day operations. Benefits abound. But I want to start by discussing the challenges of AI in our industry.
Sam: Yes. First, I want to echo the excitement: there’s no shortage of opportunity. In the payer space, we’re seeing new use cases every day across all aspects of the business—from core administrative efficiencies to driving risk adjustment and quality measurement. AI is truly changing the way the industry does business.
But we must temper our excitement enough to stay alert to the challenges. Basically, we need to be intentional about how we use AI. We want AI to help support decision-making, help us find operational efficiencies, and help us identify patterns or connections across large data sets that we’ve never identified before.
The AI algorithms we leverage, however, shouldn’t be black boxes whose output is accepted without question. You know, data goes in, something happens to it, data comes out, and then we take some action based on it. It might feel like magic, but it’s not. Stakeholders must understand how AI is making its decisions, and more importantly, we must maintain the value people bring to the equation. Namely, machines cannot replace critical thinking, discernment, integrity, and contextual awareness.
Host: This sounds like a nice opportunity to jump into the topic of Responsible AI.
Sam: Yeah. Implicit in what I just mentioned is a key concept within Responsible AI, namely Transparency—the capacity to understand what your models are doing, the data that feeds or trains the models, and that data’s limitations. But before I go further down that path, let’s talk about what Responsible AI is.
So, Responsible AI refers to the development and use of artificial intelligence in a manner that supports transparency (like we were just talking about), accountability, safety, and privacy. Responsible AI emphasizes the importance of explainability and reliability in AI decisions, aiming to build trust and understanding between our AI systems and the people they support—our members. Following Responsible AI practices helps to ensure that the AI systems we develop are ethical and fair.
So, let’s think about this in the context of a recent lawsuit involving an AI algorithm, using the Responsible AI concepts I just shared.
In this case, involving a national payer, it’s alleged that an AI algorithm was used to systematically deny elderly patients’ claims for extended care, including stays in skilled nursing facilities and in-home care. The lawsuit claims that when these denials were appealed, about 90% were reversed, suggesting the algorithm was inaccurate.
The payer asserts there is no merit to the suit, and time will tell where it lands.
So, let’s look at this case in a Responsible AI context and see what we can learn from what happened, applying the RAI principles from the transparency, accountability, and safety standpoints. I’m omitting Privacy from the discussion, as nothing was shared to indicate any privacy concerns.
Before I get started, I want our listeners to understand that even following Responsible AI principles to a T doesn’t mean your AI solutions are entirely shielded from concern, but what it does mean is that you’ll be much more resilient if you do encounter issues.
So, from a Transparency standpoint, the details available to us are a bit thin beyond the fact that denials were issued. Nor is it clear what specifically triggered those denials.
Now, no one is asserting that 100% of all claims should be approved. Still, when denials occur, we would expect that (a) they represent a minority of claims, and (b) the reasons for denial are well understood from the payer standpoint—this set of contractual conditions was not met, and the claim is denied on that basis—and that the decision is clearly communicated to the member along with a mechanism to support an appeal.
Was the payer able to explain to patients and their providers how the determination was made? That isn’t clear, either. We know that in the cases that were appealed, 90% of the denials were reversed. If that’s true, it’s likely the algorithm wasn’t considering features specific to individual cases that would have provided additional weight to approving the extension of care. It could also be that those features were being considered but not ascribed enough weight to make a difference.
This case speaks directly to people’s fears around AI in that they’re not being treated as individuals but as members of a population. The algorithm says people your age require an average of 14 days of SNF care. You’ve had your 14 days, so your claim for an extension is denied—independent of any extenuating circumstances.
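To make that concrete, here’s a minimal, purely illustrative sketch in Python of the difference between a population-average rule and a review that carries individual context forward. The 14-day figure, field names, and routing labels are all hypothetical; they aren’t drawn from the lawsuit or any real payer system.

```python
# Purely illustrative sketch: a population-average rule versus a review that
# also carries individual context forward. The 14-day figure, field names,
# and routing labels are hypothetical, not taken from any real payer system.

POPULATION_AVG_SNF_DAYS = 14  # hypothetical population average

def naive_population_rule(days_used: int) -> str:
    """Denies any extension once the population average is used up."""
    return "deny" if days_used >= POPULATION_AVG_SNF_DAYS else "approve"

def context_aware_review(days_used: int, extenuating_factors: list[str]) -> dict:
    """Never auto-denies: anything past the population average is routed to a
    clinician, with the member's individual context attached to the referral."""
    if days_used < POPULATION_AVG_SNF_DAYS:
        return {"decision": "approve", "reason": "within expected utilization"}
    return {
        "decision": "route_to_clinical_review",
        "reason": "exceeds population average",
        "individual_context": extenuating_factors,  # e.g., comorbidities, complications
    }

print(naive_population_rule(15))                                  # deny
print(context_aware_review(15, ["post-surgical complication"]))   # routed, context preserved
```

The point isn’t the code itself; it’s that a rule keyed only to a population average has nowhere to put the individual circumstances that matter.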
Host: So, there’s an accountability component here, too, right?
Sam: Absolutely. I’ve been talking about transparency, but that discussion almost immediately blends into the RAI concept of accountability.
I mentioned earlier that we expect some claims to be denied on a reasonable set of merits. So, as a payer, do you know what the historical denial rate is for, in this case, extended skilled nursing facility stays? For the sake of argument, let’s say that the historical average is 10% of claims.
Now, your team is getting ready to launch a new AI-based algorithm to support claims reviews for extended stays, and during testing with this tool, claims denial rates go up to 30%. Your team is gearing up to put this system into production.
Is that 30% denial rate accurate? It might be, but who’s accountable for making that assessment? What features within the available claims data has the AI identified that are driving this increase? Is it something new we haven’t been factoring in under our legacy process? Are those new factors valid? Do they hold up in the real world? And if I return to the concept of transparency, can you even tell what those features are?
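Here’s a rough sketch of the kind of pre-launch check I’m describing, using the hypothetical 10% and 30% figures from this example. It’s purely illustrative; the sample sizes are made up, and the threshold is just a placeholder for whatever your governance team decides.

```python
# Illustrative sketch of the pre-launch check an accountable team might run:
# is the pilot denial rate (30% in our hypothetical) a meaningful departure
# from the historical baseline (10%)? All counts here are made up.
from math import sqrt

def denial_rate_shift(baseline_denials, baseline_claims, pilot_denials, pilot_claims):
    """Two-proportion z-test comparing the pilot denial rate with the baseline."""
    p1 = baseline_denials / baseline_claims
    p2 = pilot_denials / pilot_claims
    pooled = (baseline_denials + pilot_denials) / (baseline_claims + pilot_claims)
    se = sqrt(pooled * (1 - pooled) * (1 / baseline_claims + 1 / pilot_claims))
    return p1, p2, (p2 - p1) / se

p1, p2, z = denial_rate_shift(baseline_denials=1_000, baseline_claims=10_000,
                              pilot_denials=300, pilot_claims=1_000)
print(f"historical {p1:.0%}, pilot {p2:.0%}, z = {z:.1f}")

# A large z-score doesn't mean the new rate is wrong. It means the shift is
# real, and someone accountable needs to explain which features drive it
# before the model gets anywhere near production.
if abs(z) > 3:  # placeholder threshold; your governance team sets the real one
    print("Flag for review: denial rate shift needs a documented explanation.")
```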
So, when we consider the case of this national payer, the 90% success rate on appeals is compelling. Was there a team in place that examined the denial rate and clearly understood it to be reasonable?
Accountability in an RAI context means there must be clear responsibility for the decisions made by an AI algorithm. This includes addressing the performance and any negative outcomes of the AI's decisions. Do you have a team in place that’s ready to handle these issues?
As an important side note, if your organization is new to the AI space, I recommend you develop solutions that supplement human decision-making rather than replace it. This approach gives you the opportunity to allow your solutions to mature and will help you gather feedback on how well the AI works in its given capacity. That feedback is crucial in training the AI to correctly identify where its recommendations lack accuracy.
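As a minimal sketch of what supplementing rather than replacing human decision-making could look like, assume a hypothetical claims workflow where the model only recommends, a person makes the call on any denial, and every disagreement is logged as feedback for retraining. All names and identifiers here are illustrative.

```python
# Minimal human-in-the-loop sketch, assuming a hypothetical claims workflow:
# the model only recommends, a person makes the call on any denial, and every
# disagreement is logged as feedback for retraining. Names are illustrative.
from dataclasses import dataclass

@dataclass
class ReviewOutcome:
    claim_id: str
    ai_recommendation: str   # "approve" or "deny"
    final_decision: str      # denials are always decided by a human reviewer
    reviewer_agreed: bool

feedback_log: list[ReviewOutcome] = []

def review_claim(claim_id: str, ai_recommendation: str, human_decision: str) -> str:
    """AI approvals pass through; a recommended denial defers to the reviewer."""
    final = "approve" if ai_recommendation == "approve" else human_decision
    feedback_log.append(ReviewOutcome(claim_id, ai_recommendation, final,
                                      reviewer_agreed=(final == ai_recommendation)))
    return final

review_claim("C-001", "deny", human_decision="approve")  # reviewer overrides the model
disagreements = sum(not r.reviewer_agreed for r in feedback_log)
print(f"reviewer disagreement rate: {disagreements / len(feedback_log):.0%}")
```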
Host: Given what you’ve shared about transparency and accountability, it sounds like following Responsible AI principles provides a set of healthy best practices. Is that right?
Sam: Yes. From what we’ve chatted about so far, following Transparency practices means you’re implementing AI mechanisms you can explain. Building on that, following accountability practices means you have people in place to help manage the AI algorithm and clearly understand when the model is performing optimally and when it’s not. The two elements, in combination, help cover you on the front and back end of the AI solutions you implement.
Host: Absent these practices, are there any concerns around member safety?
Sam: Yes, Safety is another important Responsible AI principle. When I talk about safety in an RAI context, I mean that you should work toward developing AI solutions that add to your members' experience and don’t detract from it.
It’s important that the excitement of getting new capabilities into the market isn’t prioritized over the safety and fairness of our members’ healthcare.
AI tools that automate prior authorizations, like in the national payer case I mentioned, must be rigorously tested to ensure those systems do not deny appropriate claims. Better yet, ensure that there’s expert oversight when an AI-based solution recommends a claim denial.
Host: That’s a good point. I can see how algorithms can have unintended outcomes. What can payers do to vet their vendors and ensure their algorithms have no inherent flaws?
Sam: It’s a little challenging because there is a lot of intellectual property tied into the recipe of an algorithm. Vendors, understandably, don’t want to give that knowledge away. But on the flip side, it’s not sufficient to accept a vendor saying, “Oh, the model is great, and we’ve seen these results, and you’ll love it.” That’s the black box model we want to avoid. And that’s a problem in the healthcare space because we’re dealing with people’s lives.
In terms of a few questions you could ask, a good place to start is by mentioning that Responsible AI principles are important to your organization and your members, and you want to understand how the vendor supports RAI practices in their development and the lifecycle management of their models. Listen to how they address transparency, accountability, safety, and privacy.
What data sources were used to train their models? Are those data sources comparable to your own in terms of demographics like race, sex, and age?
Have they encountered any bias in their modeling efforts? How did they manage it?
Are the models autonomous, or is there human oversight? As I mentioned, consider avoiding autonomous solutions in favor of options that support human decision-making.
It’s also important to mention that the federal government is actively focusing on the development of a strategic plan to incorporate RAI principles in the Health and Human Services sector. That strategy will emphasize healthcare delivery and financing, long-term safety and real-world performance monitoring, monitoring for bias and discrimination, transparency and documentation, and reducing administrative burdens.
I recently heard a practical example of this that described the output of leveraging RAI principles as something akin to a nutrition label. If you pick up a box of cereal, it’s going to tell you exactly what ingredients and nutrients are in it. The label isn’t going to give away the recipe or secret sauce, but it should provide enough transparency for a consumer to make an informed decision.
Following RAI principles will position your organization to produce the equivalent of a nutrition label for the AI solutions you implement.
Expanding that example into an AI context means the ingredients portion of the label could list the data sources used for training, describing the quantity and quality of the data.
The Nutritional Facts section might include the type of AI model, like a neural network or decision tree.
The allergen info could highlight any known biases in the model and its training data, as well as limitations on the model’s applicability.
The Recommended Daily Value could reflect the model’s accuracy, precision, and recall—these are standard performance metrics.
Finally, the Serving Size could highlight the model's intended use cases and the context in which it performs best.
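One way to picture the result is a lightweight, structured model card kept alongside each AI solution. Here’s a sketch that mirrors the label sections above; every value is a placeholder, not a description of any real model.

```python
# One way to capture the "nutrition label" idea: a lightweight model card kept
# alongside each AI solution. The structure mirrors the analogy above; every
# value here is a placeholder, not a description of a real model.
model_card = {
    "ingredients": {                      # data sources used for training
        "sources": ["claims history", "clinical notes (de-identified)"],
        "volume": "placeholder record count",
        "quality_notes": "known gaps, collection period, refresh cadence",
    },
    "nutrition_facts": {                  # what kind of model it is
        "model_type": "gradient-boosted decision trees (example)",
    },
    "allergen_info": {                    # known biases and limitations
        "known_biases": ["underrepresentation of rural members (hypothetical)"],
        "limitations": ["not validated for pediatric populations (hypothetical)"],
    },
    "recommended_daily_value": {          # standard performance metrics
        "accuracy": 0.0, "precision": 0.0, "recall": 0.0,  # fill from validation runs
    },
    "serving_size": {                     # intended use and context
        "intended_use": "decision support for extended-stay claim review",
        "out_of_scope": "autonomous denial of claims",
    },
}
```

Even a lightweight record like this gives your stakeholders the kind of transparency the label gives a cereal shopper.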
What might be on the nutrition labels for your existing AI solutions or the ones you have planned? Is that a question you can answer? Is it blank?
As I mentioned earlier in this conversation, it’s important to be intentional about how you use AI. RAI practices will help ensure you’re on a sustainable path and will prepare you for future regulation we know is coming down the road.
Host: Sam, thank you for all the insights you offered today. It’s essential that the industry keeps having these types of conversations and prioritizes the implementation of safeguards so we can enjoy the benefits of AI responsibly.
Sam: I 100% agree. Thanks so much for having me.
Host: Thanks to our listeners. If you found value in this episode, please leave a review on Apple Podcasts or Spotify and share it with your colleagues on LinkedIn.