Podcast | S2 E8 | Last updated: Jul 04, 2024

Responsible AI For Payers

Listen on Spotify and Apple Podcasts

Introduction


As payers adopt artificial intelligence (AI) technologies in different aspects of healthcare operations, there is a need for AI governance and the careful vetting of vendor AI practices to safeguard patient welfare.

AI solutions can offer valuable decision support, creating efficiency, timeliness, and accuracy at scale. However, AI solutions should not run autonomously, nor should their final results go unquestioned. It is essential that all stakeholders understand how AI solutions draw their conclusions, what data sources inform the models, and where biases can creep in. This kind of critical thinking, applied through human oversight, is the crux of responsible AI principles: transparency, accountability, and safety.

Tune in to this episode to hear the latest on: 

  • Current challenges using AI for decision support

  • Responsible AI principles 

  • The vital information needed for all stakeholders

  • Ways to implement best practice processes for AI oversight 

  • The AI algorithm lawsuit that's shaking up the payer space

Guest speaker

Sam Keith

Data Science, Marketing, and Analytics

Sam Keith is an expert in data science, marketing, and analytics. He has over 18 years of experience working in the technology product space, leading product development teams and initiatives to support consumer engagement, user experience, digital experience, and operations. Sam has worked in healthcare, higher education, pharmaceutical, and network security industries and is particularly interested in digital accessibility practices.  

Transcript

Host: Today, we’re talking about Responsible AI For Payers with Sam Keith. Sam is an expert in data science, marketing, and analytics. He has over 18 years of experience working in the technology product space, leading product development teams and initiatives to support consumer engagement, user experience, digital experience, and operations. He’s worked in healthcare, higher education, pharmaceutical, and network security industries and is particularly interested in digital accessibility practices. Welcome, Sam.

Sam: Thanks for inviting me to the discussion!

Host: Sam, with the release of ChatGPT in November of 2022, there’s been a significant technological shift that we’re seeing across all industries. The use of artificial intelligence, or AI, has quickly become widespread for automating tasks—from simple commands like setting a timer via Alexa or Siri to automating financial investments to self-driving cars. It’s beyond amazing. For healthcare payers, there are a host of opportunities to create efficiencies at scale in day-to-day operations. Benefits abound. But I want to start by discussing the challenges of AI in our industry.

Sam: Yes. First, I want to echo the excitement: there’s no shortage of opportunity. In the payer space, we’re seeing new use cases every day across all aspects of the business—from core administrative efficiencies to driving risk adjustment and quality measurement. AI is truly changing the way the industry does business.

But we must temper our excitement enough to stay alert to the challenges. Basically, we need to be intentional about how we use AI. We want AI to help support decision-making, help us find operational efficiencies, and help us identify patterns or connections across large data sets that we’ve never identified before.

Our solutions, however, in the form of the AI algorithms we leverage, shouldn’t be a black box whose output is accepted without question. You know, data goes in, something happens to it, data comes out, and then we take some action based on it. It might feel like magic, but it’s not. Stakeholders must understand how AI is making its decisions, and more importantly, we must maintain the value people bring to the equation. Namely, machines cannot replace critical thinking, discernment, integrity, and contextual awareness.

Host: This sounds like a nice opportunity to jump into the topic of Responsible AI.

Sam: Yeah. Implicit in what I just mentioned is a key concept within Responsible AI, namely Transparency—the capacity to understand what your models are doing, the data that feeds or trains the models, and that data’s limitations. But before I go further down that path, let’s talk about what Responsible AI is.

So, Responsible AI refers to the development and use of artificial intelligence in a manner that supports transparency, like we were just discussing, along with accountability, safety, and privacy. Responsible AI emphasizes the importance of explainability and reliability in AI decisions, aiming to build trust and understanding between our AI systems and the people they support—our members. Following Responsible AI practices helps to ensure that the AI systems we develop are ethical and fair.

So, let’s apply Responsible AI and the concepts I’ve shared to a recent lawsuit involving an AI algorithm.

In this case, involving a national payer, it’s alleged that an AI algorithm was used to systematically deny elderly patients’ claims for extended care, including stays in skilled nursing facilities and in-home care. The lawsuit claims that when these denials were appealed, about 90% were reversed, suggesting the algorithm was inaccurate.

The payer asserts there is no merit to the suit, and time will tell where it lands.

So, let’s look at this case in a Responsible AI context and see what we can learn from what happened, applying the RAI principles of transparency, accountability, and safety. I’m omitting privacy from the discussion, as nothing was shared to indicate any privacy concerns.

Before I get started, I want our listeners to understand that even following Responsible AI principles to a T doesn’t mean your AI solutions are entirely shielded from concern, but what it does mean is that you’ll be much more resilient if you do encounter issues.

So, from a transparency standpoint, the details available to us are thin; beyond the fact that denials were issued, it isn’t clear what specifically triggered them.

Now, no one is asserting that 100% of all claims should be approved. Still, when denials occur, we would expect that (a) they represent a minority of claims, and (b) the reasons for denial are well understood from the payer’s standpoint—a specific set of contractual conditions was not met, and the claim is denied on that basis—and that the reasoning is clearly communicated to the member along with a mechanism to support an appeal.

Was the payer able to explain to patients and their providers how the determination was made? That isn’t clear, either. We know that in the cases that were appealed, 90% of the denials were reversed. If that’s true, it’s likely the algorithm wasn’t considering features specific to individual cases that would have provided additional weight to approving the extension of care. It could also be that those features were being considered but not ascribed enough weight to make a difference.

This case speaks directly to people’s fears around AI in that they’re not being treated as individuals but as members of a population. The algorithm says people your age require an average of 14 days of SNF care. You’ve had your 14 days, so your claim for an extension is denied—independent of any extenuating circumstances.

Host: So, there’s an accountability component here, too, right?

Sam: Absolutely. I’ve been talking about transparency, but that discussion almost immediately blends into the RAI concept of accountability.  

I mentioned earlier that we expect some claims to be denied on a reasonable set of merits. So, as a payer, do you know what that statistical average is for, in this case, extended skilled nursing facility stays? For the sake of argument, let’s say that the historical average is 10% of claims.

Now, your team is getting ready to launch a new AI-based algorithm to support claims reviews for extended stays, and during testing with this tool, claim denial rates go up to 30%. Your team is gearing up to put this system into production.

Is that 30% denial rate accurate? It might be, but who’s accountable for making that assessment? What has the AI identified in the form of features within the available claims data that is leading to this increase? Is it something new we haven’t been factoring in as a function of our legacy mechanism? Are those new things identified as valid? Do they work in the real world? And if I return to the concept of transparency, can you even tell what those features are?
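One way to make that accountability assessment concrete is a simple statistical check before go-live. The sketch below is illustrative only; the counts are invented to match the 10%-versus-30% example from the conversation. It uses a two-proportion z-test to flag whether a new model's denial rate differs significantly from the historical baseline; a large z-score is a signal to route the model back to human review, not proof that anything is wrong.

```python
from math import sqrt
from statistics import NormalDist

def denial_rate_shift(hist_denied, hist_total, new_denied, new_total):
    """Two-proportion z-test: does the new denial rate differ
    significantly from the historical baseline?"""
    p1 = hist_denied / hist_total
    p2 = new_denied / new_total
    # Pooled proportion under the null hypothesis of no change
    pooled = (hist_denied + new_denied) / (hist_total + new_total)
    se = sqrt(pooled * (1 - pooled) * (1 / hist_total + 1 / new_total))
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical counts: ~10% historical vs ~30% during testing
z, p = denial_rate_shift(hist_denied=100, hist_total=1000,
                         new_denied=300, new_total=1000)
print(f"z = {z:.1f}")  # a large z flags the jump for human review
```

A significant shift is not itself an error; it simply tells the accountable team that the new system behaves differently from the legacy mechanism and that someone must validate why.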

So, when we consider the case of this national payer, the 90% success rate on appeals is compelling. Was there a team in place that examined that percentage of denials and clearly understood that denial rate to be reasonable?

Accountability in an RAI context means there must be clear responsibility for the decisions made by an AI algorithm. This includes addressing the performance and any negative outcomes of the AI's decisions. Do you have a team in place that’s ready to handle these issues?

As an important side note, if your organization is new to the AI space, I recommend you develop solutions that supplement human decision-making rather than replace it. This approach gives you the opportunity to allow your solutions to mature and will help you gather feedback on how well the AI works in its given capacity. That feedback is crucial in training the AI to correctly identify where its recommendations lack accuracy.
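As a rough illustration of that supplement-not-replace pattern, here is a minimal sketch (all names and fields are hypothetical) of a review flow in which the AI only recommends, a human makes the final call, and overrides are captured as feedback for retraining:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ClaimReview:
    claim_id: str
    ai_recommendation: str        # "approve" or "deny"
    ai_confidence: float
    final_decision: Optional[str] = None
    reviewer: Optional[str] = None

@dataclass
class ReviewQueue:
    """The AI recommends; a human reviewer makes the final decision.
    Disagreements are logged as feedback for model retraining."""
    feedback: list = field(default_factory=list)

    def decide(self, review: ClaimReview, reviewer: str, decision: str):
        review.reviewer = reviewer
        review.final_decision = decision
        if decision != review.ai_recommendation:
            # Human overrode the model: capture for retraining/analysis
            self.feedback.append(review)
        return review

queue = ReviewQueue()
claim = ClaimReview("C-1001", ai_recommendation="deny", ai_confidence=0.62)
queue.decide(claim, reviewer="nurse.reviewer", decision="approve")
print(len(queue.feedback))  # 1 -- one override captured as feedback
```

The design point is that the override log doubles as the feedback loop Sam describes: it records exactly where the model's recommendations lacked accuracy.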

Host: Given what you’ve shared about transparency and accountability, it sounds like following Responsible AI principles provides a set of healthy best practices. Is that right?

Sam: Yes. Based on what we’ve chatted about so far, taking transparency practices into account means you’re implementing AI mechanisms you can explain. Secondary to that, using accountability practices means you have people in place to help manage the AI algorithm and clearly understand when the model is performing optimally and when it’s not. The two elements, in combination, help cover you on the front and back end of the AI solutions you implement.

Host: Absent these practices, are there any concerns around member safety?

Sam: Yes, Safety is another important Responsible AI principle. When I talk about safety in an RAI context, I mean that you should work toward developing AI solutions that add to your members' experience and don’t detract from it.

It’s important that the excitement of getting new capabilities into the market isn’t prioritized over the safety and fairness of our members’ healthcare.

AI tools that automate prior authorizations, like in the national payer case I mentioned, must be rigorously tested to ensure those systems do not deny appropriate claims. Better yet, ensure that there’s expert oversight whenever an AI-based solution recommends a claim denial.

Host: That’s a good point. I can see how algorithms can have unintended outcomes. What can payers do to vet their vendors and ensure their algorithms have no inherent flaws?  

Sam: It’s a little challenging because there is a lot of intellectual property tied into the recipe of an algorithm. Vendors, understandably, don’t want to give that knowledge away. But on the flip side, it’s not sufficient to accept a vendor saying, “Oh, the model is great, and we’ve seen these results, and you’ll love it.” That’s the black box model we want to avoid. And that’s a problem in the healthcare space because we’re dealing with people’s lives.

In terms of a few questions you could ask, a good place to start is by mentioning that Responsible AI principles are important to your organization and your members, and you want to understand how the vendor supports RAI practices in their development and the lifecycle management of their models. Listen to how they address transparency, accountability, safety, and privacy.

What data sources were used to train their models? Are those data sources comparable to your own regarding demographics, race, sex, and age, for example?  

Have they encountered any bias in their modeling efforts? How did they manage it?

Are the models autonomous, or is there human oversight? As I mentioned, consider avoiding autonomous solutions in favor of options that support human decision-making.
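To make the data-comparability question concrete, here is a small sketch (the distributions are hypothetical) that measures how far a vendor's training-data demographic mix sits from your own member population, using total variation distance. The same idea applies to any categorical attribute: race, sex, age band, geography.

```python
def demographic_gap(vendor_dist, plan_dist):
    """Total variation distance between two demographic distributions
    (0 = identical mix, 1 = completely disjoint)."""
    categories = set(vendor_dist) | set(plan_dist)
    return 0.5 * sum(abs(vendor_dist.get(c, 0.0) - plan_dist.get(c, 0.0))
                     for c in categories)

# Illustrative age-band mixes (hypothetical numbers)
vendor = {"<65": 0.70, "65-74": 0.20, "75+": 0.10}
plan   = {"<65": 0.15, "65-74": 0.45, "75+": 0.40}
gap = demographic_gap(vendor, plan)
print(f"{gap:.2f}")  # 0.55 -- a large gap warrants deeper vetting
```

There is no universal threshold here; the point is to turn "are the data sources comparable?" into a number you can put in front of the vendor and discuss.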

It’s also important to mention that the federal government is actively focusing on the development of a strategic plan to incorporate RAI principles in the Health and Human Services sector. That strategy will emphasize healthcare delivery and financing, long-term safety and real-world performance monitoring, monitoring for bias and discrimination, transparency and documentation, and reducing administrative burdens.

I recently heard a practical example of this that described the output of leveraging RAI principles as something akin to a nutrition label. If you pick up a box of cereal, it’s going to tell you exactly what ingredients and nutrients are in it. The label isn’t going to give away the recipe or secret sauce, but it should provide enough transparency for a consumer to make an informed decision.

Following RAI principles will position your organization to produce the equivalent of a nutrition label for the AI solutions you implement.

Expanding that example into an AI context means the ingredients portion of the label could list the data sources used for training, describing the quantity and quality of the data.

The Nutritional Facts section might include the type of AI model, like a neural network or decision tree.

Allergen info could highlight any known biases in the model and its training data and limitations on the model’s applicability.

The Recommended Daily Value could reflect the model’s accuracy, precision, and recall—these are standard performance metrics.

Finally, the Serving Size could highlight the model's intended use cases and the context in which it performs best.
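Pulling those sections together, a hypothetical "nutrition label" could be represented as a simple model card. Every name and value below is illustrative, not a standard; the structure just mirrors the sections described above.

```python
# A hypothetical "nutrition label" (model card) for an AI solution.
# All field names and values are illustrative.
model_card = {
    "name": "extended-stay-review-model",          # hypothetical model
    "ingredients": {                               # training data sources
        "sources": ["claims history 2019-2023", "SNF utilization data"],
        "records": 1_200_000,
    },
    "nutritional_facts": {                         # model type
        "model_type": "gradient-boosted decision trees",
    },
    "allergens": {                                 # known biases/limitations
        "known_biases": ["under-represents rural members"],
        "limitations": ["not validated for pediatric claims"],
    },
    "daily_value": {                               # performance metrics
        "accuracy": 0.91, "precision": 0.87, "recall": 0.83,
    },
    "serving_size": {                              # intended use and context
        "use_case": "decision support for extended-stay claim review",
        "context": "recommendations only; human reviewer makes final call",
    },
}

# Quick completeness check: is any section of the label blank?
blank = [section for section, body in model_card.items() if not body]
print(blank)  # [] -- every section is filled in
```

The completeness check is the useful part: a blank section on your label is exactly the gap Sam asks about next.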

What might be on the nutrition labels for your existing AI solutions or the ones you have planned? Is that a question you can answer? Is it blank?

As I mentioned earlier in this conversation, it’s important to be intentional about how you use AI. RAI practices will help ensure you’re on a sustainable path and will prepare you for future regulation we know is coming down the road.

Host: Sam, thank you for all the insights you offered today. It’s essential that the industry keeps having these types of conversations and prioritizes the implementation of safeguards so we can enjoy the benefits of AI responsibly.  

Sam: I 100% agree. Thanks so much for having me.  

Host: Thanks to our listeners. If you found value in this episode, please leave a review on Apple Podcasts or Spotify and share it with your colleagues on LinkedIn.

