Report: Where will 2024 take AI in healthcare?

One year ago, we wrote about AI and large language models (LLMs) in healthcare. We looked at the potential for AI tools to help with staffing shortages, administrative burden, and patient education.

Since then, AI has gone mainstream – particularly ChatGPT, which now has 180 million monthly users. More than 92% of Fortune 100 companies are using AI in their business operations for tasks like pricing optimization and chatbot customer service, while personal use includes everything from completing homework assignments to hoaxing Sasquatch sightings.

And, of course, there are the deepfakes. In January, a wave of Taylor Swift deepfakes prompted a response from the White House; last week, a finance worker at a multinational firm was tricked into paying out $25 million to fraudsters who used deepfake technology to pose as company leadership on a video conference call. Think about that. A trained and savvy employee, already suspicious about the original funds transfer request, was completely convinced by AI creations that looked and sounded exactly like the colleagues he worked with every day.

How will this affect healthcare? Where is AI taking us?

Winning or Losing the AI Race

If there’s one comment that echoes across industry conversations, it’s this: “I thought we had more time.” By that, people mean more time to fully understand AI capabilities; more time to harness its power to our advantage; more time to protect ourselves from its dangers. But while some healthcare systems and business leaders theorize, test, and ponder the uses of AI in healthcare, others are racing full speed ahead.

Here are a few things we’ve learned:

~ This landscape is evolving fast. If you watched OpenAI CEO Sam Altman get fired and then reinstated within days last November – with Microsoft briefly announcing it would hire him in between – you saw firsthand how many ambiguities, unknowns, and questions abound even among AI’s creators.

~ Smart leaders are appointing dedicated AI directors to monitor the unfolding changes. Initially, many companies banned AI in their workplaces out of (well-founded) security fears, but most realized their staff, partners, and customers would use it regardless. Today, most companies have moved toward implementing some form of oversight and control.

~ AI tools are not as reliable as, say, Microsoft Excel or your typical SaaS solution. Because they are prone to “hallucinations,” their output must be checked thoroughly. OpenAI CEO Sam Altman tweeted of ChatGPT, “It’s a mistake to be relying on it for anything important right now… We have lots of work to do on robustness and truthfulness.” ChatGPT frequently cites fictional sources in written work, credits people in biographies with expertise they don’t have, and attributes achievements to companies that simply don’t exist.

~ Despite early challenges, healthcare administration is finding valid uses for AI, including automating prior authorizations and revenue cycle management, improving robotic surgery, and applying predictive analytics.

~ Because adoption outpaced training, millions of users have entered confidential information into the public domain. Pricing models, employee information, C-suite salaries, intellectual property, trade secrets, and ePHI have been recklessly fed into AI tools, making that information potentially available to anyone who asks. Mike Wooldridge, a professor of AI at Oxford University, reminded us that “anything you type into ChatGPT is just going to be fed directly into future versions of ChatGPT.” Retractions are not an option.

~ Crucially, people haven’t yet grasped the essence of AI. Wooldridge also pointed out that human nature inclines us to look for consciousness in AI – to anthropomorphize it and assign it a personality. People assume the software will apply some kind of common sense and decorum in its output. But AI, he said, “has no empathy. It has no sympathy.”


So What Does This Mean for AI in Healthcare?

It’s no secret that healthcare is always looking for ways to cut costs while improving patient care. Tech-forward healthcare leaders are enthusiastic about AI’s potential to expedite workflows and develop clinical innovations. If technology can mitigate the burden on overworked staff, even better.

The issue? Most providers, administrators, and patients haven’t been trained to use AI safely and productively. And blind trust in AI can be lethal in healthcare. So where can AI help or harm?

AI the Helper

Behind every patient visit is a river of non-clinical processes – invoices, prior authorizations, insurance claims, data management, physician credentialing, scheduling, revenue cycle management, HIPAA compliance, and more. AI is already being deployed to detect coding and documentation errors, refine reimbursement models, and build a unified data source out of information previously trapped in siloed legacy systems.

AI offers clinical benefits as well. Better EHR data access can help providers make more informed decisions at the point of care. By providing more context, this data can improve clinical care plans and reduce rehospitalization rates. Providers can also use AI to detect contraindications, translate discharge summaries into plain language, and draft faster responses to patient inquiries.

AI the Danger

When the Internet was born, patients immediately began looking up their medical symptoms to diagnose themselves. Those “cyberchondriacs” are now turning to Dr. AI. But despite earlier tests in which ChatGPT passed med school exams, it has failed to answer basic medical questions correctly. In one case, it stated that two medications taken together would have no adverse effects when in fact they interact, and it fabricated references to support its erroneous claims.

This means patients who turn to AI for answers risk getting bad, even harmful, medical advice. They also miss out on the context that only qualified providers can give, in which care is tailored to the patient’s cultural background, language, and medical history.

Data privacy and fraud are other concerns. Hackers love healthcare; we’re their #1 target. What happens when fraudsters can perfectly imitate a request for medical records or impersonate a chief of staff on a video call? Healthcare’s very size and complexity offer numerous entry points for deception, from a claims processor to a pharmacist to a charge nurse to a lab technician. Just one patient’s care journey involves innumerable people who don’t really know each other but depend on each other to complete clinical and administrative tasks. Verifying identity and truthfulness in even casual transactions is fast becoming an urgent task.

What Healthcare Systems Can Do

These recommendations aren’t a complete list, but a starting point for an actionable plan.

  1. Every healthcare workplace must work with its IT and security teams and, if necessary, bring in third-party consultants who understand AI. Together, they can develop an Acceptable Use Policy that all employees and contractors sign off on.
  2. That policy should include training on smart AI practices. Just as companies train their workforces to spot phishing emails, they’ll need to train clinical and administrative staff on how and when to use AI – and when not to. Providers will also need to warn patients that AI medical advice cannot be trusted.
  3. Next, they need to invest in the right tools, talent, and technologies to leverage AI’s capabilities fully. AI innovation isn’t just for the Mayo Clinic, Memorial Sloan Kettering, or the Dana-Farber Cancer Institute; all healthcare systems will be expected to implement AI-assisted operational efficiencies wherever possible.
  4. Employees who do use AI in their work will need to check all output carefully and thoroughly. Is it factually correct? Informed and instructive? Are the sources cited real and verified? Does the work reflect the company’s culture and lexicon?
  5. Any patient- or public-facing service channels must be tested again and again before release. That includes chatbots, appointment calendars, and patient correspondence. Let’s learn from the travel industry: this holiday season, after implementing AI-assisted service options, travel companies faced howls of outrage from customers who accused AI “service agents” of giving inaccurate, generic, and just plain weird replies that ignored their questions and concerns. The stakes in healthcare are much higher than a canceled flight or missing luggage – there’s simply no room for that kind of failure.


So what’s the final verdict for AI in healthcare?

There isn’t one. AI is a disruptor, with positive and destructive repercussions we probably can’t guess at yet. But it’s also poised to be an engine of innovation, improved service delivery, and some badly needed cost savings. We can get there if we install appropriate guardrails to ensure patient safety and clinical accuracy. That work starts now, because AI is already way ahead of us.
