Unless you’ve been living on the International Space Station, you’ve probably heard of the new artificial intelligence (AI) tool ChatGPT by now. It has dominated the news for the last two months, astounding everyone with its ability to code software, write TV scripts, solve physics equations, prepare legal arguments, and generally make humans feel like we’re about to be replaced.
Now it has taken the US Medical Licensing Examination (USMLE) and performed at or near the passing mark. Researchers at Ansible Health, a California-based healthcare provider, had the program take a version of the exam, which is required to practice medicine in the US. The USMLE consists of three exams, taken during and after medical school, that require long periods of study. ChatGPT answered questions from previous exams (after the researchers confirmed the answers were not part of its training data), and two physicians scored its responses.
The result: ChatGPT approached the USMLE pass threshold of about 60 percent. Pretty impressive for a program that received no specialized training.
If you haven’t played with ChatGPT yet, you can try it right here. (You can also try its sister program DALL-E 2 to create imagery.) It isn’t sentient, and it doesn’t search the web for answers; it generates responses from patterns learned in its training data, refined through reinforcement learning from human feedback, in which human reviewers rate its responses to help it improve over time.
How Does AI Fit into a Clinical Context?
For years now, patients have relied on “Dr. Google,” which they use to check symptoms, diagnose themselves, and sometimes afflict themselves with what is fondly known as “cyberchondria.” Now they’ve moved on to Dr. AI. Some of their activities are positive; ChatGPT can translate their medical records into plain language, provide detailed clinical analysis of their conditions, and detect contraindications. But there’s also plenty of room for incorrect self-diagnosis and misunderstanding. Without the contextual input of a provider’s care, patients can drift into ChatGPT-WebMD mashup territory and place too much trust in the program’s information.
And for clinicians and healthcare administrators? ChatGPT offers some valuable skills. So far, it seems to understand medical language better than other AIs. The Ansible researchers found it was capable of drafting invoices, simplifying jargon-dense radiology reports, and offering diagnostic guidance. Overall, it reduced time spent on documentation and patient care tasks by 33%.
The My Medical team tested ChatGPT by asking it a variety of medical questions, such as: “What medications are usually prescribed for a patient with a suspected NSTEMI?” Or: “Given the following guidance, would I test thyroid function in a patient feeling hot who has a chest infection? Please justify your answer.”
They were more than satisfied with its answers – and like the Ansible team, they found it offered complex information faster than a human. And, of course, ChatGPT did it for free.
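For readers who want to tinker beyond the chat interface, the same kind of informal test can be scripted against OpenAI’s API. The sketch below is a minimal illustration under stated assumptions: the model name, prompt wording, and settings are ours, not the My Medical team’s actual setup, and the output should be treated as an experiment, not medical advice.

```python
# Minimal sketch (assumption): posing one of the article's example questions to a
# ChatGPT-style model through OpenAI's Python SDK rather than the chat interface.
# The model name, prompts, and settings are illustrative, not the My Medical
# team's actual setup. Requires the `openai` package and an OPENAI_API_KEY
# environment variable. For experimentation only; not clinical advice.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = (
    "What medications are usually prescribed for a patient with a "
    "suspected NSTEMI? Please justify your answer."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",   # assumed model; substitute whichever is available
    temperature=0.2,         # lower temperature for more consistent answers
    messages=[
        {
            "role": "system",
            "content": "You are assisting clinicians. Answer concisely and note "
                       "when a question requires a physician's judgment.",
        },
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
```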
Can AI Help with Staffing Shortages?
For decades, business leaders have contemplated the prospect of automation and AI taking over parts of the workforce. In healthcare, AI has mostly been envisioned as a tool rather than a threat to, or a substitute for, human workers. When it comes to replacing staff, a common talking point is that it’s “like Excel replacing accountants.”
Yet at some point, AI-driven robots will be able to take over simple tasks, freeing providers to spend more time with patients. They might disinfect an exam room, fetch medical equipment, and drop off lab specimens. Nurse robots like Grace can take a patient’s vitals and speak multiple languages, eliminating the need for an interpreter. A report from EIT Health and McKinsey indicated that AI automation could assist with staff shortages, accelerate research, and streamline or remove 20-80% of physician and nurse activities. This is good news in an industry facing a talent shortage.
But when it comes to direct patient care, AI is far from replicating the human touch. ChatGPT applications tend to lack nuance, sensitivity, and authenticity. One egregious example: when Kentucky Fried Chicken used AI to create regionally targeted promotions, the program sent German customers a message on the anniversary of Kristallnacht urging them, “It’s memorial day for Kristallnacht! Treat yourself with more tender cheese on your crispy chicken. Now at KFCheese!”
There is zero room for a callous mistake like that in healthcare – and no substitute for the care of a compassionate provider.
AI and Medical Education
Over the last two months, ChatGPT and DALL-E 2 have produced gorgeous art, written short stories, and answered complex academic questions. This has annoyed visual artists who complain their work is being used without credit, unnerved writers who feel replaced – and it has stymied educators, who know some students are using ChatGPT to do homework and write papers.
Currently, the program’s writing style is bland, and it tends to plagiarize existing work without citation, which makes AI-generated text relatively easy to detect. But with an eye on the future, some companies are updating academic integrity software and developing other detection tools. Some school systems are reinstating pen-and-paper exams. It will be impossible to keep students from using AI entirely, which may spark a shift toward more oral examinations and fewer take-home assignments. New safeguards will be especially critical in nursing and medical schools, which will need to protect the integrity of academic assessments and clinical degree programs.
The Future of Healthcare Technology
One day, maybe soon, this article will be outdated and inaccurate. As AI keeps learning, many of its current shortcomings will be smoothed away. A world where AI can mitigate talent shortages and lighten the burden of provider documentation offers tantalizing possibilities for healthcare. That means hospitals and health systems can no longer afford to see AI as the future: it’s already part of the present, and both its advantages and its dangers are significant enough to demand attention now.