Dr. Sina Bari, a practicing surgeon and AI healthcare lead at data company iMerit, has seen firsthand how ChatGPT can lead patients astray with faulty medical advice.
“I recently had a patient come in, and when I recommended a medication, they had a conversation printed out from ChatGPT that said this medication has a 45% chance of pulmonary embolism,” Dr. Bari told TechCrunch.
When Dr. Bari investigated further, he found that the statistic came from a paper about the effect of that medication in a niche subgroup of people with tuberculosis, which didn’t apply to his patient.
And yet, when OpenAI announced its dedicated ChatGPT Health chatbot last week, Dr. Bari felt more excitement than concern.
ChatGPT Health, which will roll out in the coming weeks, lets users talk to the chatbot about their health in a more private setting, where their messages won’t be used as training data for the underlying AI model.
“I think it’s great,” Dr. Bari said. “It’s something that’s already happening, so formalizing it in order to protect patient information and put some safeguards around it […] is going to make it all the more powerful for patients to use.”
Users can get more personalized guidance from ChatGPT Health by uploading their medical records and syncing with apps like Apple Health and MyFitnessPal. For the security-minded, this raises immediate red flags.
“Suddenly there’s medical data moving from HIPAA-compliant organizations to non-HIPAA-compliant vendors,” Itai Schwartz, co-founder of data loss prevention firm MIND, told TechCrunch. “So I’m curious to see how the regulators will approach this.”
But the way some industry professionals see it, the cat is already out of the bag. Now, instead of Googling cold symptoms, people are talking to AI chatbots; over 230 million people already talk to ChatGPT about their health every week.
“This was one of the biggest use cases of ChatGPT,” Andrew Brackin, a partner at Gradient who invests in health tech, told TechCrunch. “So it makes a lot of sense that they would want to build a more sort of private, secure, optimized version of ChatGPT for these healthcare questions.”
AI chatbots have a persistent problem with hallucinations, a particularly sensitive issue in healthcare. According to Vectara’s Factual Consistency Evaluation Model, OpenAI’s GPT-5 is more prone to hallucinations than many Google and Anthropic models. But AI companies see the potential to fix inefficiencies in the healthcare space (Anthropic also announced a health product this week).
For Dr. Nigam Shah, a professor of medicine at Stanford and chief data scientist for Stanford Health Care, the inability of American patients to access care is more pressing than the threat of ChatGPT dispensing poor advice.
“Right now, you go to any health system and you want to meet the primary care physician, the wait time will be three to six months,” Dr. Shah said. “If your choice is to wait six months for a real doctor, or talk to something that isn’t a doctor but can do some things for you, which would you pick?”
Dr. Shah thinks a clearer path to introducing AI into healthcare systems lies on the provider side, rather than the patient side.
Medical journals have frequently reported that administrative tasks can eat about half of a primary care physician’s time, which slashes the number of patients they can see in a given day. If that kind of work could be automated, doctors would be able to see more patients, perhaps reducing the need for people to use tools like ChatGPT Health without additional input from a real doctor.
Dr. Shah leads a team at Stanford that’s developing ChatEHR, software built into the electronic health record (EHR) system that allows clinicians to interact with a patient’s medical records in a more streamlined, efficient way.
“Making the electronic medical record more user-friendly means physicians can spend less time scouring every nook and cranny of it for the information they need,” Dr. Sneha Jain, an early tester of ChatEHR, said in a Stanford Medicine article. “ChatEHR can help them get that information up front so they can spend time on what matters: talking to patients and figuring out what’s going on.”
Anthropic is also working on AI products for the clinician and insurer sides, rather than just its public-facing Claude chatbot. This week, Anthropic introduced Claude for Healthcare by explaining how it could be used to reduce the time spent on tedious administrative tasks, like submitting prior authorization requests to insurance providers.
“Some of you see hundreds, thousands of these prior authorization cases every week,” said Anthropic CPO Mike Krieger in a recent presentation at J.P. Morgan’s Healthcare Conference. “So imagine cutting 20, 30 minutes out of each of them; it’s a dramatic time savings.”
As AI and medicine become more intertwined, there’s an inescapable tension between the two worlds: a doctor’s primary incentive is to help their patients, while tech companies are ultimately accountable to their shareholders, even when their intentions are noble.
“I think that tension is an important one,” Dr. Bari said. “Patients rely on us to be cynical and conservative in order to protect them.”

