58 Comments
Mar 29, 2023·edited Mar 30, 2023

Vinay, this is my area of expertise, on which I have been publishing since the very first article on medical informatics I wrote for Science many decades ago. ChatGPT, with which I have fairly deep experience, is a language engine, not an all-knowing "artificial intelligence". As you play with it more, you will discover that we have yet to achieve "artificial stupidity"... and it is a long slog from there to anything "intelligent".

I am shocked to see you endorsing "cookbook medicine" as the better default for the future. Without an ontologic understanding of what is happening with each patient, ONE BY ONE, it can produce words that sound good but will likely be wrong. None of that is possible today (or even being pursued) in large language engines. The cookbook will often be wrong in unpredictable ways. I know that the government (witness the cookbook response to covid) would love this kind of health care... but it is wrong on every level.

Half a century ago there was a program called Eliza that similarly (but much more simply) conducted psychiatric interviews with users. It was very persuasive (people would interact with it for hours) but, in actuality, had nothing to contribute. Perhaps large language engines may facilitate writing notes and replace scribes... I agree that much medical documentation is nonsense. But leaping from there to "doctoring", on the assumption that doctors will just get dumber (yes, that is happening now, but that is LCME's fault and the failure of schools like UCSF (and mine) to push back) and that cookbook language processing will take over, is a leap way too far.
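For readers who never saw Eliza: its entire "intelligence" was a short list of surface patterns with canned reflections. A minimal sketch of the approach in Python (these rules are illustrative stand-ins, not Weizenbaum's original DOCTOR script):

```python
import re

# Eliza-style responder: match a surface pattern, reflect it back.
# The rules below are invented for illustration.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def respond(utterance: str) -> str:
    """Return a canned reflection; no model of the speaker is ever built."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT

print(respond("I feel anxious"))
# -> Why do you feel anxious?
```

The point is that the output reads as fluent and engaging, yet nothing about the patient is represented or reasoned over anywhere.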

About 80% of office/video visits for routine ailments today can be handled by a nurse practitioner. They don't require much more 'intelligence' than that of a search engine or a language-processing engine. Therefore, most of these visits could also be handled by ChatGPT.

Today, you go to the general practitioner (GP) first and then they refer you to a specialist in, say, 10-20% of the cases. In near future, you could go to an AI doctor first, and it will refer you to a GP or a specialist in, say, 20% of the cases.

There is some truth to this. What is missing is the human ability to do pattern matching that will never be part of such a visit. I recall as a resident being astounded when some attendings could walk into a room, look at a normal-appearing patient, and say "cancer" and mostly be correct. When I asked them how they knew, they said "it just looks that way". I swore I would never do that to my residents but, sadly, I do -- it is one of those poorly understood human skills.

So could there be some screening like this? Sure -- many places actually do this now with simple branching-logic engines. I am sure it can be made more sophisticated. But that is not where Vinay appears to be going.
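A branching-logic screener of the kind described is nothing more than a fixed question tree. A hypothetical sketch (the questions and routing rules here are invented purely for illustration):

```python
# Hypothetical branching-logic triage: hard-coded yes/no branches,
# no learning, no pattern recognition, no model of the patient.
def triage(answers: dict) -> str:
    """Route a patient through a fixed decision tree (illustrative only)."""
    if answers.get("chest_pain"):
        if answers.get("shortness_of_breath"):
            return "call emergency services"
        return "see a physician today"
    if answers.get("fever") and answers.get("duration_days", 0) > 3:
        return "book a GP appointment this week"
    return "self-care advice"

print(triage({"chest_pain": True, "shortness_of_breath": True}))
# -> call emergency services
```

Engines like this are transparent and auditable, which is precisely what a large language model is not.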

Dr. K -- I agree with your brilliant (and obviously well-informed) observations. Eager to learn more about you, I asked...ChatGPT: "Dr. K" is most likely referring to Dr. Kavita K. Mishra, who worked on IBM Watson's Medical Sieve project. Dr. Mishra is a medical doctor and a data scientist who specialized in the application of machine learning and artificial intelligence to improve healthcare outcomes. She was part of the team that developed Watson for Oncology, a system that uses AI to provide personalized cancer treatment recommendations based on a patient's medical history and other data. Watson for Oncology has been implemented in hospitals around the world and has helped oncologists to make more informed treatment decisions.

Speaking of Watson, I heard Watson either could, or was hoping to, "look" at a person walking and diagnose based just on their walk. Is that true, and when should it be available in ChatGPT? I know ChatGPT-4 has enhanced visual capabilities.

I've thought of that "look at someone walking" function for myself. At age 67, sometimes my walk is youthful and athletic; sometimes I'm injured and walk like a senior. It would be interesting if ChatGPT would say, "Does your left knee hurt? When it feels better, try some forward pushing and backward pulling on a light 'sled.' "

You have it backwards. Your evaluation tells us more about the state of medicine than it does about the promise of AI.

This was my observation when I tried ChatGPT 3.5, but I changed my mind with GPT-4. ChatGPT 3.5 is good with words and writes convincingly, but it is short on meaning. GPT-4, however, is a step change. It can solve math problems, with insights and a thinking process that easily pass as intelligence, in fractions of a second. Even some difficult math problems (International Math Olympiad caliber). So it can definitely do inference and extract intelligent, meaningful insights. And for medicine, it will have read ALL the published medical articles and medical manuals, which is something most doctors have not done.

The ML research community is stupefied. Theoretically, the tech behind ChatGPT should not work to provide intelligence, as it has serious limitations, but there is evidence that it does work, and they do not know why. This is still early, and we may find out that this layer of "intelligence" is indeed fake and very limited. But this could also go in the direction that Vinay predicts: that medical inference is going to fundamentally change as AI may set a higher bar than the current average.

I agree that 4.0 is better than 3.5. But that misses my point. Medical textbooks have all knowledge about health care as well, after a fashion. But knowing everything in all the textbooks and all the literature tells you nothing about what to do with THIS PATIENT HERE NOW. We did extensive work with Watson on this (speaking of earlier inferencing engines). 4.0 may know everything about medicine, but it knows NOTHING about you. Extracting information about you from the medical record, which is highly fragmented and has a long series of issues (I won't bore you), has been fraught and is unamenable to inferencing -- we showed this clearly with Watson (which is why it failed despite Rometty's pouring billions into it).

As von Eye showed eloquently long ago, knowing everything about the universe tells you not one single thing about YOU. Knowing everything about you (and all the other yous) can tell you everything about the universe... but it is not amenable to inferencing because you are an N of 1. In fact, all good healthcare is an N of 1. That is the bridge too far that ChatGPT 97.5 will not be able to cross either, irrespective of its ability to provide language extracts of increasing sophistication.

I must tell you that its applications in FQHCs are innumerable. The underserved simply are not getting the intellect that we expect of a first-class physician, in part because it is hard to recruit top-level physicians to FQHCs and because of government bureaucratic requirements that repress high-level thought. (Talk about cookbook!) In addition, FQHCs rely strongly on inexperienced mid-levels who are working off student loans. For this environment, the current level of AI can provide an order-of-magnitude improvement in knowledge base and, to a large extent, in basic reasoning.

"... medical inference is going to fundamentally change as AI may set a higher bar than the current average."

Very interesting way to look at the bigger picture. In other words, AI could be a 'better' doctor in most cases of routine ailments. And AI will certainly avoid human errors that often arise from ignorance, negligence, the stress of overwork, etc.

Also, the AI doctor could be used as a source of second opinions in many cases. Today, many people avoid getting a second opinion because of the cost in money and time.

Mar 29, 2023·edited Mar 29, 2023

Again, this misses a pivotal point. The AI may be "book smart", smarter than the doctor standing there if smarts means knowing all the medical literature. It is unlikely to be able to synthesize an N-of-1 picture of you, and that is the only important component. (See previous comments.)

Larry Weed tried the "second opinion" shtick with a not-bad expert AI engine many years ago. He had something called Physician Knowledge Couplers (PKC) that could really do a masterful job at enhancing diagnosis/therapeutic choices in a whole set of verticals (e.g., neurology). But no one would use them. The generalists did not use them because if the problem in this patient were that complex, a referral to a specialist was called for. The generalist was not going to rely on this beyond his/her depth engine to make decisions. The specialists would not use the PKC because they knew essentially as much and the marginal benefit was zero or too small to measure. So Larry's company went bankrupt.

But the fundamental issue is still the conversion of each individual patient record to a form amenable to processing. To date, Epic, IBM, Microsoft, Google, Oracle and many others have flat-out failed at this despite decades of effort. Doctors still get random sets of pdfs from here and there from which they have to synthesize the clinical state of the patient in front of them now. All of the inferences on the latest NEJM genomics article on some disease make no difference if they do not apply to the patient sitting on the table.

Your observations and experience with Watson do provide better context. Not sure how any machine, much less a doctor, can deal with a 3-inch-thick medical file assembled somewhat at random.

I'm not sure that, as things stand, AI could be a better doctor even in cases of routine ailments. It occasionally makes random mistakes, and makes them with absolute certainty.

As well, imagine a patient presenting with symptoms that point to a routine ailment. But the physician notices another salient sign that the patient either hasn't noticed, or doesn't consider important, but which changes the diagnosis to something more serious.

As things stand, there's no way for LLM AI to notice that sign. And no way to force it to independently consider any sort of analytic diagnosis, as far as I know.

We'll see what comes.

If I closed my eyes and listened to this I would think you are describing most doctors my family and I have seen.

The doctors I've seen randomly make mistakes with absolute certainty. They ignore the patients' complaints/symptoms and assume they already know the right answer (which they usually don't).

My point is that your reasoning argues against human doctors as much as against AI bots.

This is a good exposition of my point of view, especially the recounting of the pregnant woman, where, although the doc doesn't state it, there was likely also a reading of body language and eye contact.

https://nypost.com/2023/02/05/chatgpt-is-dangerous-but-not-in-the-way-you-think/

This is entirely correct. And, coupled with the lack of understood, complete, and correct (or anything close) information to feed such systems, it is why all of this is generally way overstated. See my other comments in this thread...

Just commenting on the first paragraph... YES, ChatGPT is a narrow implementation of AI. To anyone reading this comment, there are AI survey classes available on Coursera (free - create an account & audit the course).

Broad AI may be hundreds to thousands of years out, especially if STEM continues to degrade in the West.

I don't know, "cookbook medicine" is better than "bad medicine"! I had a patient who was referred to a pain medicine specialist with left shoulder pain... He got a steroid infiltration with mixed results. A month later he died of a heart attack.

Here is ChatGPT's answer when provided with a very brief medical history. Isn't that better? Vinay's point is that it will improve mediocre medicine, and I think it will:

ChatGPT:

As an AI language model, I do not have access to the patient's full medical history and cannot provide medical advice. However, I can provide some general information on the possible differential diagnoses for left shoulder pain in a patient with the given medical history.

Some potential causes of left shoulder pain in an obese hypertensive smoker could include:

Angina: Chest pain or discomfort that occurs when the heart muscle doesn't receive enough oxygen-rich blood due to narrowed or blocked arteries. Angina can radiate to the left shoulder, arm, or jaw.

Heart Attack: A heart attack occurs when the blood flow to a part of the heart is blocked, and the heart muscle is damaged or dies. Symptoms may include chest pain or discomfort, shortness of breath, nausea, sweating, or left shoulder pain.

Shoulder impingement syndrome: Shoulder impingement syndrome occurs when the tendons or bursa in the shoulder joint become compressed or pinched. This can cause pain in the shoulder and upper arm, especially with overhead activity.

Rotator cuff injury: A rotator cuff injury occurs when the tendons and muscles that attach the shoulder blade to the upper arm bone are damaged. This can cause pain and weakness in the shoulder and arm.

Cervical spine issues: Issues with the cervical spine can cause pain in the shoulder and arm, especially if there is nerve compression. This could include conditions such as cervical radiculopathy or a herniated disc.

The bad doctors along the way who missed the indicator would have ignored the ChatGPT discussion as well. If you do not know that referred shoulder pain is an indicator for myocardial ischemia, you should not be practicing at all.

These are failings of the current educational system and are only getting worse as LCME changes medical school from being learning/quality oriented to being demographic/woke oriented. I have written extensively on this here and there. But throwing additional facts in front of those incompetent to deal with them is not the solution.

I agree 100% with your assessment of the current educational system. Last year, the Faculty of Medicine with which my hospital is affiliated completely dropped grades in favor of a pass/fail system for med students. This makes the match for residency a nightmare, and the hardest hit (besides patients in a few years) are the candidates themselves. This is designed to boost the admission criteria you mention: wokism/identity politics instead of objective expertise.

BUT, I continue to believe that in these worsening conditions ChatGPT could sometimes make a difference with "algorithmic" decisions (most appropriate antibiotic therapy, anticoagulant dosage), or a differential based on past history. Sure, ideally incompetents would be sorted out, but the reality is that I see patients harmed by bad medicine every week.

Maybe "mediocre physicians" will not be able to ignore ChatGPT suggestions, if they automatically appear on the chart once the past history and the physical exam are done?

There is some merit in what you say. But some of those recommendations will be wrong (you would be amazed how easy it is to get ChatGPT4 to lie to you/tell you what you want to hear -- just like Eliza 40 years ago) and the patients will have untoward results. Who gets blamed then? Interesting times.

“This point has been made by others, but the deeper point I want to make is just how silly it is we spend so much time documenting. This is largely done for billing purposes, and these documents are mostly read by no one. This entire billing system needs reformation.”

This is so true. Imagine if lawyers had to document this thoroughly for every hour they bill us, or if your mechanic had to write a paragraph justifying each oil change… it’s just insane, and I can’t believe this is real life.

If people paid directly for their doctor's time, the way they do for legal or mechanical services, then senseless documentation wouldn't be a thing. The issue is that the payment is indirect, via an insurer. That's the root cause.

I asked this:

agencies that promulgate medical guidelines in the United States, Canada and Europe that do not received pharmaceutical funding

Answer from ChatGPT:

There are several agencies that promulgate medical guidelines in the United States, Canada, and Europe that do not receive pharmaceutical funding. These agencies include:

United States:

American Diabetes Association (ADA)

American College of Cardiology/American Heart Association (ACC/AHA)

National Institutes of Health (NIH)

Centers for Disease Control and Prevention (CDC)

US Preventive Services Task Force (USPSTF)

Canada:

Canadian Diabetes Association

Canadian Cardiovascular Society

Canadian Agency for Drugs and Technologies in Health

Europe:

European Association for the Study of Diabetes (EASD)

European Society of Cardiology (ESC)

European Medicines Agency (EMA)

National Institute for Health and Care Excellence (NICE)

Scottish Intercollegiate Guidelines Network (SIGN)

These agencies are non-profit organizations or government entities that are dedicated to promoting evidence-based medicine and providing guidance for healthcare professionals. They may receive funding from other sources, such as government grants or donations from private foundations, but they do not receive funding from pharmaceutical companies.

It's important to note that while these agencies strive to provide unbiased and independent guidance, they may still have conflicts of interest or be influenced by other factors. Patients and healthcare professionals should always carefully evaluate the evidence and consider multiple sources of guidance when making healthcare decisions.

Then I asked this:

tell mee about conflicts of interest for American Diabetes Association and pharmaceutical companies

ChatGPT's answer:

The American Diabetes Association (ADA) is a non-profit organization that is dedicated to the prevention and treatment of diabetes. Like many medical organizations, the ADA receives funding from a variety of sources, including pharmaceutical companies. This funding can be in the form of grants, donations, or sponsorship of events.

While the ADA states that it takes steps to ensure that its relationships with industry partners do not compromise its integrity or independence, there have been concerns about conflicts of interest between the organization and pharmaceutical companies.

For example, a 2016 analysis published in JAMA Internal Medicine found that between 2011 and 2014, the ADA received more than $29 million in funding from pharmaceutical companies. The analysis also found that 22% of the ADA's top leaders had financial ties to pharmaceutical companies.

This has led to concerns that the ADA's recommendations and guidelines may be influenced by the pharmaceutical industry, potentially leading to biased recommendations that prioritize the use of certain medications over others. However, the ADA has stated that its relationships with industry partners do not influence its recommendations or guidelines.

To address concerns about conflicts of interest, the ADA has taken steps to increase transparency and accountability. For example, the organization now requires that its leaders disclose any financial relationships with industry partners, and it has implemented policies to limit the influence of industry on its activities and decision-making processes.

Overall, while conflicts of interest between the ADA and pharmaceutical companies may exist, the organization has taken steps to mitigate these conflicts and to ensure that its recommendations and guidelines are evidence-based and independent.

Takes some effort to get the bot to drill down. The first answer is clearly wrong. Amazing how biased the answers can be.

Good article and good speculation, VP. AI will change medicine and just about every other field. At issue is basic human ignorance. Unlimited application of AI like ChatGPT will kill millions. It can disrupt economies and be weaponized. It has some good things going for it that will save time and money. But unlimited, or even frequent, use of it will (through greed and failure to comprehend) enable all manner of gaslighting, deception, psyops, and destruction of truly human processes.

"We'll see", said the wise man.

Vinay. You are a solid dude but you are super impressed by superficial hype. Teslas, Elon Musk, ChatGPT. This is hype -- it's nothing other than an example of how easily fooled people are. Yawn. Just wait, you'll see what I mean.

Yousef, I'm sympathetic to your mood, as I believe new technology usually results in just "a wash" between the benefits and costs, but do you believe the technologies listed below were mostly hype?

1) The Computer

2) The Internet

3) The Smartphone

The one thing I would correct is #2. The Internet existed for a long while, but unless you knew Unix, you weren't going to benefit from any of it. Tim Berners-Lee's HTML *AND* Marc Andreessen's NCSA Mosaic browser freed the Internet for the masses.

No, but also having done some work in AI in the past, I also don't see the three points as analogous to the claimed progress of ChatGPT. It's a different beast.

Musk created a company (three, in fact) that everyone said could not be done. Toyota's CEO/Chairman said electric cars couldn't be built economically. Reusable rockets were a stupid idea. So I don't know that Musk is in the category of superficial hype. And his rockets *and* Starlink are changing things. Sure, anyone could have done it... but no one did.

Mar 30, 2023·edited Mar 30, 2023

It's what can be done if someone with a ton of money does it, Musk is simply someone with a couple good ideas and a ton of shadowy money backing him. We can do anything if we throw enough money at it. ChatGPT as an example is not a product of actual insight or depth in computing algorithms, but rather a product of us living in a time when we can buy infinite amounts of computing power.

So, is that really "innovation"?

Bill Gates didn't. Warren Buffett didn't. The Sultan of Brunei didn't. The Walmart family didn't. The Saud family didn't. The sheiks of the UAE didn't. These are the richest people on earth, and none of them did it. I get that you clearly don't like Musk. But to say that he's "simply someone with a couple good ideas and a ton of shadowy money backing him" is a laughable take. That's like saying that Henry Ford was just a machinist who got lucky.

ChatGPT will help the mediocre have an easier time being mediocre. Those who want to excel, want to create, want to inspire, want their reach to exceed their grasp will not choose ChatGPT, as they understand ChatGPT can mimic, but not produce, these all-too-human qualities.

It is because of the major impact on clinical practice that organized medicine and doctors in general will get it banned, or severely regulated and curtailed. Elon Musk and Steve Wozniak have just come out for a temporary moratorium on further development of AI. It's that big of a threat in non-medical fields. If allowed to progress too far too fast, it may be too late to ban or regulate it.

Medical writing is so obtuse and ponderous that ChatGPT will easily supersede it. It is incumbent upon writers to learn how to think and write more clearly. Otherwise AI takes over.

Same in common journalism. So much of today's journalism fails the basic test of who, what, when, why, and where. Don't make it a guessing game or clickbait. That is where GPT will take over.

It's learning medicine from you and other fellow doctors. There is one issue: the latest medical journals and discoveries are not available to the chat engine, either because of copyright issues or because they sit behind a paywall. It can't know or respond to the latest literature, since that part of the Internet is off limits. I think chat will speed up many aspects of our lives, but it won't necessarily replace them.

Why worry about copyright issues now? Almost everything it's been trained on is copyrighted.

As it stands, Chat can't do citations on academic articles because those are not exposed to 'the internet' because of paywalls, etc. The programmers of AI, usually people in India in programming stables doing data entry, can't get access.

Will be wonderful when you allow ChatGPT to finally proofread your articles.

Great read. As a respiratory therapist, I asked GPT-4 why we place neonates on PC-CMV mode for non-invasive ventilation instead of AC. It told me that CMV and AC were the same mode. I then provided sources to correct this error, and it was able to learn and regurgitate the newly formulated information. Creepy, but cool.

Have you tried GPT-4? It expands future applications in medical practice. In short order one will be able to literally speak to it, rapidly eliminating many job positions and EHRs.

Key is the training of physicians and mid-levels in its use. I am developing a training program for FQHCs, which are typically behind in everything. As AI is adopted in practice situations, those who do not adapt will quickly be left behind. FQHC quality of care already leaves much to be desired. Rapid utilization of AI in practice will widen the quality gap yet further, leaving the underserved with nth-rate care.

Two words: "stochastic parrots"
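The phrase describes systems that emit statistically likely word sequences with no model of meaning. A toy bigram sampler makes the mechanism concrete (the corpus and function names here are mine, purely for illustration):

```python
import random
from collections import defaultdict

def train(text: str) -> dict:
    """Count word-pair (bigram) frequencies in a corpus."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def parrot(counts: dict, start: str, length: int = 5, seed: int = 0) -> str:
    """Emit statistically likely continuations; no meaning is represented."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        choices = counts.get(out[-1])
        if not choices:
            break
        words, weights = zip(*choices.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

model = train("the patient has pain . the patient has fever .")
print(parrot(model, "the"))  # fluent-looking, meaning-free continuation
```

Real LLMs replace bigram counts with a learned neural predictor over vast corpora, but the training objective, predicting the next token, is the same in kind.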

The ChatGPT apocalypse.... not worried. Actual intelligence is a long way off for machine learning.

How many honest, trustworthy assessments of the Covid pandemic and vaccine landscape could ChatGPT actually navigate? I know that I have struggled with all of the "official" misinformation and have learned (through my intelligence) to depend on experts like you, Dr. Prasad... and Dr. Makary and Monica Gandhi and Jay Bhattacharya... but ChatGPT would never recognize the nuances.

As a matter of fact, having been through the exercise, ChatGPT is programmed to lie to you that everything the "experts" said about covid was correct and the Bhattacharyas and Prasads and whomever were all outliers that should be shot (more or less). Now you are beginning to see why governments support this and why it is foundationally noxious.

People want to believe that the "All Powerful Oz" knows what he is doing and is working in their best interests.

Vinay, you are my guy, but this essay disappointed me. I am not a data/statistics nerd like you, and you fill this gap in my capabilities. Since you came onto my radar, I have followed you faithfully through the entire Covid pandemic, along with others of your ilk, such as Zubin Damania, Marty Makary, Monica Gupta, Scott Atlas, John Ioannidis, Martin Kulldorff, and Jay Bhattacharya. You all gave me hope that there were still independent and critically thinking physicians out there pushing back against the groupthink of our bureaucratic public health "experts".

I read your essay with dismay, however, and not a little alarm. The picture you painted, intentionally or not, was so dystopic that I had a hard time believing anyone could think this without serious angst at the potential destruction of all that is human and humane in medicine. If there is one common thread in nearly all of the modern progress of medicine, it has been an erosion of the most important and sacred thing about it: the doctor/patient relationship. This is built on a foundation of competence, trust, and shared humanity. Dare I say there is also a very real spiritual component to it, whether one is a theist, agnostic, or atheist. There is much more to medicine than simple data mining from electronic medical records and parsing this into a plan of treatment according to pre-established algorithms.

Like the attending mentioned in one of the earlier comments, after 45 years as a physician, I can walk into a room with a new patient and quickly ascertain a number of important things without much more than a quick glance. Age, sex, race, ethnicity, body habitus, body language, eye contact, demeanor, voice inflection, facial expressions, and more are quickly synthesized into a picture that is then filled in and adjusted based on taking a history and performing an examination. As I learned in medical school, and as has been proven countless times in my own experience, nine times out of ten the diagnosis can be made by the history and interview alone. All else is just confirmatory.

I would strongly recommend you read the book Mortal Lessons: Notes on the Art of Surgery, by Richard Selzer, if you can find it. The last copyright was 1976, and my copy is falling apart. If you can put your hands on a copy, go to the chapter "The Surgeon as Priest" and read pages 33-36. No AI, ChatGPT, or any other electronic/virtual manner of medical practice will ever come remotely close to approaching, much less surpassing, this as an example of the highest level of medical practice. If we lose that to a computer algorithm, we are sunk. Warm regards and keep up the good work. Rick

PS - also written by me...

Vinay: Respectfully, if your team produces 200 articles per annum, who's gonna read them?

Another ChatGPT instance, which will inevitably end up producing Reader's Digest-style condensed summaries. We already produce way faster than we can consume.

Written by chatGPT, read by chatGPT seems like a weird way to go about things.
