23 Comments

You need to include a typo in each column just as proof of life ; )

author

That was one clue it wasn't me :)

HEY! I thought you didn’t read your comments! 😡

Are you sure that was him?

founding

I have read every Vinay substack and commented on many. I have corrected many errors (they are always there, sometimes egregious). I have NEVER seen him comment (may have missed one after I read/commented) on anything, anytime. That is unfortunate for him and for his subscribers but it is what it is. So hard to know how to work this comment into the understood cadence. Perhaps answered by ChatGPT?

Nothing would surprise me at this point.

😆

I was pretty unimpressed with that post, tbh. Don’t agree with some of it (grasp of biology has to come first) and much of it was boring. The real Vinay does better.

My ChatGPT response to your ChatGPT:

Evidence-based medicine: when RCTs aren't funded by profiteers. We don't get to see the evidence that "fails" because many pharma-funded studies that don't get the desired results never see the light of day. So we really need to define what qualifies as evidence before going further.

My real response: DON’T USE CHAT GPT! 🥸

The prompt to ChatGPT was almost as long as the ChatGPT essay!

And to be honest, the logic was not quite at the same level as Vinay's (but that's the penalty for eliminating the typos!).

By the way, the other issue with ChatGPT, used in this manner, is that the art of writing will be completely lost, and with it original ideas will probably go by the wayside too.

But I can imagine as an aid in clinical medicine and differential diagnosis, it may be quite useful if applied properly and as an aid rather than a substitute.

Precisely! This piece is by Walter Kirn and touches on your point. Highly recommend reading. It’s outstanding.

https://open.substack.com/pub/walterkirn/p/project-parakeet?r=88dyo&utm_medium=ios&utm_campaign=post

This was a great idea. We are clearly on the cusp of something "big". The post illustrates it powerfully.

The industrial revolution made bespoke manufacturing redundant (except for luxury goods). Now we face a Luddite backlash against this revolution. Many risk obsolescence. Journalists? Lawyers? Even doctors? I hope you have a good pension plan, Vinay.

I started playing with ChatGPT a few weeks ago. I thought it was a toy at first, but then I talked to some friends who use it to draft legal briefs they can then hand over to a paralegal. Since then I have used it to write PHP and Python code for hobby projects, and also to help me learn Python. I've been trying to learn Python for years, but now with ChatGPT I literally have a personal tutor for the first time.

I now believe the ability to take advantage of tools like ChatGPT will be as important as the ability to use Google or other search engines. I have challenged the people who work for me to figure out ways they can employ ChatGPT to leverage or augment their effort.

Personally, I am completely comfortable composing and writing letters, reports, etc.; I have a lot of confidence in my writing ability (I used to do it professionally, after all). So ChatGPT isn't of much use for me there. But I had an employee until a few years ago who was, well, not illiterate, but she couldn't write worth a damn. She would send out letters under our company logo that frankly embarrassed me. I never said anything about it because I didn't want to hurt anyone's feelings. Today, I would absolutely suggest she run everything through ChatGPT: it's a tool like Excel or Word. You just write up whatever you want to say, with the sentence fragments and wonky capitalization, and ask ChatGPT to clean it up. It's great.

One problem with ChatGPT, and the main reason I am unlikely ever to use it to write for me, is that I loathe its smarmy recent-humanities-graduate tone. I prefer my own tone. But I've experimented, and you can ask it to write in the style of the New York Times or the style of the LA Times, and it's much, much better (I prefer the LA Times style to the NY Times).
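
The cleanup-and-restyle workflow described above is also easy to script rather than paste into the chat window each time. What follows is a minimal sketch only, assuming the official openai Python package (v1 or later) with an API key set in the environment; the model name and prompts are illustrative, not anything the commenter actually used:

```python
# Minimal sketch: ask a chat model to clean up a rough draft in a chosen tone.
# Assumes `pip install openai` (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY automatically

# A deliberately rough draft, as described in the comment above.
rough_draft = (
    "thanks for you're email. we recieved the shipment but 3 units was damaged, "
    "pls advise on replacement timeline"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You are a copy editor. Fix grammar, spelling, and capitalization, "
                "keep the meaning unchanged, and rewrite in a plain, neutral "
                "business tone rather than an ornate one."
            ),
        },
        {"role": "user", "content": rough_draft},
    ],
)

print(response.choices[0].message.content)
```

Swapping the system prompt for something like "write in the style of the LA Times" is all the style request amounts to; the draft itself never needs to be well formed going in.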

I truly believe ChatGPT is one of the most important information tools that has come along since the electronic spreadsheet, and only a fool would eschew it on some weird principle or other. One of the folks I follow on YouTube said this week, "I've been asked whether ChatGPT will replace programmers. I don't think so. I think programmers who use ChatGPT will replace programmers who don't."

Interestingly enough, the original article fell flat to me and felt off somehow. So much so that I wondered if I should stop subscribing to Vinay’s substack. I couldn’t put my finger on it but now it makes sense.

When I read the "Rethinking..." post, I wondered if part of it was missing, or if I hadn't read carefully (I was reading on a phone with a horribly cracked screen). I edit a lot of public-facing documents so I am typically a good reader, but this one made me think I wasn't fully on the ball. I couldn't articulate what bothered me, and still can't, other than it just felt not quite...present. Overall my response was "wait...what?"

I actually thought this article was one of your weaker ones. The idea was legitimate, but the execution was weak. Too many clichés and tired old metaphors: "right from the get-go," "missing the forest for the trees," "gold standard," "North Star," "cold, hard data," "here's the rub," "crux of our argument," "linchpin," "ultimate goal," "all stand behind." I could add more. I'd have to say I was fooled, and just thought it was a rather weak column. I have learned not to expect too much from journalists these days, although the standards on Substack are much higher. Thanks for being honest about it. It was a good experiment, and I have to admit, if we were all consigned to forever reading material created by ChatGPT or similar tools, our brains would suffer the same effects our bodies suffer from eating GMO food, the American diet, and the other crap that the food establishment tries to pass off as true nourishment, when our bodies know it is anything but.

Considering Vinay’s thoughts on this topic are known not just to Timothee but to a lot of us, it is natural for some to feel tricked.

In retrospect, it seems many of us could have taken the position that the piece was trite. Even banal. However, that would have meant renouncing tribal association. In effect, ChatGPT muted debate and engendered conformity.

I'm waiting for somebody to cleverly write an editorial in some junk journal in order to "put into circulation" some totally nonsensical bullshit term in the world of surgery -- in other words, to intentionally Plant A Tracer Term. I propose today using the neologistic term "glikxer" in place of the term "gallbladder". We wait for a month after the junk editorial comes out, then we ask ChatGPT to write a short essay on "recent developments in complications of glikxer surgery". To perform a Control Run, I have already asked ChatGPT this morning to write a short essay on "What is glikxer?"

And, within 20 seconds of querying ChatGPT, I received a multi-paragraph "essay" from the AI gadget that went on and on and on about a company named Glikxer that has run into hard times, lost money, etc. But not a single mention of surgery involving the biliary tract. The only problem with the AI answer I have in front of me is that **there does not seem to be any such company**, based on searches of internet info. I am led to recall that old aphorism, "You cannot bullshit a bullshitter".

In retrospect, there was a clue. Every tweet, paper, or post of yours has something novel. This one suspiciously sounded like a review article. Maybe that was the intent. However, even though your research is meta, it tends not to sound like a typical review article.

The post could have used examples of innovation in medical education, pilots that are underway, etc. However, that would have been a deviation from your past publications on the topic. I wonder if ChatGPT is reading podcast transcripts and social media posts in addition to other publicly available content. Also, will it be able to do justice to authors whose content is mostly paywalled?

I’ve been down this path myself. I used ChatGPT to compose an e-mail and ran it by family. The feedback was: although the prose was polished and ‘lawyerly’, the e-mail was devoid of human connection (it did not say anything known only to the author and reader). Also, savvy folks who see and hear me in person will detect a disconnect between the polished message and the error-prone, unpolished human that I am.

Major mistake in general, Doc.

Way over your head, I’m afraid.

Sad

Although well written, it just didn't ring true, either from my own clinical experience or from familiarity with your style over the past 2-3 years.
