
>After it was diagnosed I walked an older version of chat gpt through our experience and it suggested my son’s issue as a possibility along with the correct diagnostic tool in just one back and forth.

Something I've noticed is that it's much easier to lead the LLM to the answer when you know where you want to go (even when that answer is factually wrong!). It doesn't have to be obvious leading; just framing the question so that you mention all the symptoms you now know to be relevant, in the order that makes them diagnosable, is enough.

Not saying that's the case here; you might have gotten the correct answer on the first try. But when checking my now-diagnosed gastritis, I got everything from GERD to CRC depending on which symptoms I decided to stress and which events I emphasized in the history.
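
If you want to see the effect yourself, here's a minimal sketch using the OpenAI Python client (the model name and both prompts are made-up illustrations, not my real queries; you'd need an API key configured):

    # Minimal sketch of the framing effect. Same underlying facts; the "led"
    # prompt orders and stresses the symptoms that point toward the diagnosis
    # the asker already suspects. Model name and prompts are illustrative only.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    neutral = ("I've had occasional heartburn, some nausea, and mild upper "
               "abdominal pain for a few weeks. What could this be?")
    led = ("I've had gnawing upper abdominal pain that eases briefly after "
           "eating, nausea, and a history of regular NSAID use. "
           "What could this be?")

    for label, prompt in [("neutral", neutral), ("led", led)]:
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- {label} ---")
        print(reply.choices[0].message.content)

Run a few variations of the ordering and emphasis and you can watch the differential shift.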



In January my daughter had a pretty scary stomach issue that had us in the ER twice in 24 hours and that ended in surgery (just fine now).

The first ER doc thought it was just a stomach ache; the second thought a stomach ache or maybe appendicitis. They did some ultrasounds, meds, etc. We got sent home with a pat on the head, came back a few hours later, and still had no answers.

I gave her medical history and all of the data from the ER visits to whatever the current version of ChatGPT was at the time to make sure I wasn’t failing to ask any important questions. I’m not an AI True Believer (tm), but it was clear that the doctors were missing something and I had hit the limit of my Googling abilities.

ChatGPT suggested, among a few other diagnoses, a rare intestinal birth defect that affects about 2% of the population; about 2% of those affected become symptomatic during their lifetimes. I kind of filed it away and looked more into the other stuff.

They decided it might be appendicitis and went to operate. When the surgeon called to tell me that it was in fact this very rare condition, she was pretty surprised when I said I’d heard of it.

So, not a one-shot, and not a novel discovery or anything, but an anecdote where I couldn’t have subconsciously guided it to the answer as I didn’t know the answer myself.


Malrotation?

Our family had a "doctors are confused!" experience that ended up being exactly that.


Meckel’s diverticulum


Indeed, it is very easy to lead the LLM to the answer, often without realizing you are doing so.

I had a long ongoing discussion about possible alternate career paths with ChatGPT in several threads. At that point it was well aware of my education and skills, had helped clean up resumes, knew my goals, experience and all that.

So I said maybe I'll look at doing X. "Now you are thinking clearly! This is a really good fit for your skill set! If you want I can provide a checklist." I'm just tossing around ideas, but look, GPT says I can do this and it's a good fit!

After three idea pivots I started getting a little suspicious. So I tried to think of the thing I am least qualified to do in the world and came up with "design women's dresses." I wrote up all the reasons that might make it a good pivot (e.g. past experience with landscape design, which is the same idea: you reveal certain elements seductively but not all at once, match color palettes and textures, etc.). Of course GPT said, "Now you are really thinking clearly! You could 100% do this! If you want I can start making a list of what you will need to produce your first custom dresses." It was funny but also a bit alarming.

These tools are great. Don't take them too seriously; you can make them say a lot of things with great conviction. It's mostly just you talking to yourself, in my opinion.


>checking my now-diagnosed gastritis, I got everything from GERD to CRC depending on which symptoms I decided to stress and which events I emphasized in the history

So... exactly the same behavior as human doctors?


Exactly the same behavior as a conversation with an idealized human doctor, perhaps. One who isn't tired, bored, stressed, biased, or just overworked.

The value that folks get from ChatGPT for medical advice is due in large part to the unhurried pace of the interaction. Didn't get it quite right? No doctor huffing and tapping their keyboard impatiently. Just refine the prompt and try as many times as you like.

For the 80s HNers out there, when I hear people talk about talking with ChatGPT, Kate Bush's song "Deeper Understanding" comes immediately to mind.

https://en.wikipedia.org/wiki/Deeper_Understanding


ChatGPT and similar tools hallucinate and can mislead you.

Human doctors, on the other hand, can be tired, hungover, thinking about a complicated case ahead of them, nauseous from a bad lunch, going through a divorce, alcoholic, depressed...

We humans have a lot of failure modes.


Human doctors also know how to ask the right follow-up questions.


Yeah, my son's issue was rare and congenital. I wish I still had the conversation, but I can't remember which LLM it was and it's not in either my Claude or GPT history. It got it in two shots.

1. I described the symptoms the same way we described them to the ER the first time we brought him in. It suggested all the same things that the ER tested for.

2. I gave it the lab results for each of the suggestions it made (since the ER had in fact run all the tests it suggested).

After that back-and-forth it gave a list of 3-4 more possibilities, and the second item was the exact issue that radiology revealed (and surgery corrected).


> Something I've noticed is that it's much easier to lead the LLM to the answer when you know where you want to go

This goes both ways, too. It's becoming common to see cases where people become convinced they have a condition that doctors and/or tests rule out. They can get progressively better at coaxing ChatGPT into returning the diagnosis by refining their prompts and learning what to tell it as well as what to leave out.

Previously we joked about WebMD convincing people they had conditions they did not, but ChatGPT is far more powerful for these people.



