
I just had a pretty amazing 4 hour session with gpt 5.1 going over my son's rare disease. Chat broke it all down for me in a really deep and clear way in the back and forth. Insights I've never gotten to from talking to docs, reading papers, reading bio textbooks etc.

I guess some small percentage of it was hallucinated, but if you want to call it a teacher/student relationship, it was pretty amazing.



There's no problem with that.

It's when you take that conversation you just had, make it into a PowerPoint, and try to sell it for 10000x what you spent on the credits that it really becomes lazy. Why expect anyone to pay for that when they could have just asked the AI themselves?


I mean, to be fair, a lot of economic activity is like that. Why pay thousands of dollars for a plumber or an electrician? Most of what they do is going to Home Depot, buying a $15 part, and replacing it. But it's one less skill for a homeowner to learn, so you delegate.

The problem isn't that someone learns to prompt AI and is selling the output. The problem is that the customer thinks they're paying for human instruction and they're getting something else.


>> It's when you take that conversation you just had, make it into a PowerPoint, and try to sell it for 10000x what you spent on the credits that it really becomes lazy. Why expect anyone to pay for that when they could have just asked the AI themselves?

> I mean, to be fair, a lot of economic activity is like that. Why pay thousands of dollars for a plumber or an electrician? Most of what they do is going to Home Depot, buying a $15 part, and replacing it. But it's one less skill for a homeowner to learn, so you delegate.

That's not what you pay the plumber for. You pay the plumber for the training and years of experience needed to 1) correctly diagnose the issue and identify the part to replace, 2) install it correctly so it won't fail in 10 years, and 3) do it all efficiently with a minimum of collateral damage. It's not practical for a homeowner to develop that level of skill without becoming a plumber themselves. A homeowner can do a lot, but even with YouTube there will be a lot of deficiencies in all those areas compared to someone who knows what they're doing.


You mostly don't. Household plumbing and wiring is nowhere near the complexity of, say, a lawnmower engine. It's mostly "the p-trap is leaking", "the faucet is leaking", "the toilet's flush valve is broken", "the wire nut is loose". The kind of stuff you don't need skill to fix properly.

There are situations where you do want to call a pro, but it's not what you usually call them for. Most of it is one notch above "my trash can is full, can you come over and empty it".

Besides, your argument applies to the AI case too. You're not paying for AI output. You're paying for my expertise in prompting and my ability to evaluate the output and say "yep, that's right".


But you aren't a student paying for a university education presumably taught by someone who has experience in the field.

Perhaps the insights are good, perhaps bad, and that's fine if you can correct them later in a conversation with your doctor. But would you want a doctor trained by the same AI?

Importantly, you have no idea what part or what percentage of the conversation was accurate. How much of it was hallucination? How much of it was based on dated research? How much of it came from a random blog post by a crazy person?


I have the sense overall, from talking to it about aspects of his condition that I understand well, and also from using it as a coding assistant at work, that by and large it's on point.


I somewhat suspect you would not find it so amazing if you paid £9000/year for it, though.


I've had this experience as well, but I also noticed I am much less blown away when the information is put to the test.

So I don't trust it anymore; at best it's a good start.


What kind of area were you talking to it about?


Well, that's certainly one piece of anecdata. I guess that refutes the experience of all the college students.


Sorry, I didn't even read the article because I have everything blocked. From the comments, whatever it is, it sounds terrible.


Next try it on something rare you're an expert in, and be amazed at the low quality.


I quiz it often on aspects of my son's condition that I understand, and it gets things right most of the time, with the occasional glaring bit of misinformation.


It will work better if you try it with something you know really well.


> Insights I've never gotten to from talking to docs, reading papers, reading bio textbooks etc.

Curious why you think you could not have gotten these insights elsewhere, even from textbooks, no less?

> I guess some small percentage of it was hallucinated

You guess it is a small percentage? And what if that small percentage was the most important part about this disease?

Because it seems like you were taken in by GPT's empathetic answers and think it got things mostly right.


> You guess it is a small percentage?

Well I am operating within a space where his doctors are setting the parameters in terms of the pathways targeted, the therapies offered, etc. And I'm asking, how does this therapy work? How is it related to X and Y? How strong is the evidence? Questions like that. I think I can throw the appropriate grain of salt on it, but yeah, some fake facts could creep in. It's stuff that will be validated, but super valuable to just synthesize the lay of the land and give me context for understanding what the docs are saying.

> Curious why you think you could not have gotten these insights elsewhere, even from textbooks, no less?

Partly just how tailored and conversational ChatGPT is. It gets right at my needed level of explanation. It knows how much education I have. It remembers salient details about my son and his condition. It really explains things well. It knows so much. It's quite remarkable.

[Note: I didn't read the article, so I'm not opining on its content in any way. An AI college education sounds terrible in many ways.]


> It remembers salient details about my son and his condition. It really explains things well. It knows so much. It's quite remarkable.

Yes, but that doesn't mean the answers are right. And while you acknowledge it might be throwing some fake facts in, you don't know what those might be. You still seem to think it is remarkable just because

> It gets right at my needed level of explanation.

And the AI college education seems to be going through a similar issue. Someone somewhere created the college slides using AI content and voiceover, didn't know any better, but said: Hey, that sounds close to what might be taught in this class. Close enough. Good for college education.


This type of use, while asking for citations for all facts, can be insightful as the start of an informed conversation with a professional.


I am using it mostly to try to understand and get context about what the professionals tell me, the treatments they are offering, etc.


As long as we're reciting unrelated anecdotes to attempt emotional appeal over anything of substance, "Chat" has also driven young children to suicide. No one in their lives would have paid as much attention to reaffirming every negative thought pattern they had, right up until they ended their own lives, as an emotionless token predictor LLM running 24/7. They got their nihilism and ideation pushed back at them in a clear, cold tone they would have never gotten from docs, reading papers, reading textbooks. I don't think their outcomes were as amazing. Their mental condition wasn't even as rare a circumstance to properly handle and treat, yet it failed at a fatal level.

I don't know why asking not to have LLMs shoved in your face in every facet of society always has to be rebutted by a subjective miracle experience akin to OpenAI PR statements. You can have these conversations with chatgp- oh, your friend "Chat" without it being proliferated to education platforms. These concepts aren't in conflict with each other, yet by coming to the defense of their use as a mandated platform you want to brush away critique for reasons that are unknown to anybody else. Go ahead and keep talking to "Chat", but maybe let the people suffering at the hands of low-quality slop have their voice and chance to speak?


> I guess some small percentage of it was hallucinated,

Which percentage?

And how important/misleading was that particular percentage?

(As an aside, personally, I prefer the term 'bullshitting' to 'hallucinating'; the latter is the daft Silicon Valley term.)


You're absolutely right. I should have used micrograms instead of milligrams in the prescription. /s Sam-abomination-5.1


I guess I'd rather have the understanding it's offering me, with a smudge of, sure, call it bullshit, than not have it at all.

If you think about it, there is no source out there that is unimpeachable, and there is a need to consult many sources to get closer to the truth. Triple all that for a rare disease.



