
I was wondering when they would start with the regulations.

Next will be: you're allowed to use it if your company/product is "HippocraticGPT"-certified, a bit like SOC or PCI compliance.

This way they can craft an instance of GPT for your specific purposes (law, medicine, etc) and you know it's "safe" to use.

This way they can sell enterprise (EE) licenses, which is where the big $$$ are.



  > Next will be: you're allowed to use it if your company/product is "HippocraticGPT"-certified, a bit like SOC or PCI compliance.

That's exactly how it should be. That's why one needs a license to practice medicine.

I work with a company in this space. All the AI tools are for the professional to use - the patient never interacts with them. The inferred output (I prefer the word prediction) is only ever seen by a mental health professional.

We reduce the professionals' workloads and help them be more efficient. But it's _not_ better care. That absolutely must be stressed. It is more _efficient_ care for the mental health practitioner and for a society that cannot produce enough mental health practitioners.
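
Roughly, the shape of the gating is something like this toy sketch (all names and the stubbed model call are hypothetical, not our actual stack): predictions go into a review queue that only a clinician drains, and nothing is ever returned to the patient directly.

  from dataclasses import dataclass, field

  def model_predict(notes: str) -> str:
      # Stand-in for the real inference call; purely illustrative.
      return "elevated PHQ-9 indicators; suggest clinician follow-up"

  @dataclass
  class ReviewQueue:
      # Holds predictions until a professional signs off.
      pending: list = field(default_factory=list)

      def submit(self, patient_id: str, notes: str) -> None:
          # The patient-facing path ends here: the prediction is
          # queued, never shown back to the patient.
          self.pending.append({"patient": patient_id,
                               "prediction": model_predict(notes)})

      def review(self, clinician_id: str):
          # Only the reviewing clinician ever sees the predictions.
          while self.pending:
              yield clinician_id, self.pending.pop(0)

  queue = ReviewQueue()
  queue.submit("pt-001", "intake notes ...")
  for clinician, item in queue.review("dr-smith"):
      print(clinician, "reviews", item["prediction"])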


What is the value of the certification on the AI output? If OpenAI says they'll guarantee the output and be accountable for errors, then that's of value, and would justify the certification. Is that the bar?


That's the entire concept of professional licensing, yeah. If you are a licensed professional, you are liable for your work output if it is malpractice.


Well, of course. Are you kidding? Think of the amount of $$$ that would shift from countless lawsuits to "deals with OpenAI" or things like that.

I don't believe they will go that far, but will land somewhere in the middle. And I think they might still be trying to figure it out.


I'm working on a PhD concerning LLMs and somatic medicine, as an MD, and I must admit that my perspective is the complete opposite.

Medical care, at the end of the day, has nothing to do with having a license or not. It's about making the correct diagnosis, in order to administer the correct treatment. Reality does not care about who (or what) made a diagnosis, or how the antibiotic you take was prescribed. You either have the diagnosis, or you do not. The antibiotic helps, or it does not.

Doing this in practice is costly and complicated, which is why society has doctors. But the only thing that actually matters is making the correct decision. And actually, when you test LLMs (in particular o3/gpt-5 and probably gemini 2.5), they are SUPERIOR to individual doctors in terms of medical decision-making, at least on benchmarks. That does not mean that they are superior to an entire medical system, or to a skillful attending in a particular speciality, but it does seem to imply that they are far from a bad source of medical information. Just like LLMs are good at writing boilerplate code, they are good at boilerplate medical decisions, and the fact is that there is so much medical boilerplate that this skill alone makes them superior to most human doctors. There was one study which tested LLM-assisted doctors (I think it was o3) vs. LLMs alone and doctors alone on a set of cases, and the unassisted LLM did BETTER than the doctors, assisted or not.
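
To make "on benchmarks" concrete, here is a toy sketch of how such an evaluation is scored (the vignettes, the stubbed diagnoser, and the exact-match scoring are all invented for illustration; real studies use larger case sets and expert graders):

  # Toy benchmark: score a "diagnoser" against gold diagnoses
  # on a shared case set.
  cases = [
      {"vignette": "fever, productive cough, consolidation on x-ray",
       "gold": "pneumonia"},
      {"vignette": "polyuria, polydipsia, fasting glucose 9 mmol/L",
       "gold": "diabetes mellitus"},
  ]

  def accuracy(diagnose, cases) -> float:
      hits = sum(diagnose(c["vignette"]) == c["gold"] for c in cases)
      return hits / len(cases)

  def llm_diagnose(vignette: str) -> str:
      # Stub standing in for a real model call.
      return "pneumonia" if "cough" in vignette else "diabetes mellitus"

  print(f"LLM accuracy: {accuracy(llm_diagnose, cases):.0%}")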

And so all this medicolegal pearl-clutching about how LLMs should not provide medical advice is entirely unfounded when you look at the actual evidence. In fact, the evidence seems to suggest that you should ignore the doctor and listen to ChatGPT instead.

And frankly, as a doctor, it really grinds my gears when anyone implies that medical decisions should be a protected domain for our benefit. The point of medicine is not to employ doctors. The point of medicine is to cure patients, by whatever means best serves them. If LLMs take our jobs because they do a better job than we do, that is a good thing. It is an especially good thing if the general, widely available LLM is the one that does so, and not the expensively licensed "HippocraticGPT-certified" model. Can you imagine anything more frustrating, as a poor kid in the boonies of Bangladesh trying to understand why your mother is sick, than being told "As a language model I cannot dispense medical advice, please consult your closest healthcare professional"?

Medical success is not measured in employment, profits, or legal responsibilities. It is measured in reduced mortality. The means to achieve this is completely irrelevant.

Of course, mental health is a little bit different, and much more nebulous overall. However, from the perspective of someone on the somatic front, overregulation of LLMs is unnecessary, and in fact unethical. On average, an LLM will dispense better medical advice than an average person with access to Google, which is what it was competing with to begin with. It is an insult to personal liberty and to the Hippocratic oath to argue that this should be taken away simply because of some medicolegal BS.


I appreciate your perspective. May I contact you? You are invited to send me an email, my Gmail username is the same as my HN username.


Or you'll need a prescription to be able to ask it health questions.


I think this would only apply if they sought government regulation that covers all AI players, not just their own company.


...or they want to be the first to do it, since the others don't have it yet.

OpenAI is the leading company; if they provide an instance you can use for legal advice, with the corresponding certification etc., it'd be easier to trust them than some random player. They create the software, the certification, and the need for it.


Suppose I am the CTO at a big legal firm, and I have been charged with finding an AI service to integrate with. What could an OpenAI salesperson tell me that justifies the value of this certification?

Let's suppose they've already told me that their company is a "leader in the AI space", to which I responded that the guy at Claude told me the same thing on Monday, with which I was equally unimpressed.


I guess right now the way to convince you is with SLAs, bonuses, perks, "always-on support", "a dedicated technical manager to help you design a scalable agentic framework", or whatever BS they can sell.

Or: "your tokens will cost 1/3, because we have crafted a particular gpt instance that is superoptimised for your use case only, look we just created one for you before the meeting".

Money is the biggest motivator. A company that is throwing $$$ left and right needs EE yesterday.

A lot like AWS credits: "Nice, I can use up to $200,000 for free"...



