OxyContin certainly worked, and the markets demanded more and more of it. Who are we to take a moral stand and limit everyone's access to opiates? We should just focus on making a profit, since we're filling a "need".
Guess you missed the post where lawyers were submitting legal documents generated by LLMs. Or people taking medical advice and ending up with bromide poisoning. Or the lawsuits around LLMs softly encouraging suicide. Or the general AI psychosis being studied.
Besides the suicide one, I don't know of any examples where that has actually killed someone. Someone could search on Google just the same and ignore their symptoms.
>I don't know of any examples where that has actually killed someone.
You don't see how a botched law case can cost someone their life? Let's not wait until more people die to rein this in.
>Someone could search on Google just the same and ignore their symptoms.
Yes, and it's not uncommon for websites or search engines to be sued. Millennia of laws exist for this exact purpose, so companies can't deflect blame for harm back onto the people harmed.
If you want the benefits, you accept the consequences. Especially when you fail to put up guard rails.
That argument is rather naive, given that millennia of law exist precisely to regulate and disincentivize behavior. "If people didn't get mad they wouldn't murder!"
We've regulated public messaging for decades, and for good reason. I'm not absolving them this time just because they want to hide behind a chatbot. They have blood on their hands.
If you were offended by that comment, I apologize. You're 99.99% not the problem, and infighting gets us nowhere.
But you may indeed be working against your own best interests. I hope you can take some time to understand where you stand in life and whether your society is really benefiting you.
I am not offended. And I'll be the one to judge my own best interests (back to: "personal responsibility"). That is, I have more information about my own life than you or anyone else, and so I am best situated to make decisions for myself about my own beliefs.
For instance, I work for one of the companies that produces some of the most popular LLMs in use today. And I certainly have a stake in them performing well and being useful.
But your line of reasoning would have us believe that Henry Ford is a mass murderer because of the number of vehicular deaths each year, or that the Wright brothers bear some responsibility for 9/11. They should have foreseen that people would fly their planes into buildings, of course.
If you want to blame someone for LLMs hurting people, we really need to go all the way back to Alan Turing -- without him these people would still be alive!
>And I'll be the one to judge my own best interests, thank you.
Okay, cool. Note that I never asked for your opinion; you decided to pop up in this chain while I was talking to someone else. Go about your day or be curious, but don't butt in and then pretend "well, I don't care what you say" when you get a response back.
Nothing you said contradicted my main point. So this isn't really a conversation but simply more useless defense. Good day.
Not yet, maybe... Once we factor in the environmental damage that generative AI, and all the data centers being built to power it, will inevitably cause, I think it will become increasingly difficult to make the assertion you just did.
You're approaching a bridge, and there's a road sign in front of it with a pictogram of a truck and a plaque below it that reads "10t max".
According to the logic of your argument, it's perfectly okay to drive a 360-tonne BelAZ 75710 loaded to its full 450-tonne payload capacity over that bridge, just because it's a truck too.