
Taking away tools doesn't seem to me like the best response, the same way taking things away rarely is. If the problem is people not using it right, that tells me it was designed wrong for what people need it for. Like if the issue is running it on too little text, then enforce a minimum length so you get at least that minimum reliability.

Same goes for representing what the results mean. If people don't understand statistics or the math, then show what it means with circles or coins or something like that. The point is, removing options never seems like a good thing, especially if the reason is being cynical and judging people as if they're beneath deserving them. It doesn't make sense to me.



The problem isn't people not using it right; the problem is that the tool can never work, and just by being out in the world it causes harm.

If I have a tool that returns a random number between 0 and 1, indicating confidence that text is AI generated, is that tool good? Is it ethical to release it? I'd say no, it isn't. Removing the option is far better because the tool itself is harmful.
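
To make the hypothetical concrete, here is a minimal sketch (Python, all names mine) of such a "detector": its confidence score is generated independently of the input, so any decision built on top of it is a coin flip dressed up as a measurement.

    import random

    def fake_detector(text: str) -> float:
        # "Confidence" that the text is AI-generated: ignores the input entirely.
        return random.random()

    # The same text gets arbitrary, unrelated scores on repeated calls,
    # yet each number looks like a measurement to anyone who trusts it.
    print(fake_detector("I wrote this myself."))  # e.g. 0.83
    print(fake_detector("I wrote this myself."))  # e.g. 0.12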


>just by being out in the world it would cause harm

sounds like AI rather than AI detection to me. :)


I don't agree with that premise. I don't know that it can't work; that would suggest it's worse than a coin flip no matter what. I don't think it's that bad, or at least nobody has shown me anything demonstrating it's that bad. You'd have to show me that it can't work, and I know that's a pretty big ask.


All that has to be shown is that the tool is as bad as or worse than random today in order to justify removing it today.


From the article, "while incorrectly labeling the human-written text as AI-written 9% of the time."

From what the article we're talking about says, it's clearly not worse than random, not by far. The thing you most want to avoid is wrongly labeling human writing as AI-written, so 9% seems pretty good. It only identified 26% of AI text as "likely AI-written," but that's still better than nothing, and better than random. What we don't know, or at least what I don't know from the article, is whether those numbers include the problem cases of under 1,000 characters. It doesn't say what the *best case* is, only the general case.
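
Taking the article's figures at face value, a quick back-of-the-envelope check (a sketch; the 26% and 9% come from the article, the rest is arithmetic) confirms the flag carries real information:

    # From the article: P(flagged | AI) = 0.26, P(flagged | human) = 0.09
    tpr = 0.26  # share of AI text flagged "likely AI-written"
    fpr = 0.09  # share of human text wrongly flagged

    # A flag is about 2.9x more likely on AI text than on human text,
    # so the tool is informative, i.e. better than a coin flip.
    print(f"likelihood ratio: {tpr / fpr:.1f}")  # ~2.9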

Anyhow, "worse than random" doesn't seem to me to be the issue here.


You're right, I should have been less specific. If the harm of false positives is significant, you may not need random or worse-than-random results to feel obligated to stop the project.


alright. thanks for your thoughts


I'd want to see a lot better than "better than random" for this type of tool, which is already being used to discipline students for academic misconduct, to make hiring and firing decisions over who used AI in a CV or job task, and generally to check whether someone deceived others by passing off AI writing as their own. A wrong result can impugn someone's reputation.
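
To put rough numbers on that worry (a sketch: the 26% detection rate and 9% false positive rate come from the article, while the prevalence figures are assumptions for illustration), the share of flagged people who are actually innocent depends heavily on how common AI use is in the pool being screened:

    # Article's figures: detects 26% of AI text, wrongly flags 9% of human text.
    tpr, fpr = 0.26, 0.09

    # Assumed (illustrative) base rates of AI-written submissions.
    for prevalence in (0.05, 0.20, 0.50):
        flagged_ai = tpr * prevalence
        flagged_human = fpr * (1 - prevalence)
        precision = flagged_ai / (flagged_ai + flagged_human)
        print(f"{prevalence:.0%} AI submissions -> "
              f"{precision:.0%} of flagged texts are actually AI")

    # Roughly 13%, 42% and 74%: unless AI use is already widespread,
    # most of the people the tool flags would be innocent.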


Wherever you draw the line, someone's going to be upset at where it is. You're echoing the other commenter's concern, really everyone's concern. It's the same issue with everything from criminal justice to government, so there's not much value in yelling personal preferences at one another, even assuming I disagree, which I don't. That isn't what I'm about either way, and it doesn't change what I said: removing options because you assume people suck is a bad way to go about anything.

Might as well remove all comment sections: people suck, so assume there's no value in having one. Pick any number of things like that. It just isn't a good way to think about anything, let alone a defense of a company removing something, since the same logic justifies removing your ability to criticize or defend it in the first place. Are you an AI expert? Assume not, so why let you talk about it? Or me? People suck, so why let you comment? On and on like that.


There are numerous people I've tried to get to comprehend statistics, important medical statistics, and these were doctors, so you would assume they're smart enough to understand. There just seems to be a sizable subset of the population that is blind to statistics, and nothing can be done about it. Even sitting down and carefully going through the math with them doesn't work. No matter how deep into the visualization rabbit hole you go, there will still be a subset that will not get it.


Alright, let's say that's how it is. How happy would everyone else be to get treated like that, even though they're not like that? I'd be pretty miffed, and I'm no Einstein. My problem is with saying it's a good thing to *remove* options just because some people don't know how to use them. Use that kind of logic for other things and you'll paint yourself into a corner with a very angry hornet trapped in it, which isn't the sort of thing you want to encourage if you assume you'd end up the one trapped. I don't know if my message is coming across right, do you get me?


What about the patients getting unnecessary treatments? How upset should they be? What about the student expelled for AI plagiarism because of a false reading? These things are unreliable, and no matter how many caveats you attach, there is no way to prevent people from over-relying on them. We might as well dunk people in water to see if they float.

That's a weird kind of extortion, a demand that we placate a subset of the population to the detriment of others. If a conflict came down to people who understand stats versus those blind to them, I would put my money on those who understand stats.


I don't see how that's any different from anything else: any tool, any power, any method. It's the same problem with everything. That's why this doesn't convince me; it just looks like removing things cynically instead of improving them. It also seems to me the company really doesn't want its service identified in a negative light like that, getting itself associated with cheaters, even though they're the ones selling the cheat detection, or something like that.


Firstly, this tool cannot be made better than it is; the limitation is intrinsic to how it is constructed. Secondly, as LLMs improve, as they are guaranteed to do, the tool can only get worse, because it becomes increasingly difficult to distinguish human-written from AI-written text.


I don't know about either of those. How is it intrinsic? What stops detection from improving just because the AI gets better? Assuming it doesn't become some sentient human replica, I mean, AI like this, where it's just a language model. Plus, that's an assumption about the future that you can track in the meantime, and it still doesn't justify "remove it because people are dumb and do bad stuff with the tool"; it would only justify removing it later, as the models actually do get better.


The algorithms are trained to minimize the difference between what they produce and what a human produces. The better the algorithm, the smaller the difference. The algorithms are at the point where there is very little difference, and it won't be long until there is no difference.
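
One way to make that concrete (a sketch; it uses the standard result that the best possible single-sample classifier between two distributions at equal priors achieves accuracy 0.5 + 0.5 × total variation distance, and the toy numbers are mine): as the model's output distribution converges on the human one, even a perfect detector converges on a coin flip.

    # Toy distributions over a tiny "vocabulary" of possible outputs.
    human = [0.50, 0.30, 0.20]

    def best_detector_accuracy(p, q):
        # Optimal accuracy at 50/50 priors = 0.5 + 0.5 * total variation distance.
        tv = 0.5 * sum(abs(a - b) for a, b in zip(p, q))
        return 0.5 + 0.5 * tv

    for model in ([0.80, 0.15, 0.05],   # crude model: easy to tell apart
                  [0.60, 0.25, 0.15],   # better model
                  [0.51, 0.30, 0.19]):  # nearly matches human text
        print(best_detector_accuracy(human, model))
    # 0.80, 0.60, 0.51 -- the better the generator, the closer the best
    # achievable detection accuracy gets to a 50% coin flip.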


I think it will become increasingly irrelevant what specific process generated a text. Even before generative AI, people did not generally inquire into how politicians' speeches were crafted, for example.


Indeed, or whether the math was done in your head, on a calculator, or by a computer. Math is math, and the agent that presents the result gets the credit and the blame.


cool beans. I didn't think about it like that. Could be.



