
Firstly, this tool cannot be made better than it is; the limitation is intrinsic to how it's constructed. Secondly, as LLMs improve, as they are guaranteed to do, this tool can only get worse, since it becomes increasingly difficult to distinguish between human-written and AI-written text.


I'm not sure about either of those claims. How is it intrinsic? What stops detection from improving just because the AI gets better? (Assuming it doesn't become some sentient human replica, I mean AI like this, where it's just a language model.) Besides, that's an argument about the future; in the meantime you can still track AI text, and "remove it because people are dumb and do bad stuff with tools" doesn't hold up. At most it would justify removing the tool later, once the models actually do get that good.


The algorithms are trained to minimize the difference between what they produce and what a human produces. The better the algorithm, the smaller the difference. They're already at the point where the difference is very small, and it won't be long until there is none.
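To make that concrete, here's a toy sketch (made-up three-token distributions, not real data): minimizing cross-entropy against human text is equivalent to minimizing the KL divergence between the human and model token distributions, and that divergence is also the per-token statistical signal any detector has to work with.

    import math

    # Toy numbers, purely illustrative: three "next token" distributions.
    human       = {"the": 0.50, "a": 0.30, "an": 0.20}   # human writers
    early_model = {"the": 0.70, "a": 0.20, "an": 0.10}   # undertrained LM
    late_model  = {"the": 0.51, "a": 0.29, "an": 0.20}   # better-trained LM

    def kl(p, q):
        # KL divergence D(p || q) in nats. Training that minimizes
        # cross-entropy on human text also minimizes this gap, and the gap
        # bounds the per-token evidence a detector can collect.
        return sum(p[tok] * math.log(p[tok] / q[tok]) for tok in p)

    print(f"early model gap: {kl(human, early_model):.4f} nats/token")  # ~0.0920
    print(f"late  model gap: {kl(human, late_model):.4f} nats/token")   # ~0.0003

Roughly, the number of tokens a detector needs before it can beat a coin flip scales like the inverse of that gap, so driving the per-token divergence toward zero drives the required sample length toward infinity.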



