
AI in its present form is probably the strangest and the most paradoxical tech ever invented.

These things are clearly useful once you know where they excel and where they will likely complicate things for you. Even then, there's a lot of trial and error involved, owing to the non-deterministic nature of these systems.

On the one hand it's impressive that I can spawn a task in Claude's app "what are my options for a flight from X to Y [+ a bunch of additional requirements]" while doing groceries, then receive a pretty good answer.

Isn't it magic? (If you forget about the necessity of adding "keep it short" all the time.) Pretty much a personal assistant, minus the ability to perform actions on my behalf, like booking tickets - a bit too early for that.

Then there's coding. My Copilot has helped me dive into a gigantic pre-existing project in an unfamiliar programming language pretty fast, and yet I have to correct and babysit it by intuition all the time. Did it save me time? Probably, but I'm not 100% sure!

The paradox is that there's probably no going back from AI where it already kind of works for us, individually or at the org level, and yet most of us don't seem to be fully satisfied with it.

The article here pretty much confirms the paradox of AI: yes, orgs implement it, can't go back from it and yet can't reduce the headcount either.

My prediction at the moment is that AI is indeed a bubble but we will probably go through a series of micro-bursts instead of one gigantic burst. AI is here to stay almost like a drug that we will be willing to pay for without seeing clear quantifiable benefits.



It’s a result of the lack of rigor in how it’s being used. Machine learning has been useful for years despite less than 100% accuracy, and the way you come to trust it is through measurement. Most people using or developing with AI today have punted on that because it’s hard or time-consuming. Even people who hold the title of machine learning engineer seem to have forgotten.

We will eventually reach a point where people are teaching each other how to perform evaluation. And then we’ll probably realize that it was being avoided because it’s expensive to even get to the point where you can take a measurement, and perhaps you didn’t want to know the answer.
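The measurement the comment describes can be as simple as scoring a system against a labeled evaluation set instead of trusting it blindly. A minimal sketch, where `model` is a hypothetical stand-in for whatever you're evaluating:

```python
def accuracy(model, eval_set):
    """Fraction of labeled (input, expected) pairs the model gets right."""
    correct = sum(1 for prompt, expected in eval_set if model(prompt) == expected)
    return correct / len(eval_set)

# Toy example: a "model" that uppercases its input, scored on three cases.
eval_set = [("abc", "ABC"), ("hi", "HI"), ("x", "y")]
print(accuracy(str.upper, eval_set))  # 2 of 3 correct
```

The expensive part, as the comment notes, isn't this loop; it's building a labeled `eval_set` that actually represents your use case.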


I feel like one benefit of humans is you can find someone you can truly trust under almost all circumstances and delegate to them.

With AI you have a thing you can't quite trust under any circumstance even if it's pretty good at everything.


Like the proverbial broken clock that shows the correct time twice a day, AI may "show the correct time" for 99% of prompts, but it doesn't deserve any more trust.


A hammer doesn't always work as desired either; it depends on your skill, plus some random failures. When it does work, though, you can see the result and be satisfied with it - congratulations, you saved some time by not using a rock for the same task.


I can trust a hammer will be a hammer, though.



