Hacker News | mpalmer's comments

Your question would seem to be self-answering.

> There really isn't anything special to using AI anyways it's not rocket science

Could have fooled me, the way some people manage to confuse themselves with it


I was already turned off by their decision to remove support for fzf, which I use everywhere else. I'm done.

I’m not sure what you mean here - we never supported fzf, other than a super early prototype in like 2021

This release actually adds support for nucleo, which uses the same matching algorithm as fzf and was a common request.


About the "AI": the announcement is very vague. Is this incorporating a local model on device, something running on your infrastructure, or a third-party model like Claude? Because nowadays, adding AI to anything usually means higher running costs, which sooner or later means enshittification.

Hey, thanks for responding. I guess I used the prototype then. Definitely don't remember anyone saying "this is a prototype" at the time, so I took the product at face value, and part of the reason I chose it was the fzf support.

I'm sure I recall some unhappy GitHub issues about the shift away...

And the algorithm isn't the value prop for me, not by a long shot. fzf's customizability takes the cake. And now the overall product is way too big and feature-ful for me. I want simple, unix-y software that clicks together like Lego.

You should be proud of the project's success for sure, it's just not for me!


As a citizen of one of these backsliders, in addition to voting in every election I'm allowed to, I am committing to taking additional civic action in the next few months to advocate for free and fair elections. I hope everyone in a similar position joins me in that.

In the United States, a significant portion of the population is all for that "backsliding". This isn't something imposed by an unconstitutional autocracy. This was something we voted for.

There is an election coming up, and it's possible that the administration will lose enough support to prevent future erosion of democracy. But even without their active interference, they will still have the support of many, many voters. Current forecasts are too close to call, e.g.:

https://www.270towin.com/2026-house-election/consensus-2026-...


    The most positive outcome I can think of is one where computers get really good at doing, and humans get really good at thinking
Steve Yegge said on some podcast recently that AI is going to have to come up with a more visual medium for communicating, because people don't want to read several paragraphs. He shared this uncritically, seemingly without judgement or disappointment. Yegge himself is a former Googler and by all accounts was an impressive person at one point, now best known as the person who vibe-birthed the inanity that is GasTown.

At work I'm seeing colleagues I once considered formidable completely turning off their brains and letting the bot drive, and wholly missing the mark on work quality. It's like a sickness, like COVID brain fog people don't even notice they have.

I see humans getting worse at reading, worse at writing, and worse at programming by themselves. It makes me angry and sad.

We are getting dumber, people, and I fully believe Altman and friends are lying when they say they want it otherwise.


Correct.

LLMs are a virus of the mind. People think: so what? I get my output and move on.

Yeah... no. You need that thinking capacity to protect yourself. Once that's gone en masse, what's left of the democratic system (not much) will completely collapse. Congrats on legally creating an environment that yields oligarchy.

Altman and his cronies yearn for a swath of people who cannot think for themselves.


Then surely "little bit depressing this is still how we do things" is equally unwelcome

You are certainly free to say that under the top-level comment with that quote, or email the mods about it. I'm not going to stop you.

For my part at least, I get the most riled up against the binary thinkers!

This. A lot of people on HN act as if you can either write all code manually (almost, since generators and snippets are allowed, because we're used to them) or vibe code the whole project through a WhatsApp conversation. As if there were nothing in between, and the same approach should work for all kinds of projects.

Personally, I use coding agents for the boring parts (I really don't enjoy adding the same piece of string to 20 different classes just to register a new component), and they work quite well. I'm going to keep using them for the foreseeable future, because they make coding much more enjoyable for me. On the other hand, I don't have an OpenClaw box burning billions of tokens weekly for me, because I usually don't have ideas that could be clearly specified.


I do not think "AI coding" - as distinct from the human who drives it - is gambling. More like a delayed footgun for the uneducated. I don't mean that disparagingly, but I do mean it literally.

    I’ve certainly been spending more time coding. But is it because it’s making me more efficient and smarter or is it because I’m just gambling on what I want to see? 
Is this really a difficult question to answer for oneself? If you can't tell if you're learning anything, or getting more confident describing what you want, I would suggest that you cannot be thinking that deeply about the code you're producing.

    Am I just pulling the lever until I reach jackpot?
And even then, will you know you've won?

At the very least, a gambler knows when they have hit jackpot. Here, you start off assuming you've won the jackpot every time, and maybe there'll be an unpleasant surprise down the line. Maybe that's still gambling, but it's pretty backwards.


Amen. "Growth" is literally product cancer

"Asking AI" is doing a lot of work there.

The people who pay for Kagi do so for very specific reasons, often because they know what "asking AI" really means for their privacy.

