
LLMs are replacing Google for me when coding. When I want to get something implemented, say making a REST request in Java using a specific client library, I previously used Google to find examples of using that library.

Google has gotten worse (or the internet has more garbage), so finding a code example is more difficult than it used to be. Now I ask an LLM for an example. Sometimes I have to ask for a refinement, and usually something is broken in the example, but it takes less time to get the LLM-produced example working than it does to find a functional example using Google.
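A minimal sketch of the kind of example being described, using the JDK's built-in java.net.http.HttpClient (available since Java 11). The URL and class name here are placeholders for illustration, not anything from the comment itself:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestExample {
    // Build a GET request for a given URL with a JSON Accept header.
    static HttpRequest jsonGet(String url) {
        return HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Accept", "application/json")
                .GET()
                .build();
    }

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = jsonGet("https://api.example.com/items/1");

        // Send synchronously and read the response body as a String.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.statusCode() + " " + response.body());
    }
}
```

Any third-party client (OkHttp, Apache HttpClient, etc.) would follow the same shape: build a request, send it, inspect status and body.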

But the LLM has only replaced my previous Google usage: I didn't expect Google to develop my applications, and I don't expect that of LLMs either.



This has been my experience of successful usage as well. It's not writing code for me, but pulling together the equivalent of a Stack Overflow example plus some explanatory sentences I can follow up on. Not perfect, and I don't blindly copy-paste it, same as with Stack Overflow, but faster and more interactive. It's helpful for wayfinding, not for producing the end result.


I used the Kagi free trial when I was doing Advent of Code in a somewhat unfamiliar language (Swift) last year, as well as ChatGPT occasionally.

The LLM was obviously much faster and the information was much denser, but in my limited experiment it made up APIs quite literally about 20% of the time. I was very impressed with Kagi's results, though, and ended up signing up; I now use it as my primary search engine.


It is really a double-edged sword. Some APIs I would not have found myself. In some ways the AI works like my mind fuzzily associating memory fragments: there should be an option for this command to do X, because similar commands have that option and it would be possible to provide it. But in reality the library is less than perfectly engineered and the option is not there. The AI guesses that the option exists too. But I do not need a guess when I ask the AI; I need reliable facts. If the cost of an error is low, I still ask the AI, and if it fails it is back to RTFM. If the cost of failure is high, then everything that comes out of an LLM needs checking.


I did the Kagi trial in the fall of 2023 and tried to hobble along with the cheapest tier.

Then I got hooked by having a search engine that actually finds the stuff I need, and I've been a subscriber for a bit over a year now.

Wouldn't go back to Google lightly.


In order to use a library, I need (in my opinion) to be able to reason about the library's behavior based on a specification of its interface contract. The LLM may help with coming up with suitable code, but verifying that the application logic is correct with respect to the library's documented interface contract is still necessary. It's therefore still a requirement to read and understand the library's documentation. For example, in the case of a REST client, you need to understand how the possible failure modes of the HTTP protocol and the REST API are translated by the library.


I wonder how good Google could be if it had the charge-per-query model that these LLMs do. AI or not, dropping the ad incentive would be nice.



