Hacker News

How do you tolerate the sheer latency of running the "vast majority" of your web searches through an LLM?


(number of searches needed to find the right question) × (time per search) = total search time

With an LLM the number of searches is lower, so total search time drops even if each individual query is slower.
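The trade-off above can be sketched with a back-of-the-envelope calculation. All numbers here are made-up assumptions for illustration, not measurements:

```python
# Hypothetical comparison: several fast traditional searches vs. one
# slower LLM query. The counts and per-search times are illustrative
# assumptions, not benchmarks.

def total_search_time(num_searches: int, time_per_search: float) -> float:
    """Total time = number of searches x time per search."""
    return num_searches * time_per_search

# Traditional: say 5 refinement searches at ~10s each
# (read results, refine the query, repeat).
traditional = total_search_time(num_searches=5, time_per_search=10.0)

# LLM: say 1 query at ~30s (higher latency per query, fewer iterations).
llm = total_search_time(num_searches=1, time_per_search=30.0)

print(traditional, llm)  # 50.0 30.0
```

Under these assumed numbers the slower single query still wins; whether it does in practice depends entirely on how many refinement searches the LLM actually saves you.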


For me, ChatGPT is great when I don't really know what I don't know. But I still end up having to do a Google search afterward to verify that the AI result isn't insane, so for me ChatGPT is often just adding an extra step.


The LLM can read through the results quicker than you can and provide the information you were looking for.


Well, it provides something, at any rate. Whether or not it's the information you were looking for is very much a matter of luck.



