Onewildgamer's comments

Google AI mode constantly makes mistakes, so I go back to ChatGPT even though I don't like it.

We've come full circle: from renting/buying DVDs and Blu-rays, to online streaming, to renting content again.


This feels more like Windows Vista, which everybody wanted to turn off, ultimately switching back to Windows XP until Windows 7 launched. Liquid Glass confuses a lot of folks; it's good when done minimally, not all over the OS.


Java was thriving during the golden age of the Eclipse Foundation and its IDE. JetBrains is very much recent.


> JetBrains is very much recent.

JetBrains is 25 years old, almost as old as Java.


IntelliJ use wasn't that widespread until about 10-15 years ago. Java was thriving before that.


It was in heavy use at London investment banks in 2005. Even ReSharper was commonplace by the following year.


Around 10 years ago Eclipse was still the primary editor in the circles I was in.


Still is in my circles, and at home I have always been a NetBeans fan.

I have been an IDE guy since the Borland products for MS-DOS, yet I was never sold on IntelliJ anyway, and Android Studio made me dislike it even further.


I was using Eclipse back then, but indeed Wikipedia says IDEA 1.0 (Jan 2001) predates Eclipse IDE (Nov 2001).

NetBeans was bought by Sun in 1999 and open-sourced in June 2000.


I wonder if some of this can be solved by removing wrongly set-up context in the LLM. Or by getting a short summary, restructuring it, and feeding it again to a fresh LLM context.


I suspect that context can’t fully replace a mental model, because context is in-band: it arrives in the same channel as all other input the LLM receives. It’s all just one linear token sequence, taken in uniformly. There’s too little structure, and everything is equally subject to being discarded or distorted within the model. Even if parts of that token sequence remain unchanged (a “stable” context) when iterating over input, the input surrounding them can have arbitrary downstream effects within the model, making it more unreliable and unstable than a mental model is.
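The "in-band" point can be made concrete with a tiny sketch (all names here are hypothetical, and a toy whitespace tokenizer stands in for a real one): before inference, the system prompt, the "stable" context, and the new input are all flattened into one undifferentiated token sequence.

```python
# Sketch: why a "stable" context is still in-band.
# The model never receives the context as a separate channel; every part
# is concatenated and tokenized uniformly into a single sequence.

def build_model_input(system_prompt: str, stable_context: str, new_input: str) -> list[str]:
    """Concatenate all parts, then tokenize (toy whitespace tokenizer)."""
    flat = "\n".join([system_prompt, stable_context, new_input])
    return flat.split()  # nothing marks the context tokens as privileged

tokens = build_model_input(
    "You are a helpful assistant.",
    "Project uses Postgres 15.",
    "Why is my query slow?",
)
# The "stable" context tokens sit in the same flat list as everything else,
# so the surrounding input can still shift how they are interpreted.
```

Since there is no structural boundary in that flat list, only learned conventions, the same context tokens can be weighted differently depending on what surrounds them.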


Okay, I see now. I'm just shooting in the dark here: if there's an ability to generate the next best token based on the trained set of words, can that be taken a level up, to a meta level, to generate a generation, the way genetic programming does? Or is that what chain-of-thought reasoning models do?

Maybe I need to do more homework on LLMs in general.
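For intuition on the "next best token" loop being asked about, here is a toy sketch (all names hypothetical): a bigram frequency table stands in for the neural network, purely to show the shape of greedy next-token generation.

```python
# Toy sketch of next-token generation. A real LLM scores candidate tokens
# with a neural network; here a bigram count table plays that role.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count which token follows which in the "training" text.
bigrams: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start: str, steps: int) -> list[str]:
    out = [start]
    for _ in range(steps):
        followers = bigrams.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])  # greedy "next best token"
    return out

print(generate("the", 4))
```

Chain-of-thought models don't add a separate meta-level generator; they run this same loop while the model's own emitted reasoning tokens are fed back in as input, which conditions later tokens.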


So we're much closer to the per-year spend the US saw during the railroad construction era.

At this rate, I hope we get some useful, public, and reasonably priced infrastructure out of this spending in about 5-8 years, just like the railroads.


Technically, it's Earth's rotation that gives us day and night. It doesn't move the calendar, which advances through Earth's orbital revolution.


Exactly the GP's point (the rotation of the Earth "made their day").


I was wondering something similar: if OP had blogged it earlier, when he found Claude was using it, and re-posted it to HN/Reddit in a sensational way to capture eyes, maybe through one of the forums he could have gotten an introduction and a job doing what he loves.

OP still has a chance now: maybe not Anthropic, but other competitors can come knocking.


Claude Pro (Switched from Gemini Pro)

Cloudflare domain, compute, db

Apple 50GB Storage

Google One Premium Family (Storage only, not AI)

YouTube Premium

PS Plus

Cancelled:

Spotify (YouTube Music is better for my needs)

Google AI

Fantastical - I came full circle to Google Calendar

I've bought quite a few useful Mac and iOS apps with one-time payments. I'm interested in rsync.net, and maybe setting up self-hosting with my friends and family.


An interesting take, but only if the stakes are low when the decisions are wrong. I'm not confident having an LLM make decisions for a customer or for me. I'd rather have it suggest things to customers: suggested actions and some useful insights the user may have overlooked.


Can you imagine a bank taking this approach? Sorry, we didn't have enough time to build a true ledger, and now the AI says you have no money.


This is happening across lots of industries right now. Some cases are OK, like the car company that had to sell a car at the insanely low price its agent promised, but some are terrible, like UnitedHealthcare's "90% wrong when denying coverage" one, or the "who should be fired" prompt from DOGE.

