This feels more like Windows Vista: everybody wanted to switch it off, but ultimately switched back to Windows XP until Windows 7 launched. Liquid Glass confuses a lot of folks; it works well when applied minimally, not splashed all over the OS.
I wonder if some of this can be solved by removing some wrongly set-up context in the LLM. Or by getting a short summary, restructuring it, and feeding it again to a fresh LLM context.
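The summarize-and-restart idea above can be sketched as a loop: once the context grows past some budget, roll the older turns up into a summary and seed a fresh context with it plus the most recent turns. A minimal sketch; the `summarize` function here is a crude stand-in for what would really be another LLM call:

```python
def summarize(turns):
    # Stand-in for an LLM summarization call: keep just the first
    # sentence of each old turn as a crude rolled-up "summary".
    return "SUMMARY: " + " | ".join(t.split(".")[0] for t in turns)

def compact_context(context, budget=4, keep_recent=2):
    """If the context exceeds `budget` turns, replace the older turns
    with a single summary and keep only the most recent ones."""
    if len(context) <= budget:
        return context
    old, recent = context[:-keep_recent], context[-keep_recent:]
    return [summarize(old)] + recent

# Simulate a growing conversation being periodically compacted.
context = []
for i in range(6):
    context.append(f"turn {i}. some detail about step {i}")
    context = compact_context(context)

print(len(context))   # stays bounded at the budget
print(context[0])     # the oldest turns are now a summary
```

Whether this helps is exactly the question the reply below raises: the summary is still just more in-band tokens, so the model has no structural reason to treat it as more stable than anything else in the window.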
I suspect that context can’t fully replace a mental model, because context is in-band, in the same band as all the other input the LLM receives. It’s all just a linear token sequence that is taken in uniformly. There’s too little structure, and everything is equally subject to being discarded or distorted within the model. Even if parts of that token sequence remain unchanged (a “stable” context) when iterating over input, the input surrounding them can have arbitrary downstream effects within the model, making context more unreliable and unstable than mental models are.
Okay, I see now. I'm just shooting in the dark here: if there's an ability to generate the next best token based on the trained set of words, can it be taken a level up, to a meta level, to generate a generation, like genetic programming does? Or is that what chain-of-thought reasoning models do?
Maybe I need to do more homework on LLMs in general.
So we're much closer to the per-year spend the US saw during the railroad construction era.
At this rate, I hope we get some useful, public, and reasonably priced infrastructure out of this spending in about 5-8 years, just like the railroads.
I was wondering something similar: if OP had blogged it earlier, when he found Claude was using it, and posted it to HN/Reddit in a sensational way to capture eyes, maybe through one of those forums he could have gotten an introduction and a job doing what he loves.
OP still has a chance now; maybe not Anthropic, but other competitors could come knocking.
Fantastical - I came full circle to Google Calendar
I've bought quite a few useful Mac and iOS apps with one-time payments. I'm interested in rsync.net, and maybe setting up self-hosting with my friends and family.
An interesting take, but only if the stakes are low when the decisions are wrong. I'm not confident having an LLM make decisions for a customer or for me. I'd rather have it suggest things to customers: suggested actions and useful insights the user may have overlooked.
This is happening across lots of industries right now. Some cases are OK, like the car company that had to sell a car at the insanely low price its agent promised, but some are terrible, like the UnitedHealthcare "90% wrong when denying coverage" one or the "who should be fired" prompt from DOGE.