The vast majority of humans across the planet aren’t making their money with their computer, which was the qualifier in the first line of my comment.
Furthermore, even if they did, the vast majority of them still wouldn't be using their own computer to generate revenue; they'd be using an employer-provided one, and the things I'm talking about have nothing to do with them.
They still aren't learning. You're learning and then telling them to incorporate your learnings. They aren't able to remember this, so you need to remind them each day.
That sounds a lot like '50 First Dates' but for programming.
Yes, this is something people using LLMs for coding probably pick up on the first day. They're not "learning" the way humans do, obviously. Instead, the process is that you figure out what was missing from the first message you sent where they got something wrong, change it, and then restart from the beginning. The "learning" is you keeping track of what you need to include in the context; how exactly that process works is up to you. For some it's very automatic and you don't add/remove things yourself; for others it's a text file you keep around and copy-paste into a chat UI, like the sketch below.
This is what people mean when they say you can kind of do "learning" (not literally) with LLMs.
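A minimal sketch of that text-file workflow, using only the Python standard library; the file name and helper functions are hypothetical, just to show that the "learnings" live outside the model and get re-sent with every fresh chat:

    from pathlib import Path

    # Hypothetical notes file: your running list of corrections and conventions.
    NOTES = Path("llm_project_notes.md")

    def remember(lesson: str) -> None:
        """Append a new 'learning' so the next restarted chat includes it."""
        with NOTES.open("a") as f:
            f.write(f"- {lesson}\n")

    def build_prompt(task: str) -> str:
        """Prepend the persistent notes so a brand-new chat starts with
        everything earlier chats got wrong."""
        notes = NOTES.read_text() if NOTES.exists() else ""
        return f"Project conventions and past corrections:\n{notes}\nTask:\n{task}"

    remember("Use pytest, not unittest, for new tests.")
    print(build_prompt("Add a test for the date parser."))  # paste this into the chat UI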
While I hate anthropomorphizing agents, there is an important practical difference between a human with no memory, and an agent with no memory but the ability to ingest hundreds of pages of documentation nearly instantly.
The outcome is definitely not the same, and you need to remind them all the time. Even if you feed the context automatically, they will happily "forget" it from time to time. And you need to update that automated context again, and again, and again, as the project evolves.
I believe LLMs ultimately cannot learn new ideas from their input the way they learn from their training data, since the input doesn't affect the weights of the neural network layers.
For example, let's say LLMs had no examples of chess gameplay in their training data. Would one be able to have an LLM play chess by listing the rules and example games in the context? Perhaps, to some extent, but I believe it would be much worse than if it were part of the training (which of course isn't great either).
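To make the frozen-weights point concrete, here is a toy PyTorch sketch (a single Linear layer standing in for an LLM, purely illustrative, not anyone's actual setup): a forward pass over "context" data leaves the weights untouched, while one training step on the same data changes them.

    import torch
    import torch.nn as nn

    model = nn.Linear(8, 8)                 # stand-in for an LLM's layers
    before = model.weight.detach().clone()
    data = torch.randn(4, 8)                # think: chess rules pasted into the prompt

    # Inference / in-context use: forward pass only, weights stay exactly as they were.
    with torch.no_grad():
        _ = model(data)
    assert torch.equal(model.weight, before)

    # Training on the same data: a single gradient step does update the weights.
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    opt.zero_grad()
    model(data).pow(2).mean().backward()
    opt.step()
    assert not torch.equal(model.weight, before)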
Agreed. The feature set is in desperate need of a search option, both on approved photos and when attempting to approve additional photos. Very often I have to go into the Photos app, find the photo, make a mental note of approximately where it sits in the history, and then scroll, scroll, scroll. Obnoxious and cumbersome.
What I really want is to create a special photo album for (Facebook/Instagram/Slack/etc.) and have it automatically gain access to whatever photos I put in there.
Depending on salary, two orders of magnitude on $5k is $500k.
That amount of money for the vast majority of humans across the planet is unfathomable.
No one is worried about if the top 5% can afford DRAM. Literally zero people.