I canceled my GPT-4 subscription recently because it just wasn't that impactful for me. I found myself using it less and less, especially for things that matter (because I can't trust the results). The things it's good at: boilerplate text, lightweight trivia, some remixing/brainstorming. Oh, and "write me a clone". Yes, it can write clones of things, because it's already seen them. I've spent WAY more time trying to get anything useful out of it for a non-clone project than it took me when I just buckled down and did it.
Yes, "many things are clones", but that just speaks to how uncreative we are all being. A 2048 clone, seriously? It was a mildly interesting game for about 3 minutes in 2014, and it only took the original author a weekend to build in the first place. Like how was that impactful that you were able to make another one yourself for $4?
> Like how was that impactful that you were able to make another one yourself for $4?
It's been my "concentration ritual", an equivalent of doodling, for a few years in the 2010s, so I have a soft spot for it. Tried getting back to it the other day, but all my usual web and Android versions had gone through full enshittification. So that $4 and a couple of hours bought me a 2048 version that's lightweight, works on my phone, and doesn't surveil or monetize me. Scratched my own itch.
Of course, that's on top of gaining a lot of experience using aider-chat, by setting myself a goal of making a small, feature-complete app in a language I'm only moderately good at (and an environment - the modern web - that I both hate and suck at), with the extra constraint of not being allowed to write even a single line of code myself. I.e. a thing too boring for me to do, but easy enough to evaluate.
And no, the clone aspect wasn't really that important in this project. I could've asked it for something unique, and I expect it would have worked more or less the same way. In fact, this is what I'm trying right now: I just added persistent state to the 2048 game (to work around Firefox Mobile aggressively unloading tabs you're not looking at, which incidentally makes PWAs mostly unusable), and now I have my perfect distraction completely done.
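For the curious, the persistence part is tiny. A minimal sketch of that kind of change, assuming a plain localStorage approach; the type, key, and variable names here are illustrative, not taken from the actual project:

```typescript
// Sketch of 2048 state persistence via localStorage.
// Names (GameState, STORAGE_KEY, currentState) are illustrative, not from the real project.
interface GameState {
  board: number[][];   // 4x4 grid of tile values, 0 for empty
  score: number;
  best: number;
}

const STORAGE_KEY = "game-2048-state";

function saveState(state: GameState): void {
  try {
    localStorage.setItem(STORAGE_KEY, JSON.stringify(state));
  } catch {
    // Storage may be full or unavailable (e.g. private browsing); ignore.
  }
}

function loadState(): GameState | null {
  const raw = localStorage.getItem(STORAGE_KEY);
  if (raw === null) return null;
  try {
    return JSON.parse(raw) as GameState;
  } catch {
    return null; // Corrupted or stale data: start a fresh game.
  }
}

// Restore on startup, or fall back to an empty board.
let currentState: GameState = loadState() ?? {
  board: Array.from({ length: 4 }, () => [0, 0, 0, 0]),
  score: 0,
  best: 0,
};

// Persist whenever the tab is hidden, so an unloaded tab resumes where it left off.
window.addEventListener("pagehide", () => saveState(currentState));
```

Saving on every move works too; hooking `pagehide` just covers the tab-unloading case with one listener.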
EDIT:
BTW, did I ever tell you about the best voice assistant ever made, which is Home Assistant's voice assistant integrated with GPT-4o? I have a near-Star Trek experience at my home right now, being able to operate climate control and creature comforts by talking completely casually to my watch.
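If anyone wants to poke at the text side of that pipeline without a watch or microphone, Home Assistant exposes its conversation agent over its REST API. A rough sketch - the endpoint and payload fields are from my memory of the HA docs and may differ, and the URL and token are placeholders:

```typescript
// Rough sketch: send a free-form command to Home Assistant's conversation agent.
// URL, token, and payload fields are assumptions; check the Home Assistant REST API docs.
const HA_URL = "http://homeassistant.local:8123";
const HA_TOKEN = "<long-lived-access-token>";

async function tellAssistant(text: string): Promise<unknown> {
  const res = await fetch(`${HA_URL}/api/conversation/process`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${HA_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ text, language: "en" }),
  });
  if (!res.ok) throw new Error(`HA returned ${res.status}`);
  return res.json(); // Contains the agent's text/speech response.
}

// Casual phrasing gets routed to whatever conversation agent you've configured (GPT-4o in my case).
tellAssistant("it's a bit chilly in the office, warm it up a little")
  .then(console.log)
  .catch(console.error);
```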
Try asking it something actually technologically hard or novel and see what answers you get.
In my experience, it repeatedly bails out with "this is hard and requires a lot of careful planning" regardless of how much I try to "convince" the model to live the life of a distributed systems engineering expert. Sure, it spits out some sample/toy code... that often doesn't compile or has obvious flaws in it.
Very few people are working on technologically hard or novel things. Those people have always had a very special effect on society, and will continue to be special going forward - LLMs aren't going to prevent those people from delivering real value. HN has an absurdly rich concentration of these special people, which is why I like it. And many of them are surrounded by other similarly special people in real life. I only regularly talk with maybe 1-2 people in real life who are even close to that type of special. Even when I was a chemical/electrical/petroleum engineer working closely with other engineers - usually only 1-2 people at each workplace were doing really smart work.
That said, the majority of my friends are doing relatively manual work (technician, restaurants, event gigs, sex work) and are neither threatened by LLMs nor find much use for them.
Even though I do think that almost any profession can potentially find use for LLMs in some shape or form. My opinion is that LLMs can increase productivity and be a net positive, the way the Internet/search engines are, if used correctly.
To expand on my original comment: All that being said, I think the hype/media cycle overestimates the magnitude of the potential positive LLM effect. You’ll see numbers like 5x, 10x, 100x increase in productivity thrown around. If I have to bet, I would say the likely increase is going to be in the 1x-1.5x range but not much greater.
Most things in the world are not infinitely exponential, even if they initially seem to be.
Appreciated! But if we fret about a few downvotes, we're using the forum wrong. Some unpopular views need to be discussed - either because they hold some valuable truth that people are ignorant of, or because discussing them can shine a light on why the unpopular views are misguided.

I suspect the downvotes are related to "Very few people are working on technologically hard or novel things" -- many HN users have been surrounded since elementary school by tons of people who currently work on hard or novel problems, so they understandably think that >5-10% of people do that, when in fact it's closer to maybe 1-in-200. I've been part of social groups whose members went to high schools with absurd numbers of Rhodes Scholars, and peer groups where everyone can trivially get through medical school with top marks, land faculty positions at top-3 universities, and found incredible startups through sheer technical competence -- and who still all think they're stupid, because they compare themselves to the true 1-in-a-million geniuses they grew up with, whose research is so advanced it's far beyond their most remote chance of ever gaining even surface-level comprehension of it. Their extended social group likely comprises >1% of all Americans working on "hard or novel problems", but since 75% of them are doing it, they have no idea that the real base rate is closer to 1-in-200, generously. They grossly underestimate their relative intelligence vs. the median and grossly overestimate the ability of average people (and explain away differences in outcome as personality issues like "laziness").
There are a surprising number of people from these peer groups on HN. These are the people who will never be threatened by LLMs -- they are capable of adapting to use any new tools and transcending any future paradigm, save war/disease/famine.
> To expand on my original comment: All that being said, I think the hype/media cycle overestimates the magnitude of the potential positive LLM effect. You’ll see numbers like 5x, 10x, 100x increase in productivity thrown around. If I have to bet, I would say the likely increase is going to be in the 1x-1.5x range but not much greater. Most things in the world are not infinitely exponential, even if they initially seem to be.
Yours is a very reasonable take that I wouldn't argue against. I also think it's reasonable that some people think it will be 5x-100x -- for the work some individuals are familiar with it very well might be already, or they might be more bullish on future advances in reinforcement learning / goal-seeking / "search" (iterative re-search to yield deep solutions).
> Even though I do think that almost any profession can potentially find use for LLMs in some shape or form.
I reactively feel this is stretching it for people who travel around just to load/unload boxes of equipment at events/concerts/etc. But the way you worded this is definitely not wrong - even manual laborers may find LLMs useful for determining whether they, their peers, and their bosses are following proper safety/health/HR regulations. More obviously, sex workers will absolutely be using LLMs to screen potential customers for best mutual fit and maintain engagement (as with lawyers who own small practices, a large number of non-billable hours go towards client acquisition, as well as retention). LLMs are not "there" yet for transparent personalized client engagement which maintains the personality of the provider, but likely will be soon with some clever UX and RAG.
> I reactively feel this is stretching it for people who travel around just to load/unload boxes of equipment at events/concerts/etc. But the way you worded this is definitely not wrong - even manual laborers may find LLMs useful for determining whether they, their peers, and their bosses are following proper safety/health/HR regulations.
It's more than that once the LLMs go multimodal. A model (or an ensemble) that can hear you talking and see what you're showing it suddenly becomes very useful even for manual labor. Instructions, inspections, situational awareness, to name a few cases; this is a thoroughly unexplored space so far.
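To make that concrete: you can already hand a model a photo plus a question in one request. A hedged sketch against OpenAI's chat completions endpoint - the model name and message shape are as I recall the API, and the image URL and question are placeholders:

```typescript
// Sketch of a multimodal "look at this and tell me" request.
// Model name and message shape follow OpenAI's chat completions API as I recall it;
// treat the details as assumptions and the image URL as a placeholder.
const OPENAI_KEY = "<api-key>";

async function inspectPhoto(imageUrl: string, question: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${OPENAI_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o",
      messages: [
        {
          role: "user",
          content: [
            { type: "text", text: question },
            { type: "image_url", image_url: { url: imageUrl } },
          ],
        },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

// e.g. a rigger sanity-checking a setup before a lift:
inspectPhoto(
  "https://example.com/rigging.jpg",
  "Does this shackle and sling setup look right for a 500 kg lift? What would an inspector flag?"
).then(console.log);
```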
Yes, "many things are clones", but that just speaks to how uncreative we are all being. A 2048 clone, seriously? It was a mildly interesting game for about 3 minutes in 2014, and it only took the original author a weekend to build in the first place. Like how was that impactful that you were able to make another one yourself for $4?