It’s the best demoable software out there today, the kind that lets customers’ minds run wild with the possibilities.
But then the nuances of human abilities in commercial settings, like call centers, become realities and obstacles that the software can’t surmount. Costs rise, productivity not so much, and then legal gets involved and it’s even more costly.
This is the best description of the current state of affairs. But who is going to risk not investing in what could be another possible major breakthrough in 3 to 5 years? Whether from a GPT-5 or from another source? It's not like all the brainpower that recently left OpenAI suddenly converted to farming...
Plus there's the reality that all of this mediocre capability requires VAST inputs of energy that are totally unsustainable.
I've said it here before and will again, AI is just the pivot from crypto that props up NVIDIA prices and lets tech bros speculate on a market that simply will never materialize.
As somebody who recently used Claude Opus to help quickly and affordably solve complex problems in 3D design work (OpenSCAD) while designing a marketable product, I have my doubts about your analogy. Prior to the availability of affordable and accessible AI, I could have taken some Bitcoin and paid somebody smarter than me to help me with this work, but I guarantee it would have been less economically viable, both in terms of financial cost to me and in terms of energy consumption costs.
If you need it to do small standard tasks that normally require an expert, then these AIs can often solve them for you.
If, however, you need an expert for a longer-term project, then these AIs cannot do it for you. They do replace some jobs, as you say, but most people don't work on short-term project consultation, so overall it doesn't matter much.
> and in terms of energy consumption costs.
Did you include the cost of training the model? I'm pretty sure the total investment that goes into AI today far outspends the cost of the small consultation gigs that AI can currently replace. Customer support is the biggest moneymaker so far, and even that is hard to do right, with customers suing over lies from the AI bots.
As expected, the replies to this comment are about the alleged speculated or anecdotal utility of "crypto" or "AI" to solve problems. However the replies fail to address the substance of the comment: money, the personal enrichment of a selected group of individuals. The motivation behind these "innovations" is money, not problem-solving. As the comment indicates, "AI" actually _creates_ (not solves) _real_ (not speculative) problems for which its proponents have no solutions. Both "crypto" and "AI" seem to evade cost versus benefit analysis because the costs are real while benefits are pure speculation. As their proponents enrich themselves on what amount to pre-payments for something that may never be delivered, the public is expected to believe that the benefits will arrive "real soon" and will convincingly exceed the enormous costs society is now paying and will continue to pay.
AI is legit, but you are right that it's currently too expensive and being propped up by investments paid for in compute from cloud providers. We're making rapid progress in bringing down the compute requirements of high-end models, though, and eventually we'll get over this need to make supermodels and start making distilled models for specific tasks, which will provide the benefits without the drawbacks of all-knowing models.
Maybe AI is "losing steam" but this is an opinion column masquerading as news, with one or two quotes (from e.g. noted AI skeptic Gary Marcus) or anecdotes supporting each section.
It would be equally possible to collect a series of similar but opposite data points to assert that AI is in fact gaining steam.
Of course it's an opinion column. But how is it masquerading as news? All opinion columns have a central thesis that is supported by facts. This is a thesis about the current state and future of AI. Like any opinion piece, the author chooses facts that support the thesis.
Sure. I just think one should interrogate and really understand the data points being used to support this claim. Let's see how they look when presented as bullet points:
- Nvidia's Revenue and AI Spending: Sequoia says "the industry spent $50 billion on chips from Nvidia to train AI in 2023, but brought in only $3 billion in revenue." - This comes from some Sequoia presentation which appears to have been originally cited in an earlier WSJ article and then repeated everywhere. It would be nice to see that presentation and the context of this data within it. And yes, this nascent industry, in essentially its first year of commercialization, brought in less than was invested in anticipation of future growth.
- Synthetic Data for Training: "To train next generation AIs, engineers are turning to 'synthetic data,' which is data generated by other AIs. That approach didn’t work to create better self-driving technology for vehicles, and there is plenty of evidence it will be no better for large language models," says Gary Marcus, a cognitive scientist. Aka Gary Marcus, a noted AI skeptic.
- Incremental Gains in AI Models: "AIs like ChatGPT rapidly got better in their early days, but what we’ve seen in the past 14-and-a-half months are only incremental gains," says Marcus. "The truth is, the core capabilities of these systems have either reached a plateau, or at least have slowed down in their improvement." Aka Gary Marcus, a noted AI skeptic.
- Convergence in AI Model Performance: "Further evidence of the slowdown in improvement of AIs can be found in research showing that the gaps between the performance of various AI models are closing. All of the best proprietary AI models are converging on about the same scores on tests of their abilities, and even free, open-source models, like those from Meta and Mistral, are catching up." No citation provided for this "research".
- Commoditization: "A mature technology is one where everyone knows how to build it. Absent profound breakthroughs—which become exceedingly rare—no one has an edge in performance." A broad generalization.
- AI Startups Facing Turmoil: "Some AI startups have already run into turmoil, including Inflection AI—its co-founder and other employees decamped for Microsoft in March. The CEO of Stability AI, which built the popular image-generation AI tool Stable Diffusion, left abruptly in March. Many other AI startups, even well-funded ones, are apparently in talks to sell themselves." People at a couple of start-ups are moving around. Unsourced general claim that unnamed AI startups are looking to sell themselves (is this actually bad news?)
- High Operational Costs: "The bottom line is that for a popular service that relies on generative AI, the costs of running it far exceed the already eye-watering cost of training it... analysts believe delivering AI answers on those searches will eat into the company’s margins." Unsourced "analysts". Would be interesting to see the context of this discussion but also it is not unusual for investment in a new wave of growth to eat into margins initially
- Survey Data on AI Use: "A recent survey conducted by Microsoft and LinkedIn found that three in four white-collar workers now use AI at work. Another survey, from corporate expense-management and tracking company Ramp, shows about a third of companies pay for at least one AI tool, up from 21% a year ago.
This suggests there is a massive gulf between the number of workers who are just playing with AI, and the subset who rely on it and pay for it." Two cherry-picked surveys conducted for marketing purposes jammed together to make an unrelated claim.
- Limited Revenue Growth: "OpenAI doesn’t disclose its annual revenue, but the Financial Times reported in December that it was at least $2 billion, and that the company thought it could double that amount by 2025.
That is still a far cry from the revenue needed to justify OpenAI’s now nearly $90 billion valuation." It is completely normal for the leading-edge company showing massive growth in a nascent field to have a huge valuation. It doesn't always work out well for that company, but this is expected whether the company is ultimately a success or not, and the ability to tap that valuation improves the likelihood of success.
- Productivity and Job Replacement: "Evidence suggests AI isn’t nearly the productivity booster it has been touted as, says Peter Cappelli, a professor of management at the University of Pennsylvania’s Wharton School. While these systems can help some people do their jobs, they can’t actually replace them." Non-specific "evidence" is cited here.
- Challenges in AI Usage: "AIs still make up fake information, which means they require someone knowledgeable to use them. Also, getting the most out of open-ended chatbots isn’t intuitive, and workers will need significant training and time to adjust." Author assertion
- Historical Patterns in Technology Adoption: "Changing people’s mindsets and habits will be among the biggest barriers to swift adoption of AI. That is a remarkably consistent pattern across the rollout of all new technologies." Author assertion
Technology revolution means different things to different people:
- technological change, consumer benefit
- investing opportunity.
If you compare it to the internet revolution, consumer benefit increased steadily, but as a general investment category it was a total bust. During the internet revolution, the top 5 internet companies were Yahoo.com, AOL.com, Geocities.com, MSN.com, and Lycos.com. It took Amazon stock almost 10 years to recover to its 1999 average price, and longer to become a good investment. Cisco was the Nvidia of the late 90s.
How about the personal computer revolution? Tandy, Commodore, Atari, Apple, Acorn, Sinclair, IBM, ...
I can do basic back-of-the-envelope math as a retail investor. I pick investment forecasts for the global AI impact from a16z, Goldman Sachs, and Nvidia for the next 5 to 10 years. Then I do basic cash flow discounting.
How the fuck do these valuations make sense?
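For anyone curious what that exercise looks like, here is a minimal cash-flow-discounting sketch in Python. Every number in it (starting cash flow, growth rate, discount rate, horizon) is a placeholder of my own, not a figure from a16z, Goldman Sachs, or Nvidia.

    # Minimal DCF sketch; all figures below are illustrative placeholders.
    def npv(cash_flows, discount_rate):
        """Net present value of cash_flows[t] received at the end of year t+1."""
        return sum(cf / (1 + discount_rate) ** (t + 1) for t, cf in enumerate(cash_flows))

    # Assume $10B of AI-attributable free cash flow in year 1, growing 30% per
    # year for 10 years, discounted at 10%.
    projected = [10e9 * 1.30 ** t for t in range(10)]
    print(f"NPV of the 10-year stream: ${npv(projected, 0.10) / 1e9:.0f}B")

Swap in whichever forecast and discount rate you believe, then compare the resulting NPV to today's market caps; that is the whole back-of-the-envelope exercise.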
At some point in the next 2-4 years there could be a crash and a "dark fiber" equivalent for compute. Tensor compute prices crater and become dominated by energy prices. That's when the high-impact innovations emerge to exploit all that available compute.
Finance world looks at the consumer benefit as an interesting feature of the investment opportunity, while the technology world looks at the investment opportunity as an interesting feature of the technology change and consumer benefit.
The two communities will talk right past each other like they’re speaking different languages.
This article deals with the ginormous models being trained and run in places like Meta or OpenAI, but the real AI revolution is a mix of things, and we've only barely scraped the surface of what we could do with this technology.
I say this not because the next ChatGPT will be incredible (which it might be) but because it's actually the open source community which amazes me with what has already been built, and runs locally on consumer hardware.
Criticism of the massive cloud models, which are expensive to run, is not wrong, for now. But it also used to take an entire room with thousands and thousands of costly vacuum tubes to calculate 2+2.
My point is that hardware will get better, more efficient, and more suitable specifically for AI, just as software will improve and optimize.
I bet we'll be able to run bigger and bigger models at lower cost in the long term. But even if running SOTA models remains a premium, local models still provide incredible value, and that's where a lot of the revolution will come from (and already does).
I think you're overestimating what the open source community has done.
The core value of GenAI (the models) that is currently open sourced has ONLY been contributed by large companies.
What the open source *community* has contributed is frontends and implementations of research papers. It is hugely important of course, and not to be ignored, but GenAI simply doesn't exist at all without large companies.
Not true, open source is also doing a lot of fine-tunes/post-training on Llama 3, and there's OpenChat as well, so if Llama didn't exist we'd still have something to work with.
Not exactly. The "web uis" are actually products built around these models. There are some truly outstanding examples like comfy, automatic1111 and ollama. But what's been even more impressive to me is how the open source community use these models to create new workflows like texturing in blender. There is just so much to discover yet.
When my company makes 10% more profit from my work, my pay should increase 10% for it to be fair. This of course does not happen, which is my point. When the company makes 10% less, though, pay cuts are not unusual.
Why should it work that way though? Initial hiring salaries are generally based largely on the market rate for talent, not the expected profit created by the employee. If they are expected to share 10% of all profits with existing employees, wouldn't that create incentives for them to fire existing employees and rehire at market rates, effectively avoiding the profit-sharing scheme?
In theory it does sound great for the employee, I wouldn't argue with that at all. But unless profit sharing is part of the compensation agreement up front it seems unreasonable to expect.
That said, some companies absolutely do offer bonuses tied to profit targets. They're one-time bonuses so employee pay isn't adjusted up or down every single year to follow profits, they just get some cash incentive to share some of the profits with employees.
Basically, they have a much looser attitude towards IP. This means that you might not be able to make ALL THE MONEY when you invent something (or, when one of your underpaid employees invents something), but society as a whole benefits.
The system balances itself: if you make it too unprofitable to invent things, then nobody will do it, but it stops the avalanche effect where only massive companies can make money because they have disproportionately many resources to spend on both R&D and IP defense.
The worst enemy of AI is the current industry climate. Tech has become a speculation-driven sector.
15 to 20 years ago, what we now call "AI" would've been the logical follow-up to automation technologies like Apple Automator, IFTTT, and node-based visual interfaces. A solid and slightly groundbreaking advancement in how we use our devices. It would've been a cool 15-minute demo at a WWDC, and then we would've moved on with our lives, while using it on common, mundane, real-world problems.
But today every technological progression has to be an earth-shattering revolution, and every company needs to become the next multi-billion-dollar unicorn carried by pseudo-messianic leadership from the likes of Musk, Altman, Holmes or Bankman-Fried. Otherwise you can completely forget that sweet, sweet funding money.
> My photo editor tool now has a button to automatically remove backgrounds, a cool trick. It's at least more useful than crypto.
Okay but this has nothing to do with the "AI revolution" the article talks about. This is not related to GenAI at all and is something that has existed for close to a decade.
> That's the problem with AI: you can't trust the output.
Look at how quickly after diffusion models released that "looks like AI" became an insult.
An LLM that accelerates the work of a domain expert makes sense to me. A team of post-doc infectious disease researchers using an LLM continuously trained on up-to-date medical research papers and prompted as a sort of digital Watson to their collective Sherlock works because all humans involved are highly trained to filter out the nonsense, the hallucinations, the bullshit. And the LLM doesn't have to be crafted to be obsequious and PR friendly.
But there aren't enough high-level domain experts like that to build a unicorn on. So instead, everyone decided to take these LLMs and push them on the general public. And you might think I'm going to say the general public isn't capable of learning the domain enough to separate the wheat from the chaff, but it's more that they're not willing to. And general-purpose startups are building user interfaces where they either expect them to (best-case scenario) or pretend they won't have to (worst-case and much more common scenario).
Unless that UX problem is solved, there are going to be some very unhappy consumers going forward.
You make an important point that is fortunately becoming common knowledge: interactive use of LLMs requires a domain expert who evaluates results to verify and modify as appropriate. I am an old man so this may not be typical: I feel like I get more coding and research done using GitHub CoPilot and integrated search products like Perplexity, but I get tired more quickly because I feel like I am working harder.
Perhaps a less known truth: it is difficult building applications that use LLMs in complex ways without a human in the inner loop.
Really? We have absolutely great GenAI models for image generation that allow you to make any image you want in any quality you want, and despite this the only use case so far has been porn.
I can't think of any reason to believe that video generation will turn out differently.
To me it seems that AI is not even priced into the market cap of the big tech companies yet.
Since GPT-3 came out in mid 2020:
Alphabet's market cap doubled. But their earnings more than doubled.
Same for Microsoft.
In other words: Their p/e ratio did not increase. The future potential of AI is not yet priced in.
The cost of all human labor is something like $70 trillion per year. Automating 1% of that per year would increase the revenue of tech companies by $700B per year. Let Microsoft capture 10% of that at a 20% margin, and that's an increase in profit of $14B per year. Assign a p/e of 30 to that, and it's $420B of additional market cap per year. That would be about 15% growth in market cap per year for Microsoft.
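The same arithmetic written out as a tiny script, so the assumptions are explicit (all figures are the rough guesses above, not sourced data):

    # Back-of-the-envelope version of the argument above.
    global_labor_cost = 70e12        # ~$70 trillion of human labor per year
    automated_share_per_year = 0.01  # automate 1% of that per year
    msft_capture = 0.10              # Microsoft captures 10% of the new revenue
    margin = 0.20                    # at a 20% profit margin
    pe_ratio = 30                    # assign a p/e of 30 to the added profit

    added_revenue = global_labor_cost * automated_share_per_year * msft_capture
    added_profit = added_revenue * margin
    added_market_cap = added_profit * pe_ratio

    print(f"Added revenue:    ${added_revenue / 1e9:.0f}B per year")    # ~$70B
    print(f"Added profit:     ${added_profit / 1e9:.0f}B per year")     # ~$14B
    print(f"Added market cap: ${added_market_cap / 1e9:.0f}B per year") # ~$420B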
Aside from some nitpicky numeric issues, you're considering the benefit without the cost. If we ever achieve real AI, the sort that could realistically start replacing labor pools with high efficiency, then you'd destroy your own economy. The people that work for you (in a broad enough picture) are the exact same people buying your stuff. Kill your labor pool, kill your demand, kill your economy. Then unemployed people get angry, start torching AI systems, and electing what will be called 'luddites.'
You have to take out the post-pandemic rebound effect before you can talk about the AI effect. From an earnings perspective, AI has not brought in a significant amount for the big techs.
If a company figured out production ready automation, boots on the ground actually being replaced with prompts on the ground, they would probably trade at a P/E well over 100.
Chamath was talking about this near the end of the All-In Podcast released yesterday. He was talking about development and energy cost vs. revenue and, even more importantly, profit.
My gut feeling is that small large language models and ever decreasing model size and computational resource use on edge devices will lead to great outcomes in a few years, but that is only my hunch.
Right now, I get the most use out of pipelined systems like Perplexity that use an LLM to understand what you want and produce search terms, do the search and set up a temporary RAG type system with search results, identify the most useful retrieved text, and then feed this into another LLM that formats and presents a report that contains links back to source material. I find this very useful, and incidentally makes the Internet “fun” again.
Also right now, my favorite play activity is writing a lot of code (Python, Racket, Common Lisp) to do small tasks using APIs fronting LLMs (mostly running locally with Ollama).
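For concreteness, a minimal sketch of that kind of Perplexity-style pipeline against a local Ollama server; the model name is an assumption and the search step is a stub you would replace with whatever search API you actually use.

    # Sketch: question -> search terms -> retrieval -> grounded report, via local Ollama.
    import json
    import urllib.request

    OLLAMA_URL = "http://localhost:11434/api/generate"
    MODEL = "llama3"  # assumed local model name

    def ask(prompt: str) -> str:
        """Send a single non-streaming prompt to the local Ollama server."""
        payload = json.dumps({"model": MODEL, "prompt": prompt, "stream": False}).encode()
        req = urllib.request.Request(OLLAMA_URL, data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    def web_search(query: str) -> list:
        """Placeholder: return [{'url': ..., 'snippet': ...}, ...] from a real search API."""
        return []

    def answer(question: str) -> str:
        # 1. Ask the model to turn the question into search terms.
        terms = ask(f"Give three short web search queries for: {question}")
        # 2. Retrieve snippets for those terms (stubbed here).
        snippets = [hit for term in terms.splitlines() if term.strip()
                    for hit in web_search(term)]
        context = "\n".join(f"{hit['url']}: {hit['snippet']}" for hit in snippets)
        # 3. Ask the model to write a report grounded in the snippets, with links.
        return ask(f"Using only these sources:\n{context}\n\n"
                   f"Answer the question and cite the URLs you used.\n{question}")

    if __name__ == "__main__":
        print(answer("What is retrieval-augmented generation?"))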
Disregard the occasional atmosphere journalism piece.
Even with only current tech (GPT-4o and less) we still haven't exploited 0.01% of the possibilities to create value. People are still learning how to use it and how to identify and execute on opportunities.
There’s an AI backlash happening now, in the enterprise in particular. Many vendors have promised big results and are not delivering, especially in automation: promising to automate 50% of your contact center with AI, 50% of your interactions with AI, etc. No one is hitting it. The genie isn’t going back in the bottle, and customers are committed to the goal. But vendors have way overpromised across the board.
> Has been shocking to see YC double down on LLMs so much in the last two batches.
They also doubled down on blockchain sufficiently late in the game that it was obvious to most that there was zero value in all of it.
Either they’re really getting high on their own supply, or there’s some kind of model where you hedge against missing out on a unicorn, even if all indicators point to the fact it’s vaporware.
To me I see a lot of similarities to cloud computing early on (broadly defined). There were a lot of twists and turns in the road and a lot of things didn't turn out as many predicted--and it was, naturally, much messier than earlier visions. But companies that decided to sit back and see how things played out mostly paid for it, while the winners--so far--have mostly been companies that made bets and course-corrected.
I worked in Enterprise Cloud teams during those early days. It was instantly loved because it allowed development teams from across the company to bypass central IT and provision their own infrastructure. And because it was such high quality (e.g., secure, highly available), it could be trusted for production workloads.
There is no comparison at all to AI which is fundamentally untrustworthy and may remain so for a while/ever.
It's more the fact that many of those startups are seeing traction because everyone is interested in seeing what AI can offer. If there is a reckoning and we see the rug pulled then it would be far worse than if your startup never had any traction to begin with.
I'm fairly certain it's being extensively used simply because the opposite seems basically impossible to imagine. You have these bots capable of endlessly writing semi-believable comments that can dynamically adjust to context, to spin whatever you want to your desired narrative.
It's unimaginable that the military and government aren't already extensively using these systems. There's no laws against it (not that that seems to even matter that much anymore), there's no real cost if caught (as e.g. has happened with "influencers" being asked to make political points by politicians while also not disclosing that), and so on. It's basically near zero risk, low cost, high perceived reward, and completely doable. It's happening.
This could easily explain why social media, sites such as YouTube, and others increasingly often seem to follow cues from the political establishment more than general public sentiment as reflected in polls, "real life" actions, etc. Of course there could also be biases driving such things, but those biases did not seem to exist, certainly not to the same degree, in the times before LLMs.
Are you sure it isn't? Can you categorically say that (for instance) the current anti/pro-palestinian discourse is not fueled by armies of AI bots? How about Russia/Ukraine? If they were easy to spot then they wouldn't be good bots, would they?
Militaries move really slow when it comes to adopting entirely new tech, at least in open warfare.
We likely won't see much from them until we suddenly have fighter jets flown by AI on the battlefield and autonomous systems handling at least a portion of the weapons-targeting process.
I'm fairly certain he's referring to war propaganda, not bot driven war systems. A bot that drops a bomb on its own people 1% of the time is pretty useless. A propaganda bot that says viable nonsense 1% of the time is awesome because it just adds even more plausible deniability.
I seem to recall reading articles like this in 1997 about the internet.
“What’s there to do really?” they asked. Whatever annoyance somebody had with the state of the early web was magnified into a portrait of decline.
It was only for nerds. Or maybe too shallow with ugly amateur content. Or too commercialized already. Or maybe it was never going to be a successful platform for business because nobody is crazy enough to put their credit card number in an online form. Etc.
These contradictory complaints were all present back in the day, but in retrospect there was so much room for both technical and social growth that they just seem quaint now.
Why was the productivity growth from the internet so low?
As Robert Solow said “you can see the computer age everywhere but in the productivity statistics.”
The productivity gains from the internet have been surprisingly small or negligible so far. There was an initial surge in productivity growth from 1996-2000, then productivity growth fell back to pre-internet levels.
IMO, as long as increasing headcount is a primary measure of managerial success and status, there won’t be meaningful productivity gains because managers spend profits on hiring people who are not necessary but make their fiefdoms look more important.
Managers hire managers so they don't have to work. Then those managers hire managers so they don't have to work. There is no end to how many managers you can have.
I have been curious about this for a while, particularly in relation to the increasing cost of training LLMs.
I was recently talking to a friend who works on the AI team at one of the large tech companies, and I asked him this directly. He said that each generation is ~10x the training cost for a ~1.5x improvement in performance (and the rate of improvement is tapering off). The current generation costs ~$1 billion, and the next generation will be about $10 billion.
The question is whether anyone can afford to spend $100 billion on the next generation after that. Maybe a couple of the tech giants can afford that, but you do rapidly get to unaffordability for anyone smaller than the government of a rich country.
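Taking those secondhand figures at face value, the extrapolation looks like this (a sketch of the reported scaling, not measured data):

    # ~10x training cost per generation for ~1.5x performance, starting from a ~$1B run.
    cost, performance = 1e9, 1.0
    for generation in range(4):
        print(f"gen {generation}: cost ~${cost / 1e9:.0f}B, relative performance ~{performance:.2f}x")
        cost *= 10
        performance *= 1.5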
It will likely be possible to continue optimizing models for a while after that, and there is always the possibility of new technology that creates a discontinuity. I think the big question is whether AI is "good enough" by the time we hit the asymptote, where good enough is somewhat defined by the use case, but roughly corresponds to whether AI can either replace humans or improve human efficiency by an order of magnitude.
A lot of people complaining about GenAI claim that results that are correct 90% of the time are useless.
That, to me, is not true. There are plenty of examples where a 10% incorrect answer is perfectly useful. For example in software with good test harnesses you can trust functions that pass tests whether they are written by staff, an intern or a bot.
But I do think that we have an AI bubble and need a brush fire to refocus on today's solvable problems. A 5-10% of industry effort going to moonshots is fine, 90% is crazy.
To generalise, the difference is in verification time / amortised time. If finding the real answer would take me an hour, but I can check a proposed answer in 5sec, I'm going to try at least 20 of the 90% answers before starting the work myself. On the other extreme, if verification would require finding the same original sources that contain the actual answer, then there's no point trying even the 99.9% correct answers.
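That tradeoff can be written as a small expected-time calculation. The 60-minute and 5-second figures below are the ones assumed above; the "give up after N tries and do it yourself" fallback is my own framing.

    # Expected time when checking up to max_tries proposals, each correct with
    # probability p_correct, then falling back to doing the work yourself.
    def expected_minutes(p_correct, verify_sec=5, diy_min=60, max_tries=20):
        expected = 0.0
        p_all_failed = 1.0          # probability every proposal so far has failed
        for _ in range(max_tries):
            expected += p_all_failed * (verify_sec / 60)  # you pay for this check...
            p_all_failed *= (1 - p_correct)               # ...and it may still fail
        expected += p_all_failed * diy_min                # fall back to DIY
        return expected

    for p in (0.9, 0.5, 0.1):
        print(f"p={p}: ~{expected_minutes(p):.1f} min vs 60 min doing it yourself")

With cheap verification, even mediocre hit rates beat doing the hour of work yourself; once verification costs as much as the work, the advantage disappears, which is exactly the point of the comment above.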
I find these polarizing opinions confusing. There are a very large number of applications of generative AI. People believe a lot of them will produce a lot of value. So yeah, a lot of grifters abound. That part is not dissimilar to the cryptocurrency wave. But then to make statements that all of the current AI wave is a grift seems like throwing the baby out with the bathwater.
I for one am paying for generative AI in multiple areas such as image generation and code autocompletion. It's anecdata but it doesn't seem like I'm the only one willing to pay for this. I believe these valuable use cases that I legitimately think are worth paying for will stick around, and I think there are plenty more to still be discovered in the product space.
Finally, generative AI does have costs that make it scale worse than the prices of the products using it in many cases (hence a lot of startups burning through their VC money too quickly on API costs), but there's a clear path to optimizing these costs by orders of magnitude, so again it doesn't seem like an argument to say it's all a scam.
And I say that as someone who is shipping features that use GenAI at work because it's the current hype in the eyes of directors and will get me promoted.
Turns out something 95% right is as good as something 0% right.
Cryptocurrencies were no solution, and AI will always be at best a heuristic, so no solution either. The hypesters don't (want to) understand the foundation of the problem they claim to solve.
It seems you're getting downvoted, but I think you're right: that 95% means we can't trust it. We don't know where the 5% is that went wrong, and we spend more time checking the output than just creating it ourselves.
"The company’s recent demo of its voice-powered features led to a 22% one-day jump in mobile subscriptions, according to analytics firm Appfigures."
That's an interesting figure, especially given that most of the exciting features in that demo (video analysis, the new voice input/output, improved image generation) aren't actually available to paying users yet.
I was one of the people who signed up for Plus that day. It took me a while to realize that the voice feature was not the same one being demoed. I think it was a bad move to roll it out like that.
This has been going on since the mid-'60s. Last time, the hype-cycle bust took the entirety of Lisp with it; which technology is gonna be killed this time around?
With the top scientists leaving OpenAI, it looks like we've reached the point where "AI" will be about seeing how much juice OpenAI can get out of Nvidia cards: I suspect it'll get more accurate without us seeing the huge leaps we've seen this past year or so.
It is relevant to point out that NYT is engaged in a legal battle with OpenAI.
This article refers to "AIs like ChatGPT," which clearly shows who it is targeted at. For people that don't read arXiv every day, the actual advancements are difficult to sift out of all the hype.
The dilution of the meaning of AI will contribute to the bubble's contraction when the general public realises that "AI" doesn't mean anything specific and definitely doesn't mean what they have in their imaginations.
Regardless of where we are today, it is undoubtedly worth the effort to continue the research. This hype cycle has kick started a lot of ideas that will benefit humanity. We still do have that pesky problem of corporations innovating while simultaneously ruining everything.
- LLM-type AI is becoming a commodity. Everybody's systems seem to be converging on a roughly equal level of performance. This includes the open source systems. So this isn't a high-margin business. They compare it to electric cars, where there are lots of manufacturers and prices are dropping. That's what the success of a technology looks like.
- The economics may not work for ad-supported LLMs as part of search engines. It may cost more to deliver an answer than can be collected in ad revenue. That's a problem for Google and Microsoft. (The cost problem can probably be mitigated by recognizing simple queries and answering them with cheap searches.)
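A rough sketch of that mitigation, with the routing heuristic and both handlers as placeholders (not anyone's actual production logic):

    # Route obviously simple queries to plain search; reserve the expensive LLM call.
    def cheap_search(query):
        return f"[search results for: {query}]"

    def expensive_llm_answer(query):
        return f"[LLM-generated answer for: {query}]"

    def looks_simple(query):
        """Crude heuristic: short navigational/factual queries skip the LLM."""
        words = query.split()
        return len(words) <= 3 or query.lower().startswith(("weather", "define", "stock price"))

    def answer(query):
        return cheap_search(query) if looks_simple(query) else expensive_llm_answer(query)

    print(answer("weather boston"))                                         # cheap path
    print(answer("compare the unit economics of ad-supported LLM search"))  # LLM path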
You ever read something that just leaves you confused as to how two perspectives can be so different? I mean, from down in the trenches, AI is doing great. In the last couple of years, context window sizes have increased to reasonable limits, ring attention has vastly improved needle-in-a-haystack performance, prices have come down, open source models are close to GPT-3.5 performance making local inference viable. And that's just LLMs; embedding models and vector databases are better and more available, generative voice actually sounds human, generative images and video are coming along. It's never been cheaper or easier to rent GPUs in the cloud. Two years ago, if data was in unstructured free text or in an image it would be a massive challenge to get any value out it and often not worth the effort; now, it's nearly trivial. How could anyone look at this and conclude, "AI is losing steam?"
I read it a couple of times and I think I figured it out. I'm looking at it as a technologist, while the WSJ is looking at it as an investment opportunity. When they say "losing steam" they mean "there's no alpha left." In other words, the smart money has already moved in, and the future value has already been priced in. This is a really telling quote:
> AI could become a commodity
That is a GOOD thing. It's called the ephemeralization of value[1]. Today, an HD color TV costs less than a black-and-white TV did in the 1950s. The WSJ would look at that and see a mere "commodity" because businesses are operating at lower margins and there's less money to be made, but billions of consumers benefit. Open source is the ultimate expression of ephemeralization, and it is inarguably a good thing when billions of people can benefit from state-of-the-art tech for free.
So yeah, maybe it's too late to get rich by investing early. Maybe a lot of the vaporware startups and thin wrappers around OpenAI's API will crash and burn soon. And yes, it's definitely been overhyped in certain regards and pushed into use cases it can't really support by promoters who neither know nor care about the real capabilities and limitations. All of that is normal adoption-curve stuff. The technology is definitely not losing steam; it's barely getting started. If you're a CS student, you 100% should be learning about vectorization, automatic differentiation, and differentiable programming, because the GPU isn't some sort of niche topic you can ignore. We've taken the von Neumann architecture as far as it can go, right up to the quantum limits, and now we have to go parallel. These techniques are pointing the way to a simple model of parallelization that can actually take advantage of modern hardware without breaking your brain or getting lost in a maze of locks and semaphores. The "transformer" isn't the end-all-be-all of this paradigm - it just happens to be one technique that worked well in practice on free-text data. There's so much more out there waiting to be discovered.
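For the curious, a two-line taste of the automatic differentiation mentioned above, using JAX (one of several libraries that provide it; the function here is arbitrary):

    import jax
    import jax.numpy as jnp

    f = lambda x: jnp.sin(x) * x**2   # any differentiable Python function
    df = jax.grad(f)                  # its derivative, computed automatically
    print(df(1.5))                    # ~ cos(1.5)*1.5**2 + 2*1.5*sin(1.5)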
I can see the use of image-gen integrated into Photoshop and Krita. Copilot and other code-specific LLMs have not been useful at all.
That said, I do find stuff like ChatGPT and Gemini useful enough to always have a client running that I can send prompts to. It works great as a guide for things you know nothing about, and it can point you to what resources to actually look for when you do care about having correct information.
I can see it being useful for things like self-study and writing. For writing, not in the same sense as Copilot; I think it will be equally useless for anyone doing anything serious, but it can help with exploration.
To me, we have already reached the peak of what this type of "AI" can do. Don't expect things to get much better. My mind is not going wild with the possibilities. Useful, yes; revolutionary, no.