Hacker News | jchonphoenix's comments

You miss the major factor in your compensation: pricing pressure due to supply/demand.

By removing all the junior engineers, you've fundamentally changed the market forces over the longer term, and most people expect that to hurt you on the supply-demand curve, regardless of whether the statements you've made above are true (which they most likely are, for senior engineers).


In removing junior developers and leaving only senior developers, wouldn't that reduce supply, making the price go up, not down? It's been a while since Econ 101 for me, though.

This is CMU, so they would be at the bleeding edge, just like MIT/Stanford. But I think all the schools are behind today.


Zico is Chair of the ML Department at CMU, which has arguably made the largest contribution to foundational ML academia.

OpenAI board member is his least interesting credential.


That would sell the course for me.


The author misses the forest for the trees. He's accurately articulating the current state of the tools he's using, but isn't acknowledging or extrapolating the next derivative, i.e., the rate of improvement of these tools.

That being said, everything is overvalued and a lot of this is ridiculous.


> He's accurately articulating the current state of the tools he's using, but isn't acknowledging or extrapolating the next derivative

Extrapolation would reasonably show that they're reaching an asymptote: graph cost vs. improvement on a chart and you'll see that the two are not proportional.


I think you are the one missing the forest for the trees.

- The energy efficiency and cost improvements of LLMs have plateaued as of late. https://arxiv.org/html/2507.11417v1

- The improvements from each subsequent model have also plateaued, with noticeable regressions in some cases.

- The biggest players are so wildly unprofitable that they are already trying to change their plans, squeeze their current fan base, and raise their rates:

https://news.ycombinator.com/item?id=44598254

https://www.wheresyoured.at/anthropic-is-bleeding-out/

- And, as it turns out, experienced developers are 19% less productive when using LLMs: https://www.theregister.com/2025/07/11/ai_code_tools_slow_do...

> i.e., the rate of improvement of these tools.

They have stopped improving at a rate that matches the growth of their costs. It's simple mathematics: improvements in efficiency don't keep pace with the increases in costs, the companies are extremely unprofitable, and all of that data points to a bubble.

It's one of the most obvious bubbles I have ever seen, propped up only by vibes, X posts, and Sama's promise that AGI is just around the corner; just inject a couple trillion more, trust me bro. All that for a fancy autocomplete.


This entire post smells like someone who's salty and trying not to face reality. I might not even disagree entirely with what's being stated here, but the framing is just clearly wrong.


Anysource.dev is the answer.


Who runs anysource.dev?


Data engineering completely automated by a startup's technology. This dataset sells for hundreds of thousands of dollars a year and can be fully cleansed and prepared from raw data purely with AI.


My guess is the telemetry data.

OAI spends gobs of money on Mercor, and Windsurf telemetry gets them similar data. My guess is they saw their Mercor spend hitting close to $1B a year within the next 5 years if they did nothing to curb it.


Meta's tools are best in class when the requirement is scale, or when the external tools haven't matured yet.


OpenAI initially raised $50M in their institutional round.

The $1B was a nonprofit donation, so there wasn't an expectation of returns on that one.

