Netflix shows are forced to become soap operas: although they might have budgets approaching prestige TV, Netflix might force lengths that push the story into the soap format.
There must be more to that story. Netflix, HBO, BBC and a bunch of others keep pumping out these kinds of crime dramas exactly because they are quite conservative bets. They are extremely cheap to produce: a handful of mid-range actors in very mundane locations. They can stay a miniseries or expand later if wanted. And if the writing is good, some of them become incredibly popular and profitable.
I mean, the last Stranger Things series, of all things, is the single most expensive production in history. More expensive than Marvel films, than both Avatars, than all of Game of Thrones, than Rings of Power per episode… It’s mad, for a quaint small-scale mild-horror story set in the 80s.
There is no way Mindhunter was simply too expensive.
Fincher shoots for a very long time, does a lot of takes, and lights everything like a film. He likely spends a lot of time in the editing room, either directly himself or working with the directors who direct the other episodes.
Notice how Mindhunter didn’t “look” like other Netflix shows. The reason for that is they lit it like a movie. And that takes time and money.
I work in the industry. The reason Netflix shows look a certain way is that they are not given the time to do it differently and are shooting almost documentary style, or at least much, much faster than a regular “prestige” show. Now, a good director/DP duo can still make this look good, even though it’s hard to do 20 setups a day (low-budget speed) instead of 5-10 (high-budget speed). But that velocity means you shoot at twice the speed or more, which is huge considering film costs are people costs, and production is often the most expensive part of a show like Mindhunter.
Fincher likely wouldn’t have agreed to drop episode count or shoot them faster, so they didn’t continue.
Well then the limits in free mode are already generous enough that I didn’t even notice them. “Thinking” has a meaningful cap, but I have not felt the need to pay for more. I pay for Claude.
That must've changed very recently then, even just a month ago I'd have Gemini (2.5 Pro) run into a daily limit after just 3-4 messages as a free user.
The learning curve is actually huge. If you just vibe code with AI, the results are going to suck. You basically have to reify all of your software engineering artifacts and get AI to iterate on them and your code as if it were an actual software engineer (one who forgets everything whenever you reboot it, which is why you have to make sure it can re-read the artifacts to get its context back up to speed again). So a lot more planning, design, and test documentation than you would produce in a normal project. The nice thing is that AI will maintain all of it as long as you set up the right structure.
We are also in the early days still, I guess everyone has their own way of doing this ATM.
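To make that concrete, here is a rough sketch of the kind of artifact layout I mean (purely illustrative; the file names and structure are just one way to do it):

```
project/
  AGENTS.md          # entry point: tells the agent to re-read docs/ before doing any work
  docs/
    PLAN.md          # current goals, open tasks, decisions made so far
    DESIGN.md        # architecture, module boundaries, data flow
    CONVENTIONS.md   # coding style, naming, and testing rules to follow
    TEST_PLAN.md     # what has to pass before a change counts as done
  src/
  tests/
```

The exact files matter less than the habit: the AI updates these documents as it works and re-reads them at the start of every session, so the context survives the “reboot”.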
By this point you've burnt up any potential efficiency gains. So you spent a lot of hours learning a new tool, which you then have to spend a lot of additional hours babysitting and correcting, so much that you'll be very far from those claimed productivity gains. Plus the skills you need to verify and fix its output will atrophy. So that learning curve earns you nothing except the ability to put "AI" somewhere on your CV, which I expect will lose a lot of its lustre in 1-2 years' time, when everybody has had enough experience with vibe coders who don't, or no longer can, ensure the quality of their super-efficient output.
Speaking as someone with a ton of experience here.
None of the things they do can go without immense efforts in validation and verification by a human who knows what they're doing.
All of that extra engineering effort could instead have been spent making your own infrastructure and procedures far more resilient and valuable, both to far more people on your team and to yourself, going forward.
You will burn more and more hours over time by relying on LLMs for ANYTHING non-trivial. It becomes a technical debt factory.
That's the reality.
Please stop listening to these grifters. Listen to someone who actually knows what they're talking about, like Carl Brown.
That’s interesting, but how much of this, if written down, documented, and made into video tutorials, could be learnt by just about any good engineer in 1-2 weeks?
I don’t see much yet; maybe everyone is just winging it until someone influential gives it a name. The vibe coding crowd have set us back a lot, and really so did the whole leetcode interview fad that we are only now throwing off. It’s kind of obvious though: just tell the AI to do what a normal junior SWE does (like write tests), but write a lot more documentation, because it forgets things all the time (it’s like a junior engineer who makes more mistakes, so it needs to test more, and remembers nothing).
The concepts in the LLM’s latent space are close to each other, and you find them by asking in the right way, so if you ask like an expert you find better stuff.
For it to work best you should be an expert in the subject matter, or something equivalent.
You need to know enough about what you’re making not just to specify it, but to see where the LLM is deviating (perhaps because you needed to ask more specifically).
There is, effectively, a "learning curve" required to make them useful right now, and a lot of churn on technique, because the tools remain profoundly immature and their results are delicate and inconsistent. To get anything out of them and trust what you get, you need to figure out how to hold them right for your task.
But presuming that there's something real here, and there does seem to be something, eventually all that will smooth out and late adopters who decide they want to use the tools will be able to onboard themselves plenty fast. The whole vision of these tools is to make the work easier, more accessible, and more productive, after all. Having a big learning curve doesn't align with that vision.
Unless they happen to make you significantly more productive today on the tasks you want to pursue, which only seems to be true for select people, there's no particular reason to be an early adopter.
- we are far removed from “early adopter” stages at this point
- “eventually all that will smooth out…” assumes that this is eventually going to be some magic that just works; if that actually happens, both early and late adopters will be unemployed.
it is not magic, and it is unlikely to ever be magic. but from my personal perspective, and that of many others I read: if you spend the time (I am now just over 1,200 hours in; I bill it, so I track it :) ) it will pay dividends (and will also occasionally feel like magic)
been hacking for 3 decades, so well north of 1,200 hours ... in my career the one trait that always seems to differentiate great SWEs from decent/mediocre/awful ones is laziness.
the best SWEs will automate anything they have to do manually more than once. I have seen this over and over and over again. LLMs have taken automation to another level, and learning everything they can help with, so I can automate as much of my work as possible, will be worth 12,000+ hours in the long run.
What is this fantasy about people being unemployed? The layoffs we’ve seen don’t seem to be discriminating against or in favor of AI use - they appear to be moves to shift capital from human workers to capex for new datacenters.
It doesn’t appear that anything of this sort is happening, and the idea that a good employer with a solid technical team would start firing people for not “knowing AI”, instead of giving them a 2-week intro course, seems unrealistic to me.
The real nuts and bolts are still software engineering. Or is that going to change too?
I don't think there will be massive unemployment based on actual "AI has removed the need for SWEs of this level..." kind of talk, but I was specifically commenting on "eventually all that will smooth out and late adopters who decide they want to use the tools will be able to onboard themselves plenty fast." ... If this actually did happen (it won't), then we'd all have to worry about being unemployed.
It’s the opposite. The more you know how to do without them, the more employable you are. AI has no learning curve, not at the current level of complexity anyway. So anyone can pick it up in 5 years, and if you’ve used it less, your brain is better.
With all due respect, claiming “AI has no learning curve” can be an effective litmus test for who has actually dug into agentic AI enough to give it a real evaluation. Once you start to peel back the layers of how to get good output, you understand just how much skill is involved. It’s very similar to being a “good googler”. Yeah, on its face it seems like it shouldn’t be a thing, but there absolutely are levels to it, and it’s a skill that must be learned.
"That is pretty much exactly the signature of a failing solid state relay or contactor on the shared avionics power bus (upstream of both FDR and fly by wire)."
It's a bit of a weird edge-case, but the very popular Battlefield 6 is partially a Godot game. It's an odd hybrid of a proprietary in-house engine with Godot grafted onto it, which serves as a public-facing SDK for players to build their own content. I know that's not exactly what you meant but it is an interesting application in a major AAA title.
Battlefield 6, of all things, includes Godot as the core of its Portal map-building. Cassette Beasts is what Pokemon wishes it was. The upcoming Planetenverteidigungskanonenkommandant looks gorgeous in the previews.
I don’t know if I could list something that matches, say, Cuphead or Silksong, but I do think that Godot is on a Clayton Christensen-style worse-is-better ascent right now.
Maybe a bit of an exaggeration, but I think at least 30%. Unreal is popular too. Unity seems to be more popular for indie/co-op/single-player games and certain art styles. There seem to be many more Unity games overall, but a lot of them are very small.