
> which is not so much concerned that their software might be buggy as that it might be lame

This is not at _all_ my interpretation of Casey and JBlow's views. How did you arrive at this conclusion?

> They're more concerned about user experience and efficiency than they are about correctness.

They're definitely very concerned about efficiency, but user experience? Are you referring to DevX? They definitely don't prize any kind of UX above correctness.



From what I've seen, they are very much in a game developer mindset: you want to make a finished product for a specific use, you want that product to be very well received by your users, and you want it to run really fast on their hardware. When you're done with it, your next product will likely be 80% new code, so long-term maintainability is not a major concern.

And stability is important, but not critical - and the main way they want to achieve it is that errors should be very obvious, so that they can be caught easily in manual testing. So C++-style UB is not great, since you may not always catch it, but crashing on reading a null pointer is great, since you'll easily see it during testing. Also, performance concerns trump correctness - in this view, paying a performance cost for safety (e.g. enforcing array bounds on every access) is lazy design: why would you write out-of-bounds accesses in the first place?


One of the slides in Blow's talk about why he was starting work on Jai said, "If we spend a lot of time wading through high-friction environments, we had better be sure that this is a net win. Empirically, it looks to me like [that] does not usually pay off. These methods spend more time preventing potential bugs than it would have taken to fix the actual bugs that happen."

I think that's an overall good summary of the crowd's attitude. They think that mainstream programming environments err too far in the direction of keeping your software from being buggy, charging programmers a heavy cost for it. Undoubtedly for videogames they are correct.

Jai in particular does support array bounds checking, but you can turn it on or off as a compilation option: https://jai.community/t/metaprogramming-build-options/151


Undoubtedly? Modern video games are pretty good, and there are a lot of them. They're mostly written in mainstream programming environments. Without further explanation, I don't see how that take is undoubtedly correct in this context.


Fortunately, well-respected game programmers like Muratori and Blow, who have written games that are more highly regarded by critics than BAC Skywalk and who also have more experience than you do, have spent thousands of hours providing that explanation. If you aren't going to listen when they explain it, you aren't going to listen to me either.


Thanks for checking out my portfolio! I have to admit, it's a bit out of date, but I also don't think that my background in games matters all that much in any specifics. I can talk to principles.

To try and understand where you're coming from, I'll make a few notes and you can tell me where you agree or disagree.

I think we'll both agree that Rust is probably not suitable for most parts of game development. Rust, while enabling "bug-free" refactoring with its expressive type system, also imposes that type system when you don't want it. Oftentimes, when tweaking a game, you'd rather try things out in a state where things could break than be forced to finish the refactor every time, and Rust doesn't have particularly ergonomic escape hatches for that. If there are specific components with well-defined inputs and outputs for your use case (say, maybe, an input system, or a rollback netcode engine), then I think Rust could be a good choice, but you'd also be paying when interfacing with whatever other tools you're using.

I think we'll possibly disagree with the idea that games need to be fast. They need to be fast enough to work! A stable 60fps is pretty much table stakes nowadays too, even in genres that don't benefit hugely from such low latency, in terms of player perception. But "fast enough" is a different bar in different contexts. Mario is fun, and in 2025 I would not need to put much effort in at all to make Mario fast enough in ~any context someone will play it in. On the other hand, I'd probably have to put a lot of work into a photorealistic open world RPG, or an RTS with tens of thousands of units. Many games live in between those two, and oftentimes there's reason to optimise some parts and not others. If a game runs at 60fps on a 5-year-old Android phone, which can often be achieved with Unity (modulo garbage collection hitches), I'm not going to spend extra effort optimising further.

Where I probably disagree most is the claim that we currently err too far on the side of correctness. One thing you didn't note (and I'm not sure if Muratori or Blow speak to it) is the difficulty of finding bugs. Games are a massive ball of mutable state and operations on it, and games are usually massively wide artifacts, played on a huge variety of target devices, whose full state space is hard-to-impossible to cover. Bugs are often non-trivial to see when writing, see when testing, or notice when a reversion exposes them. If I were to guess, I've seen more time spent resolving bugs than writing features, both from me and from the devs I've worked with over my career.

I think in the iron-triangle-esque trade between "easy to write more game", "hard to write bugs", and "game runs fast", I personally bias towards the first two. Few games _need_ the latter, and fewer still need it everywhere in the game. Scripting languages and higher-level langs like C# are pretty ergonomic for games (and the latter specifically, outside the Unity context, is pretty good in terms of optimisation capabilities too).

I'm unsure what made you think that I'd be unlikely to want to listen or discuss things with you, so if you do have notes there I'd be happy to hear them too.


Interesting and thought-provoking.

I don't think Rust is mainstream enough to be what they were attacking, especially 6 years ago or whenever Blow gave that talk. Unity certainly is, and they seem to reserve special scorn for it, maybe because it's so popular.

I don't agree that it's easy to make Mario hit a stable 60fps in any popular gaming environment. In the browser, it's easy to hit 60fps but impossible to keep it stable. And, as you concede, it can be challenging with Unity (or Godot).

Latency is a separate issue from fps, even when the fps isn't janky. With your PC plugged into a Smart TV, you can hit a stable 60fps, but typically with two or even three frames of lag from the TV, which is a very noticeable downgrade from a 6502-based Nintendo connected to an RF modulator and a CRT TV from 01979. And often the OS adds more! Three 60fps frames of lag is 50ms. The one-way network latency from New York to London is 35ms. Most players won't be able to identify that there's a problem, but they will reliably perform more poorly and enjoy the game less.

I'm skeptical of the Muratori crowd's implicit assertion that this kind of responsivity is inherently something that requires the game developer to understand the whole technology stack from SPIR-V up. I think that's a design problem in current operating systems, where it definitely does exist. And, while I'm skeptical of their dismissal of the importance of bugs, I'm confident that they're representing their own needs as accurately as they can.

But probably it's better for you to engage with their explanation of their own ideas than with mine. I might be misunderstanding them, and I don't have their experience.


Jai has array bounds checking.


> This is not at _all_ my interpretation of Casey and JBlow's views.

IMHO this group's canonical lament was expressed by Mike Acton in his "Data-Oriented Design and C++" talk, where he asks: "...Then why does it take Word 2 seconds to start up?!"[0]. See also Muratori's bug reports which seem similar[1].

I think it is important to note, as the parent comment alludes, that these performance problems are real problems, but they are usually not correctness problems (for the counterpoint, see certain real time systems). To listen to Blow, who is actually developing a new programming language, it seems his issue with C++ is mostly about how it slows down his development speed, that is -- C++ compilers aren't fast enough, not the "correctness" of his software [2].

Blow has framed these same performance problems as problems of software "quality", but this term seems to share the same ambiguity as "correctness", and therefore looks to me like another equivocation.

Software quality, to me, is dependent on the domain. Blow et al. never discuss this fact. Their argument is more like: what if all programmers were like John Carmack and Michael Abrash? Instead of recognizing that software is an economic activity, and that certain marginal performance gains are often left on the table because most programmers can't be John Carmack and Michael Abrash all the time.

[0]: https://www.youtube.com/watch?v=rX0ItVEVjHc [1]: https://github.com/microsoft/terminal/issues/10362 [2]: https://www.youtube.com/watch?v=ZkdpLSXUXHY


> Their argument is more like -- what if all programmers were like John Carmack and Michael Abrash? Instead of recognizing software is an economic activity and certain marginal performance gains are often left on the table, because most programmers can't be John Carmack and Michael Abrash all the time.

At least for Casey, his case is less that everyone should be Carmack or Abrash and more that programmers, through poor design choices, often prematurely pessimise their code when they don't need to.


> At least for Casey, his case is less that everyone should be Carmack or Abrash and more that programmers, through poor design choices, often prematurely pessimise their code when they don't need to.

I think this is fair enough, since Casey, unlike Blow, does offer some practical advice.


This is a bit of a simplification of the ideas of Blow, Muratori, et al.; a much better source for them is "Preventing the Collapse of Civilization" [0].

The argument made there is that "software quality" in the Uncle Bob sense, or in your domain-dependent version, is not necessarily wrong but is at the very least subjective, and should not be used to guide software development.

Instead, we can state that the software we build today does the same job it did decades ago while requiring much vaster resources, which is objectively problematic. This is a factual statement about the current state of software engineering.

The theory that follows from this is that there is a decadence in how we approach software engineering - a laziness or carelessness. This is absolutely judgemental, but it's also clearly defended, based not on gut feel but on these observations about team sizes and hardware usage versus actual product features.

Their background in videogames makes them obvious advocates for the opposite, as the gaming industry has always taken performance very seriously - it is core to the user experience and marketability of games.

In short, it is not about "oh it takes 2 seconds to startup word ergo most programmers suck and should pray to stand in the shadow of john carmack", it is about a perceived explosion in complexity both in terms of number of developers & in terms of allocated hardware, without an accompanying explosion in actual end user software complexity.

The more I think about this, the more I have come to agree with this sentiment. Even though the bravado around the arguments can sometimes feel judgemental, at its core we all understand that nobody needs 600MB of npm packages to build a webapp.

[0]: https://www.youtube.com/watch?v=ZSRHeXYDLko


> it is about a perceived explosion in complexity both in terms of number of developers & in terms of allocated hardware, without an accompanying explosion in actual end user software complexity.

Do we want software to be more complex? Can you explain what you mean here? The explosion from my POV seems to be related to simply more software.

> at its core we all understand that nobody needs 600mb of npm packages to build a webapp.

Perhaps, but isn't this a different argument/different problem?

If the argument is that these software packages are bloat, which can be detrimental to performance (which BTW is a bank shot as you describe it here), we all understand we don't need npm at all to build a webapp. However, it might make it easier? Isn't easy really important in some domains?

Again -- software engineering is an economic activity. If Word startup speed was important then more engineering resources would be expended to solve that problem.

>> I think it is important to note, as the parent comment alludes, that these performance problems are real problems, but they are usually not correctness problems (for the counterpoint, see certain real time systems).

The thing is we agree that performance problems are real problems. The problem is imagining that they are the same problem for every programmer in every domain. A high speed trading firm or a game dev studio simply has different constraints than Microsoft re: Word or a web dev.

"Why does this software not behave like my (better) software?" is a good question. Unfortunately I think Blow et al. only give this question a shallow examination. Maybe one doesn't treat the engineering of a thermostat the same way one treats a creative enterprise like a game? Maybe the economic/intellectual/self rewards are not similar?


> Do we want software to be more complex?

No but the complexity of software should follow from the complexity of end user features. Essential vs accidental complexity. Some problems are complex, they require complex software. Some problems are simple, so the software should be simple. In an ideal world, at least.

> However, it might make it easier? Isn't easy really important in some domains?

Indeed, this is maybe a better wording of the problem: it is easier but not simpler. Easy is never important in a domain, except perhaps adversarial domains like marketing or sales. Easy is shortsighted. Easy is not economical, because the easy choice is rarely the right one.

> If Word startup speed was important then more engineering resources would be expended to solve that problem

No, this is a common misconception about economics: things do not at all behave rationally in supply/demand situations. People want the wrong things all the time, people act irrationally all the time, businesses don't know what value is all the time, and large problems are ignored all the time.

> Unfortunately I think Blow et al. only give this question a shallow examination

I am not sure about this, maybe so. Either way, so do you: it is not at all about performance, but performance is the canary in the coalmine: it is a direct translation of the essential vs accidental complexity problem.

If I can serve you a webpage in 10ms, and you serve me that same webpage in 3000ms (excluding network latency), you are obviously solving that problem in a way that is an order of magnitude more complex than what I have proven is necessary to solve the problem. Either by involving more software, more hardware instructions, more infrastructure network hops, etc. In other words: performance is an easy objective metric for the complexity that lies behind (an otherwise opaque) piece of software.


> .. [I]t is not at all about performance, but performance is the canary in the coalmine: it is a direct translation of the essential vs accidental complexity problem.

This is all nice color on my commentary, but it fails to address the point of my two parent comments: programming is an economic activity. Sometimes a putatively more complex solution is the "right" solution for someone else, because it is easier to understand and implement, or fits within an existing workflow (it is more coherent and consistent).

Yes, if the performance delta is an order of magnitude, then perhaps that is a problem for such software - but then again, maybe it isn't, because economics matter. Lots of people use 10x-slower languages for loads of technical reasons, but also economic ones.

> In other words: performance is an easy objective metric for the complexity that lies behind (an otherwise opaque) piece of software.

Then presumably so is performance per dollar? Your argument can make sense where the cost of a redesign is low (in programmer education, experience, and ultimately work) and the performance benefits are high (10ms faster nets us 10x more dollars). That is: Blow et al. (and you) need to show us where these "easy", if you will, 10x gains are.

Again -- I agree performance problems are real problems, and data-oriented design is one way to reason about them, but Blow's marketing exercise/catastrophizing (see "Preventing the Collapse of Civilization") hasn't solved any problems, and is barely an argument without an analysis of what such incremental improvements cost.


> This is all nice color on my commentary, but it fails to address the point of my two parent comments: programming is an economic activity

I've mentioned the economics multiple times now, while you're still hung up on performance - I'm not sure why. Again, performance is an indicator of a perceived deeper underlying problem. The underlying problem is not performance, though that's the surface-level gripe that gets mentioned. There is no part of the argument that advocates you should redesign a specific piece of software to be faster. Rather, the argument is that our collective ability to make good software is deteriorating.

The underlying problem is nebulous and hard to pin down and prove, because it is hard to reason objectively about a real program in relation to hypothetical other programs that could compete with it. This makes the Muratori/Blow argument similarly nebulous, and their (intentional or not) judgmental attitude does not help the communication. I am aware that this argument is not ironclad or even clear, and that the judgmental attitude is not necessarily warranted.

So, why does it even make sense to talk about this then? Because if there is an alternate universe where we can actually solve the same problems with vastly simpler logical structures, we should strive to make that reality precisely because of the economics, because simpler logical structures beat the pants off complexity in terms of predictability, investment, ROI, etc.

So to summarize, this is the argument (as I perceive it):

1. Lots of software is slowing down over time, i.e. the same problem is solved with more resources.

2. More resources means not just waiting longer for stuff to be done, but likely also more complexity (resources are spent doing something, hence there is more being done, hence more complexity).

3. If the same problems are solved by increasingly complex software over time, there is a likelihood that we are writing software (even new software) in a more complex way than necessary, and that it's getting worse over time.

4. We should figure out whether that observation is true, and what we can do about it, before the cost of building software (the economics) becomes prohibitive (dramatized as the collapse of civilization).

A lot of assumptions are made in 1 and 2.


By reading their blog posts and watching their videos.



