It omits that the prisoners are deceased and uses the term "vanish", which implies it's not known where the organs have gone.
"Alabama prisoners' organs vanish, and there's a whole lot of passing the buck" could easily be "Deceased prisoners' organs taken by University of Alabama against families' wishes", but that's not going to get the clicks.
The overhead of cgo seems like it would not be an issue if you could pass in an array of vectors and move the outer loop into the procedure, instead of calling the procedure for each vector. This may only be viable if there is a low-overhead way to pass arrays back and forth between Go and C. I'm no expert, but a quick Google search makes me think this is possible.
I get that hand-rolling assembly is fun, but as a theoretical future maintainer I would much prefer C to ASM.
What's more, I have no complaints about cgo. It hasn't been a performance problem at all (my game runs smoothly at 240 Hz). And the interface for binding to C is quite simple and nice to work with (I made my own bindings to SDL2, OpenGL, and Steamworks).
Cgo did make cross-compiling trickier to set up, but I managed to do it with a fair amount of elbow grease (Windows and macOS builds on my Linux desktop).
https://github.com/ir33k/gmi100/blob/master/gmi100.c#L27 definitely threw me for a loop until I realized it was a line saving trick. It would be more readable to save lines elsewhere by exploiting the comma operator instead of essentially cramming irrelevant statements into a conditional.
I am not a lawyer, and please correct me if I'm wrong, but as I understand it, rules and mechanics are not copyrightable. What is copyrightable is the flavor text and the specific compilation of the rule set into the "Rule Book". The rule book becomes a copyrightable work. This means that you are within your legal rights to make a game called "Dragons & Daggers" where all the combat mechanics, rules, etc. are identical to Dungeons & Dragons, as long as the "glue" text and flavor text is original content. It seems to me like a community fork licensed under CC that plays exactly the same is not impossible.
I think you need to be a bit more humble on a topic that's clearly going over your head.
Imagine a scenario where you are testing out a brand new RISC-V development board. The vendor has only provided a C compiler for this board, as is often the case. You want to use the Zig language to write programs for your new development board, but no Zig compiler exists for this board yet. That means you need to compile the Zig compiler from source. The latest version of the Zig compiler is written in Zig. Again, you don't have a Zig compiler, so how will you compile the Zig compiler from source? You need a way to go from a C compiler to a Zig compiler. That's what this is describing.

It does not make sense to maintain two completely separate versions of the compiler: the "real" one written in Zig and a "bootstrap" one for compiling from C. So the Zig source is compiled into WASM and stored in the repo. On a system with only a C compiler, this WASM can be run instead of a native Zig binary. The WASM version can then be used to compile an actual native Zig binary.
I wonder if it's possible this was an optical illusion. Seeing optical illusions where oil tankers are hovering in mid-air has changed my perception of what people could see and how it could be misinterpreted. [1] I'm not a physicist, but it almost looks like the photo could be a mountain and its reflection somehow projected up into the sky, similar to how the oil tanker is projected in the sky.
I've seen a few large Rails codebases that included a state machine library like the ones mentioned in the article. Every single one was worse because of it. It pushes the code down a path where hidden hooks run when magic methods are called. At this point in my career I'm just done with that type of "clever" code.
The article starts with examples that just define the state machine by hand. This is a much better approach and scales far better to larger codebases. You can grep for "def start!" and just read what the method does. A state machine DSL is really not providing much value and eventually just gets in the way.
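A hand-rolled version of this idea, sketched in Go rather than Ruby (the `Order` model and its states are made up): each transition is an ordinary, greppable method with its guard written out in plain code, no DSL and no hidden hooks.

```go
package main

import (
	"errors"
	"fmt"
)

// Order is a hypothetical model; states are plain strings and each
// transition is an ordinary method you can grep for and read top to bottom.
type Order struct{ State string }

var ErrBadTransition = errors.New("invalid state transition")

// Start moves pending -> started. The guard is explicit, not hidden in a DSL.
func (o *Order) Start() error {
	if o.State != "pending" {
		return ErrBadTransition
	}
	o.State = "started"
	return nil
}

// Complete moves started -> completed.
func (o *Order) Complete() error {
	if o.State != "started" {
		return ErrBadTransition
	}
	o.State = "completed"
	return nil
}

func main() {
	o := &Order{State: "pending"}
	fmt.Println(o.Start(), o.State)    // <nil> started
	fmt.Println(o.Start(), o.State)    // invalid state transition started
	fmt.Println(o.Complete(), o.State) // <nil> completed
}
```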
Funny, I have spent a ton of time around Rails code where I dearly wished we had a state machine, because the alternative was an unorganized cluster of home-grown "state transition" glue without any consistent way of handling it, with all the weird-ass edge cases and split-brain BS that come with it.
The quip I like best for this is: “Any sufficiently complicated model class contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of a state machine.”
Lots of models I've encountered eschew being organized as state machines in favour of having "if salad" strewn throughout their code. That being said, refactoring to a simple state machine and trying to maintain it as such in perpetuity isn't always the correct solution.
Sometimes, a hierarchical state machine is needed, and if it is expressed as a simple state machine, it's just as messy.
Sometimes, a portion of it needs to be a state machine, and the right thing to do is to delegate some of the methods to a strategy that reflects the current state, but not all of them.
Sometimes, the whole thing is just too fat, and a state machine won't save it, the right thing to do is to get very aggressive about refactoring to a composite object.
Any time you have a big, messy model, it's very easy to write a blog post espousing a single solution, like this one:
But the reality is that a big, messy model is always going to be some kind of problem, and unless you can break it down into parts, you're going to have a problem. A state machine is conceptually a way to break a big thing into parts based on its "state," but that's just one approach to factoring.
p.s. Another problem is that even if a simple state machine is the right answer, "rolling your own" usually isn't. Grab a well-tested and documented library already. This isn't your passion project, this is industrial programming. Rolling your own is one of the best ways to learn how state machines work. Once you've learned how, reach for a professional tool.
State machines aren’t exactly rocket science though, I don’t necessarily think writing one for your specific preferences/use cases is as bad of an idea as, say, writing your own ORM. It’s a pretty well understood concept.
They aren’t exactly rocket science, agreed, but there’s an interesting trap here:
Implementing classes on top of pre-ES2015 JS wasn’t exactly rocket science either, so there was a “Cambrian Explosion” of home-grown implementations and OSS libraries all over the place. Frameworks like Ember.js rolled their own too, and everything was incompatible with everything else, plus while the basic principles were the same, the details differed from implementation to implementation.
And when organizations are rolling their own, they tend to do just enough to serve their immediate needs. As their needs grow, different internal contributors add patches and bodges to the implementation, using different approaches.
Over the long haul, even though the concept is simple enough to roll your own state machine, it’s often a win to build on top of something which is well-supported and can grow with your organization.
I've been on the receiving and perpetuating end of state machines and it's painful to have to deal with one that started years before you got involved and easy to think that you can do better. I've got a modest one in a side project and due to it being a side project I touch it once every couple of weeks and already it is making me uncomfortable. And it's dirt simple and I wrote it!
There's a take on state machines in a RailsConf talk further down in the thread, and it seems like an awesome way to reap a lot of the benefits while keeping a grip on the complexity. Sometimes complexity can't be buried safely and you kind of need to frame it instead. I'd argue that state machines bury that complexity, whereas breaking those states into Ruby objects with their own validation frames and delineates it instead.
More generally, I would recommend avoiding any DSL. All DSLs are a downgrade from the Ruby language, and there is a really narrow list of situations where they are an upgrade.
Everyone knows Ruby when working on a Rails application. Everyone knows what a class and an instance are, and what methods are.
Ruby is extendable and incredibly flexible, and it doesn't need "special options" to be extended.
DSLs invariably attempt to recreate the flexibility of Ruby and require studying a new mini-language.
When I'm inside a DSL, I don't know if I can use an instance variable; if I need a method, I don't know if I can have one.
More often than not, this question is not answered by the documentation, and digging into the source code is needed.
Most libraries with a DSL violate the first rule of a DSL: a DSL method should delegate to a normal object performing the operation, so that the DSL API also has a corresponding normal Ruby API. If you do this, your DSL is now extendable and replaceable, and it can also be integrated into other libraries.
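To illustrate the delegation rule outside Ruby, here's a hypothetical Go sketch (all names invented): the fluent "DSL" layer only forwards to a plain struct, so everything the fluent API can express is also available as ordinary code, and the fluent layer stays replaceable.

```go
package main

import "fmt"

// Plain API: an ordinary object that does the actual work.
type Notifier struct {
	Channel string
	Retries int
}

func (n Notifier) Describe() string {
	return fmt.Sprintf("notify via %s, %d retries", n.Channel, n.Retries)
}

// "DSL" layer: a thin builder whose methods only delegate to the plain
// struct above. No behavior lives here, only forwarding.
type NotifierBuilder struct{ n Notifier }

func Notify() *NotifierBuilder                           { return &NotifierBuilder{} }
func (b *NotifierBuilder) Via(c string) *NotifierBuilder { b.n.Channel = c; return b }
func (b *NotifierBuilder) Retry(r int) *NotifierBuilder  { b.n.Retries = r; return b }
func (b *NotifierBuilder) Build() Notifier               { return b.n }

func main() {
	fromDSL := Notify().Via("email").Retry(3).Build()
	plain := Notifier{Channel: "email", Retries: 3} // same thing, no DSL
	fmt.Println(fromDSL.Describe() == plain.Describe())
}
```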
As for state machines, the recommended libraries push toward objects that violate the single-responsibility principle. A state machine already has a super important responsibility: state transitions. That's already a lot of work; if it also has to trigger side effects, it does way too much.
Some alternatives: pub/sub! Let the state machine be observable; whatever needs to do work when a certain state is reached can do it when the event is published.
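A minimal sketch of that shape in Go (all names are invented): the machine only validates and updates state, and side effects live in subscribers that are notified after each transition.

```go
package main

import "fmt"

// Machine only knows about states and allowed transitions; side effects
// are delegated to subscribers.
type Machine struct {
	state       string
	allowed     map[string][]string
	subscribers []func(from, to string)
}

func NewMachine(initial string, allowed map[string][]string) *Machine {
	return &Machine{state: initial, allowed: allowed}
}

func (m *Machine) Subscribe(fn func(from, to string)) {
	m.subscribers = append(m.subscribers, fn)
}

// Transition validates, updates state, then publishes the event.
func (m *Machine) Transition(to string) bool {
	for _, next := range m.allowed[m.state] {
		if next == to {
			from := m.state
			m.state = to
			for _, fn := range m.subscribers {
				fn(from, to) // side effects happen here, not inside the machine
			}
			return true
		}
	}
	return false
}

func main() {
	m := NewMachine("draft", map[string][]string{
		"draft":     {"published"},
		"published": {"archived"},
	})
	m.Subscribe(func(from, to string) { fmt.Printf("notify: %s -> %s\n", from, to) })
	fmt.Println(m.Transition("archived"))  // false: not allowed from draft
	fmt.Println(m.Transition("published")) // true; subscriber fires
}
```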
Alternatively, as someone recommended below, service objects take care of: the transition, persistence of the new state, and side effects. If a new action is needed, a new service object can be written. The state machine can be used to validate the transition while the service object takes care of the rest.
Using explicit state action services, which are well tested and intentionally invoked are the ways to go. They encapsulate the code, and don't create these callback hells when things inevitably get complex.
The problem with these types of gems is that they're good for refactoring functional-style code that takes an input, produces a result, and has "few side effects" as they mention, but this is generally not the type of code you find in an old legacy Rails application that you want to refactor. The whole thing that makes these refactors tricky is that they can just write to the DB anywhere in a non-obvious way.
If you naively go about refactoring something like:
POST /users
Thinking all you need to do is match the output of the current request and match the record it makes in the users table, you're potentially missing a ton of side effects you didn't even know about. The problem has always been the side effects, and until one of these gems can track and compare side effects in an intelligent way, I don't see how they are that useful.
Altering expected framework functionality. E.g. a custom getter method for an ActiveRecord attribute.
Technically this isn't a Rails-only thing; but since Rails relies so much on convention, there are many opportunities to break it (and possibly bring suffering to other devs).
Besides the other examples already given, monkey patching is another offender, as are proxy tricks with methods like send or respond_to (technically not Rails-specific but I associate this style with Rails).
I mean, this can be any highly expressive language. Common Lisp macros can get crazy, Haskell has sugar on sugar, Perl all looks like code golf to me, etc...
What if I consider those to be the same thing? Programming is an alchemical process through which we can contribute to humanity beyond the time-limit imposed by our physical bodies: https://en.wikipedia.org/wiki/Great_Work_(Hermeticism) :)
"The magnum opus is pre-eminently the creation of man by himself, that is, the full and complete conquest which he can make of his faculties and his future; it is pre-eminently the perfect emancipation of his will."
You are right, but there are many, many methods in any codebase that really are, for all intents and purposes, read-only, i.e., get a bunch of data from data sources, stitch it together, and return a JSON blob.
These tools would make me feel a lot more confident refactoring code like this (actually, right now I use Postman automated API response recordings with a manually extracted dataset of representative query parameters from production logs to validate our changes) - so this indeed looks like it might make my life easier :)
Postman supports dynamic variables for tests [1]. Of course nothing requires you to use postman, you could just as easily script this with just ruby and curb/httparty.
The reason for this is that a lot of api endpoints support a large number of parameters, and we want to be sure that we test a decent sample of real world usage instead of only a few examples that we can think of to implement in controller/request tests.
We can take a month’s worth of request logs from production, deduplicate them, run the requests against prod and record responses, then run same requests against stage and verify responses match even after you rewrite half the backing queries. This helps a lot with peace of mind when we deploy :)
For another, more advanced approach to this idea, see traffic mirroring/shadowing [2].
Sometimes I find it a little depressing that (at least through my layman's eyes) we are nearing a technological plateau, and that more research into physics is unlikely to get us to a world described in traditional science fiction, with FTL drives and large metal spaceships that can take you from planet to planet.
Then something like this reminds me that if we as a species were able to unlock the secrets of bio-chemistry (not sure if that's the right term) it would be a game changer unlike any seen so far. And the fact that there is a huge corpus of evidence out there in the world called "life" proving some of the possibilities already gives me hope that while we may never have FTL, the future could still be pretty wild.
Max Planck was famously discouraged from studying physics by one of his professors because "in this field, almost everything is already discovered, and all that remains is to fill a few holes." [1]
Having studied physics myself, my opinion is that we may very well be at a similar point right now. The big advancements of the last century in physics (quantum theory, relativity, chaos theory, etc.) brought us an era of swift and sweeping technological progress, and now the easy fruit seems to have been plucked. But there are still plenty of known unknowns, dark matter and dark energy being perhaps the most prominent ones. Who knows what unknown unknowns are hiding behind those known unknowns?
> Having studied physics myself, my opinion is that we may very well be at a similar point right now.
We're not anywhere near a technological plateau; we've just dropped the ball on funding. Until the fall of the Soviet Union, the US invested a lot of money in foundational research, often without even caring whether it would prove useful or possible, and with big enough money behind it that people could plan careers.
These days, researchers have to waste half their working time chasing the few grants that are still available, and can forget about a stable career, job security, or enough work-life balance to start a family.
It's really too sad... I (PhD in CompSci) could be helping with research on something groundbreaking for humanity instead of "maximizing shareholder profits". But academia basically sucks in its current state, and in my country there is essentially zero capacity to do real research.
I do want us to pour money into foundational research, but from an outsider's perspective, it does seem like a lot of it requires increasingly large capital costs, with things like the LHC, and feels all so theoretical.
I think it's worth every penny, but at first glance it feels incredibly abstract and disconnected from practical application, as well as expensive. (Though, to be honest, I just looked up the LHC cost, and $9Bn USD doesn't feel expensive. I was expecting it to be in the hundreds of billions.)
Lord Kelvin famously said there were just two "clouds" left in physics: two mysteries remaining to explain. Those two mysteries led to relativity and quantum mechanics.
There's also this famous quote that is frequently mis-attributed to Kelvin: "There is nothing new to be discovered in physics now. All that remains is more and more precise measurement".[0] (I'm not sure who actually said it.)
Time. We still don't think about time properly - there are likely some huge technological gains if we can unlock time in relation to physics (not in terms of sci fi time traveling).
This isn't entirely accurate. We will not be able to visit the vast majority of the universe even given infinite time if we are not able to travel faster than light. Like 94% of the universe is unreachable without FTL, even without time constraints.
I've never actually looked up what the "quadrants" are in Star Trek. Apparently just our one galaxy divided into four. 6% of the universe is indeed an unfathomably large amount of space.
It depends on what you mean by "understanding". We can explain using a specific set of rules how something works. It doesn't mean that those rules are the best way to explain it or that they are even correct.
For example, we could explain that electricity works because of how electrons move, which would be correct from our point of view, but if we find out that we were living in a simulation, then the explanation would be that this is how "electricity" was coded to behave.
Also, usually in physics a formula is thought to be correct until some new laws/rules are found, then the formula is updated by adding some extra terms and then again thought to be correct.
To summarize: we know how to smash two particles together, but not much about what they are made of. Replace particles with stones and bones. 10000 years of science progress and we are still smashing things. With the occasional lab accident like discovering that mold kills bacteria.
One big difference—the discoveries waiting to be made in the early 20th century all concerned regular matter. As a result, once they were made, they enabled huge technological advancements.
> we are nearing a technological plateau and that more research into physics is unlikely to get us to a world described in traditional science fiction with FTL drives and large metal spaceships that can take you from planet to planet.
It's strange to me how nobody questions why beavers only make dams a certain size, and birds only make nests and don't go beyond that in complexity.
So many people seem blind to the idea that humans might be near their intellectual limit as a species, and assume we will just keep progressing technologically. For all we know, it's possible we hit a brick wall in terms of progress. The average human struggles with calculus; what if there were a species that could do advanced math as easily as we do 2 + 2?
Seems the limit for human advancement is tied to rate of learning, lifespan, and general cognitive ability. If you want more advanced tech, you need to focus on those problems.
I've often wondered about this. My suspicion is that there is a limit to the complexity of mental models that humans can fluently manipulate and I think we're starting to bump into it in some cases.
I think we will eventually need a paradigm shift from science being built around human-grokable models (e=mc^2) to externally manipulable models (i.e., large-scale machine-derived models that we can't actually grok but can use for analysis and engineering). I think we're already starting to see this: there are already mathematical proofs so large and complex (in the GB range) that they had to be found by automation, and only other automation can verify them.
We have tons of numerical simulations in engineering. Light modulation alone, just 3-5 lenses with different qualities can occupy a modern processor for a few hours.
> ...strange to me how nobody questions why beavers only make dams a certain size, birds only make nests and don't go beyond that in complexity.
Isn't most of this kind of just a matter of fitness, same as why birds become flightless on islands where there are no predators which demand flight to escape from? Basically, building anything more than a minimally-viable nest or a dam requires using energy that could be invested elsewhere to greater evolutionary advantage.
Humans have gone beyond because for as long as we can remember, we've always had vast, vast surpluses of energy, initially through the cooking of meat and agriculture, then via animal labour, and then finally via fossil fuels.
I love the analogy, but I think it's flawed: the limits are practical, and excess just adds risk.
On the flip side, nothing seems more exemplified by humanity than a zeal for doing a thing as big and grandiose as possible: for curiosity, for business, for art, or just for sheer vanity.
I don't think we've seen how far those will take us yet, even w/o improvements to the bottlenecks you suggest. I do agree that those "meta" fields matter and will make a huge difference.
> strange to me how nobody questions why beavers only make dams a certain size, birds only make nests and don't go beyond that in complexity.
Probably because beavers and birds have made dams and nests the same way for the past 100 years, whereas humans in the same time have developed a bunch of tools and can specialize and distribute the fruits of their expertise without requiring others to be experts themselves.
Perhaps it's not true that on average we know more e.g. math now than we did 100 years ago, because there are so many more people. I believe we are nonetheless much better at teaching and learning now.
It's more than possible that all of this growth will be our downfall, and that that will regulate our growth, however.
It would do little, I think. Not because people get less smart with age, but because with years they establish themselves in their field, and become more conservative and less willing to shake the status quo.
I normally buy into this sort of logic, but there's a fundamental difference. We experience the world in a way that recognizes beavers' and birds' limits, whereas they do not. We can modify ourselves and our environment in ways that change our limits. Perhaps if the world is a simulation, then there are hard limits, as we are but bits in a computer, so to speak, but even then it's not certain: we could become aware of the world outside the simulation and learn to manipulate it through I/O mechanisms.
Average humans struggle with calculus because we have instructed average humans that calculus is hard. If we taught it to 12 year olds as a routine matter, average 12 year olds would know calculus.
You can teach smart (not average) 12-year-olds the basic rules for computing a derivative or a primitive function, but I doubt they are capable of distinguishing between, say, a continuous and a uniformly continuous function. Which is actually pretty important when trying to reason your way around calculus.
> strange to me how nobody questions why beavers only make dams a certain size, birds only make nests and don't go beyond that in complexity.
Unlike humans, beavers and most species of birds don't work cooperatively, which means they can't separate the workload needed for survival (e.g. one group hunts, one group builds dams, one group does childcare).
We'll reach the stars through the life sciences. Future humans will become space- and time-adapted. Hardened against radiation. Metabolism so slow that years will feel like minutes.
I'm not so sure that is the case, simply for economic and social reasons. Climate change is a much more tractable and immediate problem, yet technological developments and their implementations still seem to be moving too slowly to matter at the moment.
Solar has dropped 50-75% in cost in the last decade, and accounts for 10x more wattage. Battery capacity has doubled in that time. Wind energy capacity has doubled. Geothermal capacity is 1.5x. Electric cars are 4x more common than they were 5 years ago. Carbon sequestration has advanced at a technological level, although production hasn't seen serious advances (probably because renewable energy produces a profitable resource, while sequestering just exchanges money for fighting climate change).
If that's not enough to make a difference, it's because we started too late and the problem is too large, not because technological development is too slow. Admittedly, nuclear could have done the job already, and the issue there is social.
If human lifespan technology moved at half the climate change technology speed, we'd have 25 extra years per decade and be effectively immortal today.
It seems to older me that punctuated equilibrium is some kind of natural law.
Incremental progress may be ideal. Alas, whatever forces there may be that try to preserve the current equilibrium fight off change. Until the compulsion to change overwhelms the system.
Lather, rinse, repeat.
So when humanity finally goes carbon negative, it'll be despite the opposition, because they couldn't defend the status quo any longer. Then all that bottled up change will be like a dam bursting.
We can send persons without sending humans. With a good enough brain-computer interface, we should be able to duplicate our brain contents to a digital medium, which we know can travel to interstellar space and beyond.
Complex computers break down too. It might well be true that any computer capable of approximating human intelligence is even more fragile than a normal human.
I don't think we are currently capable of building a computer system (and that includes the power system for running it) that would last a few hundred years without any maintenance, even here on the planet. Or it would at least be very non-trivial to build.
We currently have at least two functioning computers in interstellar space that are 44 years old. I think we are already at a point where we can make centuries-lasting computers.
We are far away from such an interface, and space travel still takes far too long. When the first probe reaches another galaxy, mankind is probably already gone, or we are back in post-war dark ages.
What's the purpose of consciousness on this planet? Purpose is not necessary, though one could say discovery is a purpose: boldly go where no man has gone before.
I wonder if people will view it as sufficient that a digital copy of 'them' (or at least something identical to them at the point of copying) exists, despite their original biological minds eventually perishing.
It excites me to think about discovering the origin of our consciousness and being able to transfer that.
Ya, that'd suck. But consider: tardigrades are pretty tough, and elephants have 10x more cancer-fighting genes than us apes.
Pretty soon, parents will be picking their kid's eye color and temperament. For better or worse. Surely future humans will become a great deal hardier than ourselves.
> Sometimes I find it a little depressing that (at least through my layman's eyes) we are nearing a technological plateau and that more research into physics is unlikely to get us to a world described in traditional science fiction with FTL drives and large metal spaceships that can take you from planet to planet.
FTL would be the first big project of its kind. After that there will be movies, explanations, stories, expectations, all that stuff. And to some degree it's already out there; we have those things now for travel that's not faster than light.
But until somebody puts something together for real, FTL is just a whole lot more sexy.
I've read a couple scifi stories about sophisticated alien civilizations with FTL drives who were then shocked when they found out humans were just folding space and had instantaneous travel. HFY! Obviously it's fiction, and who knows if it'll ever happen or if it's possible - but my point is that we don't know what we don't know, and it could be way cooler than we can even imagine.
Well, using that analogy, I would say it took us indeed great effort to reach it, but now we have vast land to colonize, meaning applying all that groundbreaking research to everything. There are so many more technologies available than just what you can buy on the market.
Sci-Fi is very possible.
edit: oh and about FTL:
I know I do not really understand quantum physics and co., but I think I understand that no one really understands it yet. So I do not expect FTL in my lifetime, but I would not rule it out.
Disagree that we are reaching a technical plateau at all. Maybe in some parts of particle and AMO physics, but cosmology continues to advance and we are continuing to learn a lot.
Don't be. Science usually has plateaus, but if we've learned anything from history, it's that there are always mountains ahead. The problem is we're searching in the dark. A flat spot doesn't mean we're at the top. In fact, we even know there are a lot of mountains ahead; we just aren't sure how to climb them yet or what we'll find along the way. But that's no reason not to climb them.
It might help if we had literature fleshing out some ideas of how alternatives could work, i.e., how we could become a species more in harmony with a large biosphere, à la Jim Henson's Dark Crystal. Though it'd have to be a human way of life.
> a species more in harmony with a large biosphere
Garden Earth.
The biggest cultural change for attaining "sustainability" is metaphoric, from extraction to management. Maybe somewhat ironically, proponents should go all Old Testament. Stewards of the Earth and so forth.
Many of our prior cultures had at least some form of this. I don't know when or why we stopped being so. Maybe due to the Enlightenment and then Industrialization.
I vividly remember reading René Descartes as a kid and being shocked by his violent language and metaphors. Stuff like "We must wrest Nature's secrets and make her submit to our will" (paraphrasing, from memory).
If we can get good enough at bioengineering, a 50,000 year flight to another star using conventional propulsion might not be such a big deal.
The seeming requirement of FTL to explore the universe is 100% a function of our short life span. If we can't make spacecraft go faster we have to make ourselves last longer.
Food can be recycled pretty effectively, and if were that good at biotech I assume we could improve on the current state of the art.
They already recycle water very effectively on the ISS. It's the machine that "turns yesterday's coffee into today's coffee."
Of course if we were that good at biotech we could probably hibernate a good chunk of the flight time too. Might be necessary to wake up periodically to reset the body, but you could probably hibernate most of the duration. Maybe you'd do it with some kind of weird circadian cycle with extremely elongated sleep periods, sleeping like 10X-100X as long as you are awake. During each wake period you check to make sure everything is working properly.
You would not need a Bussard ramjet for the long-duration flights I'm thinking of. A nuclear thermal rocket could get you a good deal past solar system escape velocity. Nuclear pulse propulsion could get you up to at least single-digit percentages of the speed of light if you didn't mind a little boom-boom. Then you just cruise along on an interstellar transfer orbit until you do a retro-burn to enter the destination star system a few tens of thousands of years later. These are all technologies that are already feasible at least on paper. No new physics is needed.
We would only need small amounts of food if we could efficiently recycle it. Right now, we use plants/animals and solar energy to upcycle our waste products into food. However, there are no physical reasons we couldn't use electricity and managed bioreactors to do that instead.
I think the focus on ECS when talking about data-oriented design largely misses the point of what data-oriented design is all about. Focusing on a _code_ design pattern is the antithesis of _data_-oriented design. Data-oriented is about breaking away from the obsession with taxonomy, abstraction and world-modeling and moving towards the understanding that all software problems are data transformation problems.
It's that all games essentially (and most software in general) boil down to:
transform(input, current_state) -> output, new_state
Then, for some finite set of platforms and hardware there will be an optimal transform to accomplish this and it is our job as engineers to make "the code" approach this optimal transform.
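That "one big transform" framing can be sketched concretely (Python here purely for brevity; the state shape and field names are made up for illustration):

```python
def transform(input_events, state):
    """One tick of a game: consume input and the current state,
    produce this frame's output and the next state. Pure function:
    no hidden mutation, everything flows through the signature."""
    new_state = {**state, "x": state["x"] + input_events.get("dx", 0)}
    output = {"draw_at": new_state["x"]}
    return output, new_state

# One frame: player at x=1, input says "move right by 2".
output, state = transform({"dx": 2}, {"x": 1})
```

Everything else — systems, components, schedulers — is implementation detail in service of computing this function fast enough, frame after frame.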
> Data-oriented is about breaking away from the obsession with taxonomy, abstraction and world-modeling
Something about this does not sit well with me.
Data is absolutely worthless if it's generated on top of a garbage schema. Having poor modeling is catastrophic to any complex software project, and will be the root of all evil downstream.
In my view, the principal reason people hate SQL is because no one took the time to "build the world" and consult with the business experts to verify if their model was well-aligned with reality (i.e. the schema is a dumpster fire). As a consequence, recursive queries and other abominations are required to obtain meaningful business insights. If you took the time to listen to the business explain the complex journey that - for instance - user email addresses went down, you may have decided to model them in their own table rather than as a dumb string fact on the Customers table with zero historization potential.
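To make that schema point concrete, here is a minimal sketch of the historized-emails idea using Python's built-in sqlite3 (the table and column names are hypothetical, just illustrating the shape):

```python
import sqlite3

# Hypothetical schema: emails historized in their own table,
# rather than a dumb string column on Customers.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE customer_emails (
        customer_id INTEGER REFERENCES customers(id),
        email       TEXT NOT NULL,
        valid_from  TEXT NOT NULL,
        valid_to    TEXT             -- NULL means "current address"
    );
""")
db.execute("INSERT INTO customers VALUES (1, 'Ada')")
db.execute("INSERT INTO customer_emails VALUES "
           "(1, 'old@example.com', '2020-01-01', '2023-06-01')")
db.execute("INSERT INTO customer_emails VALUES "
           "(1, 'new@example.com', '2023-06-01', NULL)")

# "What is this customer's current address?" is now a trivial query,
# and the full history is still there — no recursive-query abominations.
current = db.execute(
    "SELECT email FROM customer_emails "
    "WHERE customer_id = 1 AND valid_to IS NULL"
).fetchone()[0]
```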
Imagine if you could go back in time and undo all those little fuck ups in your schemas. With the power of experience and planning ahead, you can do the 2nd best thing.
You're right, when I mentioned "taxonomy, abstraction and world-modeling" I meant as it pertains to code organization in the traditional OOP/OOD sense where it's generally about naming classes, creating inheritance hierarchies, etc. Data-oriented design is _absolutely_ concerned with the data schema. I would, however, disagree that the focus should be on "building the world" with your schema. To me this means creating the schema based off of some gut/fuzzy feeling you get when the names of things all end up being real world nouns. To me creating a good schema is less about world building than it is about having the exact data that you need, well normalized and in a format that works well with the algorithm you want to apply to it.
I don't think ECS necessarily means a "build the world" approach. I think it's best kept at the level of a data structure with some given set of operations: create entity, destroy entity, add component to entity, remove component from entity, get component on entity, and the big one -- query / iterate through component combinations on all entities that have them.
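A minimal sketch of that operation set (Python for brevity; the names are illustrative, and a real implementation would care a great deal about storage layout):

```python
import itertools

class ECS:
    """Toy entity store exposing exactly the operations listed above."""
    def __init__(self):
        self._ids = itertools.count()
        self._components = {}          # entity id -> {component name: value}

    def create_entity(self):
        eid = next(self._ids)
        self._components[eid] = {}
        return eid

    def destroy_entity(self, eid):
        del self._components[eid]

    def add(self, eid, ctype, value):
        self._components[eid][ctype] = value

    def remove(self, eid, ctype):
        del self._components[eid][ctype]

    def get(self, eid, ctype):
        return self._components[eid][ctype]

    def query(self, *ctypes):
        """The big one: iterate all entities having every named component."""
        for eid, comps in self._components.items():
            if all(c in comps for c in ctypes):
                yield (eid,) + tuple(comps[c] for c in ctypes)

world = ECS()
e = world.create_entity()
world.add(e, "Position", [0.0, 0.0])
world.add(e, "Velocity", [1.0, 2.0])

# A "system" is just a loop over a query.
for eid, pos, vel in world.query("Position", "Velocity"):
    pos[0] += vel[0]
    pos[1] += vel[1]
```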
Just like arrays and structs, it's yet another data structure to be used in the general data-oriented approach, one that becomes useful because those creation / destruction patterns come up in games and adding and removing components is a great way to express runtime behavior as well as explore gameplay.
The "focus" on ECS may just come from it being an interesting space as of late vs. arrays, structs and for loops that have been around for ever, but it's mostly just an acknowledgement of common array, struct and for loop patterns that arise. There's also a lot out there about the systems part and scheduling and event handling but I think it's almost best to start out with simple procedural code (that then has access to the aforementioned data structure) and let patterns collect pertinent to the game in question.
One big aspect I personally dig is if you establish an entity / data schema you get scene saving, undo / redo, blueprint / prefab systems that are all quite useful and basically necessary if you want to collaborate with artists and game designers on a content-based game, and empowers them to express large spaces of possibilities without editing the code.
People hate SQL because it is truly an incredibly bad language. Poor to no ability to abstract, no composability, a grammar so convoluted it makes C++ look logical, and so on. The relational model is a beautiful thing but its power is obscured by how awful the main gateway to it is.
Schema design is THE problem data oriented programming is focused on. It's saying, let's design our data structures in memory and on disk such that they exist to solve the problem at hand. I think you're talking about the same thing.
Or to circle back again to Fred Brooks, time and time again:
"Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowcharts; they'll be obvious."
> moving towards the understanding that all software problems are data transformation problems.
But this understanding is fundamentally, deeply wrong, in the same way that civil engineering based approaches to software engineering are wrong for most software applications.
That is: yes, all software systems are data transformation systems, but most software problems are not “how do I produce the system most narrowly tailored to the present requirements” but more often “how to engineer a system for success with the pace and kind of change that we can expect over time in this space”.
(Now, games, particularly, are both pushing the limits of hardware and fairly static, so making them narrowly-tailored, poorly adaptable static works is often not wrong. But that doesn't generalize to all, or even most, software.)
That is how you think about the software you write. System evolution is only one aspect. Most patterned OO codebases I have come across were *not* engineered for evolution. Sure there were some classes you could implement or replace, but the complexity was not paid back later.
Design principles can be applied to all implementation mechanisms.
This fits my gut feeling every time I see an ECS system: that videogame design has gotten stuck in a local-maximum abstraction/pattern. Often what they really want is a Monadic abstraction of a data/state transformation process, but they are often stuck in languages ("for performance reasons") that make it hard to impossible to get good Monadic abstractions. So instead they use the hammers they have to make nails of the abstractions they can get. ECS feels to me like a strange attempt to build Self-like OO dynamic prototypes in a class-based OO language, and that's almost exactly what you would expect for an industry only just now taking baby steps outside of C/C++.
C# has some good tools to head towards that direction (async/await is a powerful Monadic transformer, for instance; it's not a generic enough transformer on its own of course, but an interesting start), but as this article points out most videogames work in C# today still has to keep the back foot in C/C++ land at all times and C/C++ mentalities are still going to clip the wings of abstraction work.
(ETA: Local maxima are still useful of course! Just that I'd like to point out that they can also be a trap.)
The quotes imply that this is a bad reason, but in soft realtime systems you often want complete control of memory allocation.
Even in the case of something like Unity--in order to give developers the performance they want--they've designed a subset of C# they call high performance C# where memory is manually allocated.
In most cases if you're using an ECS, it's because you care so much about performance that you want to organize most of your data around cache locality. If you don't care about performance, something like the classic Unity Game Object component architecture is a lot easier to work with.
Yea, you're right, I think the previous poster seriously underestimates videogames as performance critical (and performance consistent!) apps. In the modern days of desktop Java and C# (and even more in web dev) the vast majority of coders just don't come across the need to "do everything you need to do" in 33ms or less, consistently.
I'm not implying it is a bad reason with the quotes, I'm trying to imply that it is a misguided reason (even if it has good intentions).
The "rule" that C/C++ is always "more performant" is just wrong. It's a bit of a sunk cost fallacy that because the games industry has a lot of (constantly reinvented) experience in performance optimizing C/C++ that they can't get the same or better benefits if they used better languages and higher abstractions. (It's the exact same sunk cost fallacy that a previous games industry generation said C/C++ would never beat hand-tuned Assembly and it wasn't worth trying.)
In Enterprise day jobs I've seen a ton of "high performance" C# with regular garbage collection. Performance optimizing C# and garbage collection is a different art than performance optimizing manually allocated memory code, but it is an art/science that exists. I've even seen some very high performance games written entirely in C# and not "high performance C#" but the real thing with honest garbage collection.
(It's a different art to performance optimize C# code but it isn't even that different, at a high level a lot of the techniques are very similar like knowing when to use shared pools or deciding when you can entirely stack allocate a structure instead of pushing it elsewhere in memory, etc.)
The implication in the discussion above is that a possible huge sweet spot for a lot of game development would actually be a language a lot more like Haskell, if not just Haskell. A lot of the "ECS" abstraction boils away into the ether if you have proper Monads and a nice do-notation for working with them. You'd get something of the best of both worlds that you could write what looks like the usual imperative code games have "always" been written in, but with the additional power of a higher abstraction and more complex combinators than what are often written by hand (many, many times over) in ECS systems.
So far I've not seen any production videogame even flirt with a language like Haskell. It clearly doesn't look anything like C/C++ so there's no imagination for how performant it might actually be to write a game in it (outside of hobbyist toys). But there are High Frequency Trading companies out there using Haskell in production. It can clearly hit some strong performant numbers. The art to doing so is even more different from C/C++ than C#'s is, but it exists and there are experts out there doing it.
Performance is a good reason to do things, but I think the videogames industry tends to especially lean on "performance" as a crutch to avoid learning new things. I think as an industry there's a lot of reason to avoid engaging more experts and expertise in programming languages and their performance optimization methodologies when it is far easier to train "passionate" teens extremely over-simplified (and generally wrong) maxims like "C++ will always be more performant than C#" than to keep up with the actual state of the art. I think the games industry is happiest, for a number of reasons, not exploring better options outside of local maxima and "performance" is an easily available excuse.
I think you may be indexing a bit into your exposure and painting the industry in a broad brush.
I've seen impressive things done with Lua, from literate AI programming with coroutines to building composable component-based language constructs instead of standard OOP. You have things like GOAL[1] which ran on crazy small systems (the Lua I saw ran in a 400kb block as well).
On performance, data oriented design and efficient use of caches is the way you get faster. I've done it in Java, I've done it in C#, I've done it in Rust and C++. Certain languages have better primitives for data layout and so you see gamedev index into them. We used to do things like "in-place seek-free" loading where an object was directly serialized to disk and pointers were written as offsets that were fixed up post load. Techniques like this easily net 10-30x performance benefits. It's the same reason database engines run circles around standard language constructs.
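The "pointers written as offsets, fixed up post load" idea can be shown in miniature (Python here only to keep the sketch short; in practice this is done over raw C structs read from disk in a single bulk I/O, which is where the big wins come from):

```python
import struct

# Hypothetical on-disk format: two 8-byte nodes, each holding a value
# and the byte offset of the node it points to (0xFFFFFFFF = "null"),
# instead of a pointer. The whole blob is read in one seek-free load.
NULL = 0xFFFFFFFF
blob = struct.pack("<IIII", 42, 8,      # node at offset 0: value 42 -> node at offset 8
                             7, NULL)   # node at offset 8: value 7  -> null

def load(buf):
    """Single fix-up pass: convert stored offsets into live references."""
    nodes = {}
    for off in (0, 8):
        value, next_off = struct.unpack_from("<II", buf, off)
        nodes[off] = {"value": value, "next_off": next_off}
    for n in nodes.values():
        n["next"] = nodes.get(n["next_off"])   # offset -> reference
    return nodes[0]

root = load(blob)
```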
You are correct that I am using a broad brush, but so far it is a broad brush sort of conversation. I realize I'm a (bitter?) cynic at this point and don't have a lot of respect for the videogames industry as a whole from a technical perspective, because it is an industry that prides itself on reinventing wheels, not sharing a lot of efforts between projects, and not trusting nor retaining expertise in the long run. I realize there are a lot of great and novel approaches such as the ones you mention (I appreciate that), but so much of the novelty is siloed and a lot of what I see as a certain kind of outsider is the tiniest slices of things that escaped the siloes such as Unity and Unreal. I realize they aren't accurate to the state of the technical art in some siloes, but these days given the number of games using one or the other of those two common engines today it certainly reflects the "state of the technical median".
> It's the exact same sunk cost fallacy that a previous games industry generation said C/C++ would never beat hand-tuned Assembly and it wasn't worth trying.
C/C++ hasn't beat the performance of hand-tuned assembly - it has simply gotten close enough that the cost of hand-tuned assembly is not worth it in most cases.
Seconding this - some esoteric JVM garbage collector tuning is required to build a high performance Java (or Clojure, etc.) system, but it can be done.
It's arguably significantly less work to learn how to tune the GC and then optimizing it for your situation than it is to deal with manual memory allocation and all of its fallout.
If memory serves, you (only) need two or three times the memory for a GC that works well (low pause) compared to manual allocation, so I'm surprised that developers of PC games didn't switch to GCs 'en masse'..
And no, I'm not joking: I work in C++ and I know exactly how annoying memory errors can be.. Thanks a lot valgrind|ASAN developers!
If your optimization goal is "use the least memory possible," then sure, manual memory allocation is the way to go. I was addressing a different optimization goal: the "high performance" case, meaning approximately "high throughput, low latency operation."
There is a common misconception that GC invariably precludes the construction of a "high performance" system, which is not true. If your use case allows you to not care as much about larger memory consumption -- 2x to 3x does seem like a reasonable first approximation of "larger" -- then GC is indeed a viable option for building "high performance" systems.
This case is not uncommon. Not everyone is targeting a memory constrained console or embedded system.
In many (though of course not all) cases, the tradeoff is well worth it -- consume more memory at runtime, spend some time tuning the GC, and in exchange developers can ship a product faster, by having to spend significantly less time dealing with manual memory allocation.
>If your optimization goal is "use the least memory possible," then sure, manual memory allocation is the way to go. I was addressing a different optimization goal: the "high performance" case, meaning approximately "high throughput, low latency operation."
Ignoring the amount of memory used, GC tuning a managed language doesn't give you the flexibility to control memory layout needed for maximum cache locality.
>If your use case allows you to not care as much about larger memory consumption -- 2x to 3x does seem like a reasonable first approximation of "larger" -- then GC is indeed a viable option for building "high performance" systems.
Not ignoring amount of memory used. In the context of this thread--video games specifically "high performance" video games--2x to 3x is almost never going to be acceptable.
I can tell you why this switch didn't happen: 2x to 3x the memory usage is just absolutely abysmal for a process that is barely fitting into memory as it is. Most of the games that run up against these constraints are multiplatform titles targeting consoles that are notoriously stingy with main memory to reduce cost.
>The "rule" that C/C++ is always "more performant" is just wrong.
Who said it's a rule? What C/C++ gets you is the ability to manually allocate memory without jumping through hoops.
> Performance optimizing C# and garbage collection is a different art than performance optimizing manually allocated memory code, but it is an art/science that exists. I've even seen some very high performance games written entirely in C# and not "high performance C#" but the real thing with honest garbage collection.
Performance optimizing C# with garbage collection for high performance soft realtime systems (I've done it) relies on tricks like object pooling to avoid triggering GC along with avoiding many of the more advanced language features. Even then you don't get the same level of control. I'm also almost completely certain that the high performance C# games you're talking about aren't using C# for the engine, but feel free to provide examples so I can take a look.
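The pooling trick mentioned above is simple to sketch (Python for brevity; the same shape in C# is what keeps per-frame allocations, and therefore GC pauses, off the hot path — the `BulletPool` name is invented for illustration):

```python
class BulletPool:
    """Preallocate up front; acquire/release reuses storage instead of
    allocating fresh objects every frame."""
    def __init__(self, size):
        self._free = [{"x": 0.0, "y": 0.0, "live": False} for _ in range(size)]
        self.fresh_allocations = 0     # how often we fell back to real allocation

    def acquire(self):
        if self._free:
            b = self._free.pop()
        else:                          # pool exhausted: the case you tune to avoid
            b = {"x": 0.0, "y": 0.0, "live": False}
            self.fresh_allocations += 1
        b["live"] = True
        return b

    def release(self, b):
        b["live"] = False
        self._free.append(b)

pool = BulletPool(2)
a = pool.acquire()
b = pool.acquire()
pool.release(a)
c = pool.acquire()    # reuses a's storage; no new allocation happened
```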
If your game (or parts of your game) doesn't need the performance that comes with a higher degree of memory layout control, then by all means use whatever tools you want to.
I've written game logic in C#, F#, Ruby, Haxe, Python, Lua, Java, JavaScript and Elixir.
>The implication in the discussion above is that a possible huge sweet spot for a lot of game development would actually be a language a lot more like Haskell, if not just Haskell.
There almost certainly is for game logic. Many modern game engines provide higher level scripting languages.
However, if what you are working on is in that sweet spot, you likely didn't need an ECS to begin with and a classic component architecture would have probably been a lot easier to deal with.
>But there are High Frequency Trading companies out there using Haskell in production.
HFT is not game dev. "Performance" in HFT doesn't mean the same thing as performance in games.
I haven't used Haskell specifically, but I've toyed with using Elixir for gamedev. Its reliance on linked lists makes it extremely difficult to iterate quickly enough. There are work arounds of course, but the work arounds remove most of what is nice about Elixir in the first place.
>Performance is a good reason to do things, but I think the videogames industry tends to especially lean on "performance" as a crutch to avoid learning new things. I think as an industry there's a lot of reason to avoid engaging more experts and expertise in programming languages and their performance optimization methodologies when it is far easier to train "passionate" teens extremely over-simplified (and generally wrong) maxims like "C++ will always be more performant than C#" than to keep up with the actual state of the art. I think the games industry is happiest, for a number of reasons, not exploring better options outside of local maxima and "performance" is an easily available excuse.
The average engine coder writing high performance code in C++ isn't a "passionate teen". They are experienced software engineers who want to stick as close to the metal as they feasibly can.
The games industry (outside of AAA games) also has an extremely low barrier to entry, and it's something that nearly every programmer has thought about doing at some point--if Haskell turns out to be a fantastic language for making games, it will almost certainly happen sooner or later.
> The average engine coder writing high performance code in C++ isn't a "passionate teen". They are experienced software engineers who want to stick as close to the metal as they feasibly can.
Statistically the median age in the games industry is 25 and always has been. It's a perpetually young industry not known for retaining experienced talent. I know that statistically the median doesn't tell you a lot about how long of a tail there is of senior talent, you need the standard deviation for that, but given what I've seen as mostly an outside observer with a strong interest, the burn out rate in the industry remains as high as ever and senior developers with decades of experience are more likely an anomaly, an exception that proves the rule, than the norm. In terms of anecdata all of the senior software developers I've ever followed the careers of on blogs and/or LinkedIn are all in management positions or entirely different industries after 30. I realize my sample size is biased by the people I chose to follow (for whichever reason) and anecdata is not data, but statistically it's really hard for me to square "experienced software engineers" with "in practice, it looks like no one over 30".
>Statistically the median age in the games industry is 25 and always has been.
Where are you getting this information from? The only hard data I can find is from self selected survey responses, but this survey from IGDA shows only 10% of employed game developers are under 25 [1]. My guess is that (as you've acknowledged is possible) there's some serious selection bias going on. You said you have an interest in burn out rate, so I'm guessing you're more likely to follow/notice game devs who discuss this topic. This group is more likely to be suffering from burn out I'd wager.
Another poster already mentioned that engine devs (the ones writing most of the C++) tend to be older than the industry average.
In game dev, there has been a really serious split between engine development and game (logic and content) development. Most of the talented and experienced programmers seem to drift towards engine development. That's where the hard problems are and where these guys can have the most impact. As a bonus, engine development cycles are not so closely coupled to game release dates anymore, so crunch is less of an issue in engine teams.
> Focusing on a _code_ design pattern is the antithesis of _data_-oriented design
Doesn't the former enable the latter? Ideally, language (both human and machine) would have the semantics needed to represent all transforms, but that's not the case. Code you rely on, since none of it is written in isolation, needs to enable you to implement data-oriented design should you so choose.
Also, I don't think pointing out that 'all games are essentially...' is particularly useful. It's true, no question, but that doesn't mean it's the most useful mental model for people to use when developing software. Our job as engineers is to make software that functions according to some set of desires, and those desires may directly conflict with approaching an optimal transform.
Not necessarily. ECS is a local maximum when developing a general purpose game engine. Since it's general purpose it can do nothing more than provide a lowest common denominator interface that can be used to make any game. If you are building a game from scratch why would you limit yourself to a lowest common denominator interface when there's no need? Just write the exact concrete code that needs to be there to solve the problem.
> Our job as engineers is to make software that functions according to some set of desires, and those desires may directly conflict with approaching an optimal transform.
All runtime desires of the software must be encoded in the transform. So no software functionality should get in the way of approaching the optimal transform. What does get in the way of approaching the optimal transform is code organization, architecture and abstraction that is non-essential to performing the transform.
> Just write the exact concrete code that needs to be there to solve the problem.
Good luck with that when the exact code to solve the problem is not the exact code the next week, because the problem has changed or evolved.
Not to suggest an ECS is the answer, but this line of thinking is reductive to the realities of creating a piece of art. It's not a spec you can draw a diagram for and trust will be basically the same. It's a creature you discover, revealing more of itself over time. The popularity of the ECS is because it provides accessible composition. It's not the only way of composing data but being able to say "AddX", "RemoveX" without the implementation details of what struct holds what data and what groupings might matter is what makes it appealing.
I think there’s two orthogonal things being conflated by you. Flexibility of a solution and how general the solution is.
What you’re basically saying is a solution should be flexible to change because making a game requires trial and error. I totally agree with that.
Using a general solution is one path to flexibility but it does come with a cost associated. It’s flexibility built on a tower of complexity and if you look at a modern ECS implementation that is performant it’s actually quite a lot of complexity. You’re also reducing flexibility in the sense that these sort of solutions generally have preferred patterns you need to fit your game design into. So you end up introducing a learning, maintenance and conceptual burden into the project you might not need.
OTOH if you have a specific problem you can write a specific solution for you will end up with less code, hopefully in a conceptually coherent form. That in itself offers flexibility. Simple code you can easily replace is often more flexible than complex code you need to coax into a new form.
The key is to recognise whether your problem is specific or general, and where you need flexibility.
These architectural patterns are fun to argue over and obsessed over by armchair game developers but are a trap if you’re trying to make a game rather than a general purpose game engine.
Which isn’t to say you don’t want some framework underlying things for all sorts of mundane reasons. But most games could get away with that being an entity type that gets specialised rather than anything more complex.
> I think there’s two orthogonal things being conflated by you.
> It’s flexibility built on a tower of complexity
Agreed with the 'flexibility on a tower of complexity', 100%! :) I was trying to not appear too dogmatic by describing it as 'accessible composition'; generally any solution that is 'accessible' is also broad enough that it has as many flaws as benefits, and an ECS definitely isn't an exception.
> These architectural patterns are fun to argue over and obsessed over by armchair game developers but are a trap if you’re trying to make a game rather than a general purpose game engine.
Again, agreed. Speaking from experience as an iterator and rapid prototyper who has used an ECS for years, and has been bitten by the complexity but hasn't been able to beat the flexibility of being able to just write something like `entity->Add<ScaleAnimation>(...)`, `entity->Add<DestroyAfter>(...)`, `entity->Add<Autotranslate>(...)`, `entity->Add<Sprite>(...)` to quickly and easily create a thing that looks nice, pops in smoothly, moves effortlessly, destroys itself thoughtlessly. It lets you move between ideas quickly and then you can pivot to addressing concerns if any show up.
Yeah for sure, I love a good composable approach to entity creation as well particularly when it’s specified in data rather than code. The basic framework for getting that going is extremely lightweight which is fantastic.
> If you are building a game from scratch why would you limit yourself to a lowest common denominator interface when there's no need?
There is a need: the limits of the human mind. Nobody can model an entire (worthwhile) game in their head, so unless you plan on recursively rewriting the entire program as each new oversight pops up, you aren't going to get anywhere near optimal anyway.
> If you are building a game from scratch why would you limit yourself to a lowest common denominator interface when there's no need? Just write the exact concrete code that needs to be there to solve the problem.
Coming from the realm of someone who has mostly swum in the OO pool their career, I struggle to understand how a concrete implementation of something like a video game wouldn't spiral out of control quickly without significant organization and some amount of abstraction overhead. That said, I have found ECS-type systems to be so general purpose that you end up doing more to please the ECS design itself than focusing on the implementation.
Do you have any examples of games and/or code that are written in more of a data oriented way? I'd really love to learn more about this approach.
While stylistically I don't necessarily agree with him all the time, Casey Muratori's Handmade Hero (https://handmadehero.org/) is probably the most complete resource in terms of videos and access to source code as far as an 'example' goes.
It's really annoying how many people misunderstand the term 'data oriented design'. Usually to mean something like 'not object oriented programming'. If your data was inherently hierarchical and talking about animals that meow or moo, go ahead and implement the textbook OO modeling.
I do think that in practice a hierarchical ontology like that is still best not modeled as a language-level hierarchy because the language inheritance / hierarchy concepts are often not the exact semantic you want, IME. Esp. if it then ties to static types, since you then can't change the hierarchy at runtime. I think even with a data-oriented approach to a hierarchy -- your code isn't necessarily hierarchically organized, it just handles data that happens to express a hierarchy. And you want to be in control of the semantics of said hierarchy with more freedom (and explicitness) than the language-level hierarchy gives you -- so you want your own code that interprets the hierarchy expressed in the data and performs your desired semantics. This also allows artists and narrative or gameplay / level designers to go see and edit the hierarchy and add elements to it.
An example is the prefab hierarchy you get in Unity, which is expressed through the data (prefabs and their relationships). (Note: I mean specifically the prefab inheritance hierarchy, not the transform spatial hierarchy -- the former has more overlap with the "is a" relationships). The code processing this hierarchy could've just been plain C code that parses the files and maintains an in-memory set of structures about them, even. You then get to define how properties inherit, what overriding means, etc. yourself.
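A toy version of interpreting such a data-expressed "is a" hierarchy yourself, rather than leaning on language-level inheritance (the prefab names, fields, and override rule here are all invented for illustration — the point is that *you* define what overriding means):

```python
# Hierarchy lives in data: each prefab names its parent and its own props.
# Designers can edit this without touching code, and it can change at runtime.
prefabs = {
    "Enemy":     {"parent": None,    "props": {"hp": 10, "sprite": "enemy.png"}},
    "FastEnemy": {"parent": "Enemy", "props": {"speed": 9}},
}

def resolve(name):
    """Walk the 'is a' chain; a child's properties override its parent's.
    This override semantic is ours to choose — not the language's."""
    node = prefabs[name]
    base = resolve(node["parent"]) if node["parent"] else {}
    return {**base, **node["props"]}

fast = resolve("FastEnemy")
```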
Totally agreed. The fact that some of these concepts are embedded into the language design (say for C++) are minor conveniences at best - when they almost perfectly line up with the data you have - but just get in the way most of the time.