I also worked on a number of Flash projects in its heyday. I agree that there aren’t really any close equivalents to its feature set today, but there are some tools like Rive and Lottie that I’d consider modern day reimaginings for many multimedia workflows.
As a PM, I factor this into prioritization. An engineer who is passionate about a product will produce better engineering output, higher morale, and a feeling of being heard. A motivated, bought-in engineering team is important when it comes to building the ‘high impact’ products.
Prioritization isn’t always black and white.
These qualitative factors matter and shouldn’t be ignored. As always, you weigh them against other trade-offs.
I’ve been a PM, an engineer, and a designer, and this sort of patronizing attitude sucks.
Look, at the end of the day you should be cultivating fellow thought leaders, because when you grow up you learn that your priorities are more often than not just your own egotistical nonsense, and wrong. But you have a lot of phrases for cutting others down.
Sure, there were some buzzwords in there... But the actual core of the post wasn't patronising at all. You might need to take a step back and ask why your response to it was so strong.
Some people don't realize the value of something unless you show it to them. It's a risk for sure but honestly it keeps me sane vs trying to get 10 people aligned before starting something and then running out of time.
People will happily take credit for your work after it works.
Yep, an engineer has the power to directly influence the code. That's a strong power.
Sometimes just making a PR is enough and a good convincer in and of itself.
Use it sparingly, of course, and weigh the time it takes against just making the argument, but a PR is an artifact in the same way as convincing research, text, or a plot. Code can be part of the argument.
I once worked on an application that integrated with a third-party API. The way it did this was with a large, horrible client library that used a separate db to cache the data.
The data was then fetched from the main application and used to rebuild the pages (in the main db) based on this data once a day.
The library had lots of problems, and one day it stopped working. I was tasked with fixing it - we had the source code; it had been purchased from someone and copied into the repo. I spent most of the week, if not more, trying to figure out what was wrong, but I couldn't. What I did learn was that this library was some of the worst, most pointless code I'd ever seen.
So I told the team that I think I can rebuild this thing from scratch faster than I can fix this bug. The intermediate db is pointless and most of the library code is dead, the rest is garbage. I can make a simple little thing that does exactly what we need better, faster and easier.
Nope. No bueno, fix what we have. So I spent a few hours over the weekend, less than a workday, building the new solution. Come Monday I had it pretty much working, a few things needed to be done but it already supported the use case. The pages were built correctly, they had the necessary content but some things were a bit messed up, nothing difficult to fix.
Showed it to the team, said I want to use this and delete the old stuff - nope.
The only half-decent explanation I got was that the client had paid way too much money for this garbage library, and I guess the team lead didn't want to tell them we wanted to throw it out, or something like that.
Sigh. I worked at a shop that was spending months waterfalling a frontend to some background API calls. I finally got annoyed enough to spend a weekend actually implementing the thing as a Django app. There. Done.
I got my ass handed to me by management for not going through the proper processes.
I learned something that day: I never want to work somewhere that engineers serve the processes and not the reverse. There are some that are good and necessary: like “thou shalt deploy via CD and not SSH into prod to edit code”. There are others that only exist to serve bureaucracy, and those try my patience.
Yeah, depending on the particulars of a system. If you're at a startup and report to the CTO, that might be perfectly fine in an emergency. At a company with a few million users, almost certainly not. There's a spectrum of possibilities.
In an emergency, that sort of thing even happens at Google (more for their smaller services, and almost always in the form of auto-LGTM hot-fixes bypassing the normal checks rather than actual live-editing of a script or binary, but even that latter thing happens occasionally). There are checks and controls, but an emergency for a billion users is a big deal.
"I spent most of the week, if not more, trying to figure out what was wrong, but I couldn't. What I did learn was that this library was some of the worst, most pointless code I'd ever seen."
I would probably be skeptical if somebody made these statements. You don't know what's wrong, yet you declare the code to be pointless. Maybe you put a good effort into it, but I've heard "this is all crap, we need a rewrite" too many times. Most times they just didn't put the effort into understanding the current code. And the time to get to "pretty much working" is often only a fraction of what it takes to get to "totally done".
The problem was not that I didn't understand the code. I understood it just fine; it wasn't complicated, it was just bad and old. All it did was get some data from an API, change it somewhat, and store it in a db. Then a scheduled job would call a method which would get that data, change it a bit more, and return it, where it would be changed yet a bit more and stored as pages for the main web app.
There was no reason all these data mutations couldn't have been in one place instead of scattered all over. There was no reason to store the data in one db just to get it back out and store it in another db.

Someone said the third-party API was slow and unreliable, but I don't see how that's relevant - if the API is down then you don't get updated data, and it doesn't matter whether we have outdated data in an intermediate db. We already have that outdated data in our main db, and we'll get updates when the API starts working again.

During testing I had absolutely no issues with the performance of the API: it transferred all the data we needed in a completely acceptable amount of time, and this was just in a nightly scheduled job anyway, so even if it had taken a minute that would have been fine. But it didn't; it responded in milliseconds. I never noticed any unreliability on their side either, but if it had been unreliable, that would have been fine too. The app just wouldn't have gotten updates until it started responding again. Nothing can solve that problem.
I honestly can't remember what the actual problem was or how I fixed it in the end. The code had been in production for years and only received the minimum necessary amount of changes. Some dependency or something probably broke from years of nobody wanting to touch that huge piece of crap.
But that's not why I say it was bad and pointless. It was bad because whoever wrote it didn't know about XML parsing libraries and had implemented all of the parsing from scratch with string operations. We're not talking about real parsing here, with lexers and tokenizers and so on. We're talking about what you might expect if you gave a mediocre first-year CS student the task of parsing some specific XML. The db interaction was similarly overcomplicated and outdated, and the code itself was sloppy and full of old messes nobody had bothered to clean up.
All it did was get some data, store it, and make it available through some method calls, and for that there were something like 50k LOC, most of which was dead; most of the code still in use was that monstrosity of a home-rolled XML parser.
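To illustrate the point (a hypothetical sketch; the payload shape and field names are invented, not the actual API): a standard-library XML parser collapses that kind of hand-rolled string parsing into a few lines.

```python
# Hypothetical example of replacing hand-rolled string parsing with the
# standard library. The <article> structure here is made up.
import xml.etree.ElementTree as ET

payload = """
<articles>
  <article id="1"><title>Hello</title><body>World</body></article>
  <article id="2"><title>Foo</title><body>Bar</body></article>
</articles>
"""

def parse_articles(xml_text):
    """Return a list of dicts, one per <article> element."""
    root = ET.fromstring(xml_text)
    return [
        {
            "id": node.get("id"),
            "title": node.findtext("title"),
            "body": node.findtext("body"),
        }
        for node in root.iter("article")
    ]

print(parse_articles(payload))
```

No lexers or tokenizers required; the library handles escaping, nesting, and malformed input far more robustly than string slicing ever will.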
The things left to do on my new solution were trivial. Some of the columns had html tags and stuff like that in them, it just needed to be cleaned out where necessary. Some other stuff needed to be modified a bit. I did not skip it because it was hard, I skipped it because it was tedious and I didn't want to spend all the effort before I got the green light, which turned out to be a great decision because it didn't get the green light. And they probably still to this day waste man-hours on keeping that piece of crap running.
I guess the correct way to present this is something like "I know how to fix this in the short term but we should consider simplifying things because as far as I can tell the current code is much more complex than it needs to be".
I don't know the exact situation, but I just wanted to point out not to fall into the "I have looked at the thing for a little while. I don't understand it and I can't be bothered to understand it, because whoever wrote it was an idiot. We need a full rewrite with my favorite shiny tool. The rewrite will be easy" trap. I think that triggers a lot of experienced people.
But maybe you are right. That's also very possible.
I've been on both sides of this table multiple times, as the IC and as the Manager of an eager IC. Here's a list of all the reasons why I as your manager would also flat-out say No to this situation. (These are of course heavily tainted by my own recent experience of trying to coach a mid-level dev through a very similar problem)
- "Pretty much working" means all the fun stuff is done and the actual hard thing is left to wrap up. It's a useless estimate that only accounts for your coding work, which is usually the smallest amount of work performed on an integration feature like this.
- It's a rewrite so we've gotta do a full regression test on every piece of data that thing pulls back. Since it's old functionality it's not fully covered by our automated tests, so this goes to QA. Our QA team is overloaded, so this unauthorized, not-on-the-roadmap project now needs to jockey for priority with things that Marketing is literally making artifacts for _today_.
- "It's already built" isn't really a justification for a priority change, so now I'm in the awkward position of changing priorities for a non-roadmap task and justifying this to every single stakeholder who is respecting the process, or telling you it'll be 2 months minimum before QA can even think about it. Either way no one is happy and now I have to worry about you going rogue again and trying to work channels around me to get this thing shipped out of band.
- It's a full rewrite and going through manual QA, so it's nearly guaranteed that critical, but undocumented business rule fixes were missed. Somewhere in that library is a weird function holding up the world, but it was "obviously cruft" and left out. There's a good chance we won't find the issue until it has already polluted a ton of Prod data. That's why I won't let you do Developer QA. You've only been here a year and this service predates you, me, and the rest of the team, we literally have no context.
- If the client finds out we did a full rewrite, they too are going to do a full regression test on their end. Do you know the size of the shitstorm this is going to bring on us? Every single question, problem, feature change, bug, enhancement, communication, _everything_ we went through over the last XX years since we built this integration is going to resurface. I get to re-litigate every. single. thing. "Since you're working on our integration can we get XX, XX, and XXXX?" (each is a sprint's worth of dev time, minimum), "YYY isn't working, did you guys break it again?" (it's always been broken, but now someone gets to spend 3 hours in Datadog pulling logs to prove it).
- I've been using the "Rewrite This Library" and "Refactor That Service" projects as leverage to negotiate for more budget to bring on 2 more headcount so that we could actually do those rewrites with proper time and space. You talking about getting 80% done over a weekend has completely undermined the work I've put into this effort, and at the same time didn't remove the Refactor issue from my backlog. Now I will essentially have to shit-talk you in my own 1-on-1s in order to regain lost ground. "sfn42 is a decent developer but he just doesn't have a lot of context to what's happening outside his role. Needs more time in the oven before he gets the bump to Sr. Maybe I can pull him into more planning meetings so he can start growing in this area" -- congrats you just got invited to 6 hours of meetings a month regarding work you won't perform.
- In 6 months when our team is planning out some future work that's just way too much for the headcount & timeline we have, and you bring up "we could really use another Sr. Dev or two, any word on our headcount request?", I might reply politely with a "still no word if we can pull that off this quarter", but internally I'm wondering if the pain of bringing a new dev up to speed is less than the pain of working with you.
- Lastly, the most petulant reason: you were told No last week. I'm sorry you lost a weekend to this, but a No is a No, and I need you to understand that. Other things are happening at this company outside your purview.
Again, this is all drawn from my own experience. I had a mid-level dev show me a huge refactor he started on the weekend. He was convinced it was almost done, "just a few small things left" is an exact quote. However I knew that this part was literally the smallest bit of the effort. I was seeing at least 3 months of work across 4 departments before it would actually be Done, in Production, and working to our satisfaction.
If I had the space I would normally be just fine letting the young fella just experience that pain. Make him do the scheduling, put him on point for everything, and just let him spin on it for a month or so. I did not have that time and space, so instead we spent a few hours white boarding out the rest of what needed to happen, and thankfully he mothballed his project of his own volition.
This reply exudes professionalism and experience in the real world of development where it's not just code leaping from a developer's fingertips into prod. I was going to reply myself, but you covered nearly everything I was going to. Cowboy Coders, please read it carefully and reflect on it seriously.
You could also ask the developer to write comprehensive documentation and test cases, not only for the new code but also for the older code, to ensure the new one can replicate the bugs higher level systems depend upon.
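For what it's worth, that's essentially a characterization test: pin down the old behavior, quirks included, and require the replacement to reproduce it. A minimal sketch (the function names and the trailing-character quirk are invented for illustration):

```python
# Characterization test sketch: feed the same inputs through the legacy
# and replacement implementations and require identical output, bugs
# included. old_build_page / new_build_page are hypothetical stand-ins.

def old_build_page(record):
    # Legacy behavior, quirk included: drops the last character.
    return record["title"].upper()[:-1]

def new_build_page(record):
    # The rewrite must reproduce the quirk exactly, because pages
    # downstream may depend on it.
    return record["title"].upper()[:-1]

def check_parity(records):
    for record in records:
        assert new_build_page(record) == old_build_page(record), record

check_parity([{"title": "hello"}, {"title": "edge case "}])
```

The point is that the test encodes what the system *does*, not what anyone thinks it should do; any intentional behavior change then becomes a visible, reviewable test change.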
You have a lot of good points and some of it may have been applicable in my case.
But this was not complicated. I have underestimated refactors before, this was not one of those times. This was a simple little thing, just getting some data from point A to point B. It would have been easy to verify that the new solution generated the same pages (data in db tables) as the old one.
I didn't undermine anyone. I brought it up in a team meeting; I didn't take it to the department head. Sure, I had been told no, but that no was based on the assumption that I was wrong about being able to replace it easily. My weekend coding was simply to prove that it could be done, which I did, whether anyone believes me or not.
I really like your last paragraph. You didn't just say no, you walked through it with them and helped them see the problem. I am convinced that the only real reason this did not go through was that nobody else understood the problem. None of our team members had worked with that particular component; everyone was about as new as me and dismissed my concerns without consideration. Most of all the team lead, who hadn't written a line of code in decades and had absolutely no concern for code quality. The review mechanic in that team was: push to test, have the lead click through the website to see if it seems to work, push to prod. The lead did not give a shit what the code was like. The quality of their projects reflected that.
We had over a dozen different apps, and pretty much all of them were chock full of bad code written by unsupervised juniors on a tight deadline. All the apps used the same CMS in the backend, but nearly all of them had a different frontend approach because they just let people pick whatever - one day you're working in Vue, today it's React, now it's Angular, here's a Svelte app, this one's just using jQuery, and here's one using vanilla JS. While I was there they let another guy start using a different CMS for a new app, because we didn't have enough problems with all the different JS frameworks already - let's start using different backend frameworks too!
Hardly a single test suite anywhere except what I'd made. Everywhere I looked I found bugs and terrible code. Every task I got, I had to start by figuring out the day's flavour of JS framework, then try to understand how some junior using the project to pad their CV with the newest framework had mangled it together into a somewhat usable website, and then work out how to make the changes I needed to make - which 90% of the time was many times harder than it should have been, because the entire thing was a complete mess hacked together ASAP and then duct-taped, jerry-rigged and beaten with a hammer periodically over its years of service.
I moved on from that team pretty quickly, and got into a different team much more in tune with my views. About a year later I was talking with two of my old team mates who had been somewhat annoyed with me and all my nagging about testing and code quality back then, at that point they had worked on some of my solutions and felt the benefits of the tests I'd created and the way I actually organized my code to make it easier to understand and work with. It took me by surprise when they flat out, unprompted just told me I was right. When I was working there I had a hard time because everyone disagreed with things I considered basic facts, I started to doubt myself. Luckily the next team was already doing all those things I wanted to do and more. Now I know that good code does exist, the methods I advocate do work, I wasn't just imagining things. That other team was just badly managed.
That's not to say everything I've ever done has been gold, I've made bad decisions and learned from them. But I stand by replacing that old integration library and I still don't believe in this "legacy code" mindset where changing some old pile of crap requires buy-in from multiple different stakeholders and so on.
I might get it when we're talking about large, complex, business-critical systems that really do require weeks or months of work just to replace a small part. But what I'm talking about is a small website developed in a matter of weeks that's hardly much more than a glorified PDF, and where the code behind it has absolutely no business being as complex as it is. Even if my suggested change had broken some requirements, they would have been quick to fix because the new code was clear and simple. And the worst-case scenario would probably be some messed-up formatting in a small article that hardly anyone is going to read anyway.
I guess it also depends on the size of the company and how big an existing system you are working within. If you are at a decent-sized company, then there is no such thing as "not a big deal." I posted this[1] a while ago in response to a similar complaint about how difficult it is to just wing it in a big company, and I think it's also relevant to this thread.
I am very thankful that over the last few years I've built out the headroom on our team to chase "shiny" things that we know customers will want and that we (engineering) want but aren't exactly cookie cutter for our usual planning flow.
A lot of my biggest political successes as an engineer are just building something that I know is important and finding someone higher up who has always wanted it done but everyone tells them it's going to take multiple quarters and it never gets planned.
We need to do something; my manager thinks it is too complex and that we do not have the time. I have not been able to convince him (I am another manager), and yesterday I told my guy ... if it takes you X days, just do it and we will tell him later. He will find out after the coup, and post facto I can always justify it: "oh, we had so many other things going on, we never got to talk about this".
And my goal is to show that its value is more than the effort we spend with the workaround.
Extrapolation is one of those ideas that’s not actually used in practice, at least I’ve yet to see it used in any game in any meaningful capacity.
It’s just far too complicated, requires custom logic, and produces worse results than more straightforward options. Even in multiplayer games, the “extrapolation” is often done by repeating input states and running the regular game loop.
I also wouldn’t equate the interpolation approach with extrapolation. With interpolation you interpolate between two valid states. With extrapolation you produce a potentially invalid state (e.g. a character that’s inside a wall). The only workaround for the latter issue is to perform a full game tick - at which point you’re no longer doing extrapolation.
> Extrapolation is one of those ideas that’s not actually used in practice
This is how VR frame doubling works, no? "Timewarp"/"Spacewarp"
Also I would think that a lot of netcode would be considered extrapolation. You'd extrapolate a peer's input or velocity (and perhaps clean it up with further local simulation) and then deal with mis-prediction when changes are replicated.
For the former, Timewarp is used at an OS level to perturb the visibly rendered quad to match the display time orientation. There’s no extrapolation: the rendered frame is simply adjusted to account for the change in headset orientation.
For the latter, as I mentioned, the extrapolation is not on velocity: you still compute regular game ticks but by holding the input constant. This is quite different from extrapolating velocities.
> For the latter, as I mentioned, the extrapolation is not on velocity: you still compute regular game ticks but by holding the input constant. This is quite different from extrapolating velocities.
Replicating velocity is fairly common. Unreal's character movement replicates velocity and not inputs. I would personally argue that even doing a full game tick with replicated velocities is extrapolation. I'm not sure what the distinction would be or what counts as a full tick with error correction vs local extrapolation per tick with error correction.
I agree- what’s the difference between error correction and a full tick? At what point do you draw the line on error correction?
Extrapolation is often used to mean extrapolating values without error correction, at which point the results are less than stellar.
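A toy sketch of the distinction being drawn here (1-D character, invented drag constant; not any particular engine's code): a "full tick with held input" keeps applying the real update rules, while raw velocity extrapolation does not, so the two predictions drift apart over missed updates.

```python
DT = 1.0 / 60.0   # fixed tick length
DRAG = 0.9        # per-tick velocity damping (illustrative value)

def held_input_tick(pos, vel, input_accel):
    # Run the regular game tick, feeding it the last known input.
    vel = (vel + input_accel * DT) * DRAG
    return pos + vel * DT, vel

def velocity_extrapolate(pos, vel, ticks):
    # Naive extrapolation: slide along the last replicated velocity,
    # skipping the real rules (no drag, no collision checks).
    return pos + vel * DT * ticks

pos, vel = 0.0, 1.0
p_tick, v_tick = pos, vel
for _ in range(10):  # ten missed updates with no new input
    p_tick, v_tick = held_input_tick(p_tick, v_tick, 0.0)
p_extrap = velocity_extrapolate(pos, vel, 10)
# p_extrap overshoots p_tick, because drag was never applied
```

Error correction on top of either approach then has to blend away exactly this kind of divergence once the authoritative state arrives.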
Spacewarp is, like Timewarp, a way to match the render frame time on a headset, but by creating a warped version of the output image; I'll concede that this is technically extrapolation, but it's far from what's generally meant when describing updating entity values in game loops.
The main reason to prefer interpolation is that your fixed-time-step function never needs to operate on variable time, removing a complicated dependency.
For instance, modifying character accelerations based on a fixed time step constant is far more straightforward than the methods required to work with variable time deltas (due to floating point accumulated error). This is why any action-based deterministic game (think platformers, shooters, physics based games) will opt for this.
IMO it is much more straightforward to have a render method that pre-allocates some extra memory, interpolates a handful of values and renders them vs the nondeterminism introduced in a game logic method that has to take into account variable time (especially if also networking multiplayer state). And for this you trade off a frame of visual-only latency - a choice I’d take any day.
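A minimal sketch of that pattern (1-D position/velocity state and invented numbers; essentially the standard accumulator loop): the simulation always advances by a fixed DT, and rendering blends the two most recent valid states.

```python
# Fixed-timestep loop with render interpolation, sketched for a 1-D
# state. Frame times and the constant-velocity tick are illustrative.
DT = 1.0 / 60.0  # fixed simulation step

def tick(state):
    # Deterministic update: always advances by exactly DT.
    pos, vel = state
    return (pos + vel * DT, vel)

def interpolate(prev, curr, alpha):
    # Blend between the two most recent *valid* states for rendering.
    return prev[0] + (curr[0] - prev[0]) * alpha

def run(frame_times, state=(0.0, 1.0)):
    accumulator = 0.0
    prev = state
    rendered = []
    for frame_dt in frame_times:
        accumulator += frame_dt
        while accumulator >= DT:
            prev, state = state, tick(state)
            accumulator -= DT
        alpha = accumulator / DT  # how far we are into the next tick
        rendered.append(interpolate(prev, state, alpha))
    return rendered
```

The rendered position trails the simulation by at most one tick - the frame of visual-only latency mentioned above - but `tick` itself never sees a variable delta.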
> modifying character accelerations based on a fixed time step constant is far more straightforward than the methods required to work with variable time deltas
For those who don't know - the reason this is hard is the different amounts of maths error that can accumulate between the two approaches (usually floating-point precision error). Doing some maths in 10 increments a tenth of the size will likely end up with a slightly different value than doing it once in one full-size increment.
This is particularly important in multiplayer games where multiple players need to be able to do the same calculations and get the identical result. It is not good if the world begins to diverge just because you've got different frame rates!
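A concrete demonstration of that divergence (standard IEEE-754 doubles; the velocity of 1 unit/s is just an example): advancing a position in ten steps of 0.1 s does not land on the same value as one step of 1.0 s.

```python
def advance(pos, velocity, dt):
    # One integration step: pos += velocity * dt
    return pos + velocity * dt

# Same 1.0 s of simulated time, integrated two ways.
one_step = advance(0.0, 1.0, 1.0)
ten_steps = 0.0
for _ in range(10):
    ten_steps = advance(ten_steps, 1.0, 0.1)

print(one_step)               # 1.0
print(ten_steps)              # 0.9999999999999999
print(one_step == ten_steps)  # False: the two worlds have diverged
```

The error here is tiny, but in a lockstep multiplayer simulation any bit-level difference compounds every tick, which is exactly why fixed-step determinism matters.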