That seems to be about a running executable, and about restarting the application to get it back to a known good state.
In my experience with software that is still being developed or patched over time, the initial specification is usually fine. The problem comes when that specification gets extended piecemeal and you hit 'the straw that broke the camel's back': 'A + B' is ok, but 'A + B + C' has far more failure modes, because you've massively increased the amount of system usage and testing needed to validate the product.
For example, adding more metrics to a piece of software. I've commonly seen this done by extending an existing table in the DB. The developer tests the workflows they can see and implements the right indexes. Then a month later some seemingly unrelated feature gets added, and when it tries to pull some of those metrics into a report (or chart, or whatever), the system falls over because you're doing a full table scan on joined data that someone missed.
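Here's a minimal sketch of that failure mode using Python's built-in sqlite3. The table and column names (orders, order_metrics) are hypothetical, just to show how a join path nobody indexed turns into a full table scan:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Original schema, indexed for the workflows the first developer tested.
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")
cur.execute("CREATE TABLE order_metrics (order_id INTEGER, metric_name TEXT, value REAL)")
cur.execute("CREATE INDEX idx_metrics_name ON order_metrics (metric_name)")

# A later reporting feature joins on order_id, which nobody indexed.
report_query = """
    SELECT o.customer_id, m.metric_name, m.value
    FROM orders o
    JOIN order_metrics m ON m.order_id = o.id
    WHERE o.customer_id = 42
"""

# Without an index on order_id, the plan has to scan the whole metrics table.
for row in cur.execute("EXPLAIN QUERY PLAN " + report_query):
    print(row)  # expect a 'SCAN order_metrics' step here

# Adding the missing index turns the scan into an index search.
cur.execute("CREATE INDEX idx_metrics_order ON order_metrics (order_id)")
for row in cur.execute("EXPLAIN QUERY PLAN " + report_query):
    print(row)  # should now show 'SEARCH order_metrics USING INDEX idx_metrics_order'

conn.close()
```

With a few rows in dev nobody notices the scan; it only shows up once the metrics table has grown for a few months in production.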