This problem is caused by the many projects that declare something deprecated but then give no removal timeline, keep delaying the removal, or even explicitly say it will never be removed and will just stay deprecated forever.
IMO, any deprecation should go through the following steps:
1. Decide that you want to deprecate the thing. This includes documenting how to migrate away from it, what to use instead, and how to keep the existing behaviour if needed. This step also fixes the overall timeline, starting with the decision and ending with the removal.
2. Make the code emit loud deprecation warnings. If there's a standard build system, it should have support for deprecation warnings.
3. Break the build in an easy-to-fix way. If there is too much red tape to take one of the recommended migration steps, the old API is still there, just behind a `deprecated` flag or path. Importantly, at this step 'fixing' the build doesn't require any change in dependencies or any (big) change in code; it should be a one-line change to get things working again.
4. Remove the deprecated thing. This step is NOT optional! Actually remove it. Your compiler / library / etc. can keep recognizing the name so it gives a useful error, but the implementation itself gets deleted. Fixing the build now requires some custom code or an extra dependency; it is no longer a trivial fix (at least not as trivial as the previous step).
Honestly, the build system should provide the tools for this: being able to say that some item is deprecated and should warn; that it is deprecated and should only be accessible if a flag is set; or that it has been removed, so the error message reads "function foo() was removed in v1.4.5. Refer to the following link: ..." instead of just "function foo() not found". A sketch of what this can look like is below.
If the build system has the option to treat warnings as errors, it should also have the option to exempt specific warnings from that treatment (so that package updates can still happen while CI keeps surfacing the warning). The warning itself shouldn't be silenced.
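As a sketch of what steps 2-4 can already look like in C (GCC/Clang attributes; LIB_VERSION, foo, bar, and the opt-in flag are all made-up names):

    /* Hypothetical header evolving across releases. */
    #if LIB_VERSION < 10404        /* step 2: warn loudly */
    __attribute__((deprecated("foo() will be removed in v1.4.5; "
                              "use bar() instead")))
    int foo(void);

    #elif LIB_VERSION < 10405      /* step 3: break the build, gently */
    /* Still shipped, but only behind an explicit opt-in, so 'fixing'
       the build is a one-line -DALLOW_DEPRECATED_APIS, not a
       migration. */
    #ifdef ALLOW_DEPRECATED_APIS
    int foo(void);
    #endif

    #else                          /* step 4: actually remove it */
    /* The implementation is gone. Poisoning the name keeps a
       tombstone: any later use of foo is a hard error ("attempt to
       use poisoned foo") instead of a bare "foo not found". */
    #pragma GCC poison foo
    #endif

    int bar(void);

And for the warnings-as-errors point: GCC and Clang already have that knob. -Werror together with -Wno-error=deprecated-declarations keeps the deprecation warning printed on every build without failing it.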
apt-get has a more stable interface and is more suitable for scripts and instructions intended to be followed to the letter.
apt is better for interactive use and by people who are not just blindly following instructions.
Here there are arguments for both. For commands intended to be copy-pasted into a terminal, apt-get makes sense as the safest choice. But the instructions are also intended for humans, not a script, so maybe apt would be better. To me, both ways make sense.
Rust's design eliminates data races completely, and it makes it much easier to write thread-safe code from the start. Race conditions are still possible, but they're generally less of a problem than in C++ (at least, that's my impression).
Nothing is preventing you from writing correct C++ code. Rust is strictly less powerful than C++ in terms of which programs the compiler will accept. The problem with C++ is that the easiest way to do anything is often the wrong way to do it. You might not even realize you are sharing a variable across threads and that it needs to be atomic (see the sketch below).
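A minimal C sketch of that failure mode (compile with cc -pthread; names are illustrative):

    #include <pthread.h>
    #include <stdio.h>

    long counter = 0;   /* shared, but nothing in the code says so */

    static void *work(void *arg) {
        for (int i = 0; i < 1000000; i++)
            counter++;  /* data race: non-atomic read-modify-write */
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, work, NULL);
        pthread_create(&b, NULL, work, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("%ld\n", counter);  /* almost certainly < 2000000 */
    }

The fix is one keyword (_Atomic long counter) or a mutex, but nothing forces you to notice the problem; this compiles without a single warning. The equivalent safe Rust program is rejected at compile time unless you reach for an atomic or a lock.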
Never use someone else's synthetic key as your primary key. If you want ordered keys, then even if the API is handing out sequential integers, you should still generate your own sequential IDs (see the sketch below).
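A minimal sketch of the distinction (field names are made up):

    typedef struct {
        long id;         /* our own sequential primary key */
        long vendor_id;  /* someone else's synthetic key: stored as
                            plain data, never used as our key */
    } record;

If the upstream API ever renumbers or reuses its IDs, only vendor_id is affected; every internal reference still points at id.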
As written, it is UB, yes. But using a union is undefined behavior too, certainly in C++ and, I think, also in C. I think the main risk (assuming int and float to be of the same size) is that, if you do
    #include <stdio.h>

    int main(void) {
        union {
            float f;
            int i;
        } foo;
        foo.f = 3.14f;
        printf("%x\n", foo.i);  /* read i right after writing f */
    }
then the compiler can think the assignment to foo.f isn't used anywhere, and thus can choose not to do it.
In C++, you have to use memmove or memcpy instead (compilers can and often do recognize that idiom and optimize the copy away).
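A minimal sketch of that idiom with memcpy, which compilers treat the same way (again assuming int and float have the same size):

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        float f = 3.14f;
        unsigned int i;
        /* Copy the bytes instead of reading through a differently
           typed lvalue: well defined, and typically compiled down
           to a single register move. */
        memcpy(&i, &f, sizeof i);
        printf("%x\n", i);
    }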
Pure JS without TypeScript also has "types". TypeScript doesn't give you nominal types either; it's only structural. So when you say that you "know it's already been processed", you just have a mental type of "Parsed" vs. "Raw". With a type system, it's like you have a partner dedicated to tracking that. But going without one doesn't mean you aren't doing any parsing or type tracking of your own.
Why does "true" parsing have to error out on the very first problem? It is more than possible (though maybe not easy) to keep parsing and collecting errors as they appear. Zod, as the given example in the post, does it.
Before parsing, the argument array contains both the flag that enables the option and the flag that disables it. Validation would either throw an error or accept the array as meaning enabled or disabled. But importantly, it wouldn't change the arguments. If the assumption is that the last flag overrides anything before it, then the CLI command is valid with the option disabled.
And now, correct behaviour relies on all the code using that option to always make the same assumption.
Parsing, on the other hand, would create a new config where `option` is an enum: enabled, disabled, or not given. No confusion about multiple flags or anything. It gives the rest of the program a single view of what the input config was (see the sketch below).
Whether that parsing is done by a third-party library or first-party code, declaratively or imperatively, is beside the point.
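A minimal C sketch of the difference (flag names are made up):

    #include <stdio.h>
    #include <string.h>

    typedef enum { OPT_NOT_GIVEN, OPT_DISABLED, OPT_ENABLED } opt_state;

    /* Parse argv once into a single value. The "last flag wins"
       assumption is made exactly once, here, rather than in every
       piece of code that later inspects the arguments. Unknown
       flags are counted instead of aborting, so every error can be
       reported in one pass. */
    static opt_state parse_option(int argc, char **argv, int *errors) {
        opt_state state = OPT_NOT_GIVEN;
        for (int i = 1; i < argc; i++) {
            if (strcmp(argv[i], "--enable-opt") == 0)
                state = OPT_ENABLED;
            else if (strcmp(argv[i], "--disable-opt") == 0)
                state = OPT_DISABLED;
            else {
                fprintf(stderr, "unknown flag: %s\n", argv[i]);
                (*errors)++;   /* keep parsing, collect all errors */
            }
        }
        return state;
    }

The rest of the program only ever sees the opt_state value; the raw argument array, and the question of which flag 'wins', never leak past this function.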
https://pypi.org/project/pypyp/
It takes care of the input and output boilerplate so you can focus on the actual code you wanted Python for.