Using protobuf is practical enough in embedded. This person isn't the first and won't be the last. Way faster than JSON, way slower than C structs.
However protobuf is ridiculously interchangeable and there are serializers for every language. So you can get your interfaces fleshed out early in a project without having to worry that someone will have a hard time ingesting it later on.
Yes it's a pain how an empty array is a valid instance of every message type, but at least the fields that you remember to send are strongly typed. And field optionality gives you a fighting chance that your software can still speak to the unit that hasn't been updated in the field for the last five years.
On the embedded side, nanopb has worked well for us. I'm not missing having to hand-maintain ad-hoc command parsers on the embedded side, nor working around quirks and bugs of those parsers on the desktop side.
I feel like there's a semi-philosophical question somewhere here. Recursion is clearly a core computer science concept (see: many university courses, or even the low-level implementation of many data structures), but it's surprisingly rare to see it in "day to day" code (i.e. I probably don't write recursive code in a typical week, but I know it's in library code I depend on...)
But why do you think we live in a world that likes to hide recursion? Why is it common for tree data structure APIs to expose visitors, rather than expecting you write your own recursive depth/breadth-first tree traversal?
Is there something innate in human nature that makes recursion less comprehensible than looping? In my career I've met many programmers who don't 'do' recursion, but none who are scared of loops.
And to me the weird thing about it is, looping is just a specialized form of recursion, so if you can wrap your head around a for loop it means you already understand tail call recursion.
> but it's surprisingly rare to see it in "day to day" code
I rarely use them because I became tired of having to explain them to others, where I've never had to explain a simple while loop that accomplishes the same thing with, usually literally, a couple more lines of code.
From all of my experience, recursion is usually at the expense of clarity, and not needed.
I think it's related to the doorway memory effect [1]: you lose the sense of state when hopping into the function, even though it's the same function.
Not "doing" recursion as a principle is often a sign the person has not been exposed to functional languages or relational kind of programming like Prolog. It often points at a lack of experience with what perhaps is not so mainstream.
Or the person is sensibly trying to make the code easier for other people to understand.
I am tech lead and architect for large financial systems written in Java but have done a bunch of Common Lisp and Clojure projects in the past. I will still avoid any recursion and ask people to remove recursion from their PRs unless it is absolutely the best way to get readable and verifiable code.
As a developer your job is not to look for intellectual rewards when writing code and your job is not to find elegant solutions to problems (although frequently elegant solutions are the best ones). Your job is taking responsibility for the reliability, performance and future maintenance of whatever you create.
In my experience there is nothing worse than having bright engineers on a project who don't understand they are creating for other, less bright engineers who will be working with it after the bright engineer gets bored with the project and moves on to another rewarding green-field task.
The stack traces when something goes wrong are inscrutable under recursion. Same when looking at the program state using debuggers.
Fundamentally, the actual state of the program does not match the abstract state used when programming a recursive function. You are recursively solving subproblems, but when something goes wrong, it becomes very hard to reason about all the partial solutions within the whole problem.
> The stack traces when something goes wrong are inscrutable under recursion.
Hmm. This is a real issue, for the simple case. If tail recursion is not optimized correctly then you end up with a bunch of stack frames, wasted memory...
I'd argue this is partly a tooling issue, not a fundamental problem with recursion. For tail recursion the frames could be optimized away and a virtual counter employed.
For more complex cases, I'd argue it matters less. Saving on stack frames is still preferable, however this can be achieved with judicious use of inlining. But the looping construct is worse here, as you cannot see prior invocations of the loop, and so have generally no idea of program execution without resorting to log tracing, while even the inlined recursion will provide some idea of the previous execution path.
I don't agree with your last statement. I've been programming forever, and I understand recursion, and I use it, but I never equate it with loops in my mind (unless I'm deliberately thinking that way, like now) and I always find it more difficult to think about than loops.
The author is making a deliberate point about undefined behaviour in the article. Hence them not executing worked examples.
In fact, by not doing so they are making a subtle implicit statement that it is uninteresting to consider actually attempting to execute these snippets.
The third paragraph of the "P.S" of the article (you have to press submit to see it) is the one that really gives the game away.
More than implementation-defined, for some you need context that simply isn't given. On the ones with mixed-type structs, even if you know what system it's compiled for, you don't know if someone has used pragma pack 1 to byte-pack the data instead of standard packing. Just seeing the struct, you still don't know.
I agree that in theory it would be cool to have C code that uses only defined behavior and works on all platforms for all eternity. However, I think most programs have a fairly clear understanding of what platforms (OS+arch) they are targeting and what compilers they are using to target those platforms.
If the compiler has defined behavior (and you have unit tests for that behavior) on all of these platforms, I don't think it is a huge deal. (Ideally you wouldn't... but sometimes its an accident or unavoidable)
As an example, while struct padding (problem 1) might not technically be in the spec, it is a cornerstone of FFI and every new compiler (that supports C FFI) has a way to compile structs with the same padding.
To my original point, if the article had instead given examples of compilers + architectures that produced different answers, I might feel differently. However, just mentioning that these weird edge cases are undefined (in the spec) doesn't mean much to me.
FWIW, 85k 4-input luts is huge by the standards of any softcore with "micro" in the name.
It comfortably surpasses the capacity of most of the Actel aerospace FPGAs that I tend to work with.
And I think it's so "micro" that the majority of all of Lattice's FPGAs wouldn't fit it either.
And the Lattice ECP5 is advertised with 85k LUTs, which would seemingly limit its use to edification as instantiating this softcore would consume the entire chip. For any other purpose if you wanted a chip that was only a CPU, you would buy a CPU.
The Lattice ECP5 must have been just an example, because in the Github repository there is another example about how to run Linux on it on a 33000 logic cell Artix FPGA board, so I assume that the core must take significantly less than 30000 cells.
Any PowerPC core, even a relatively small one like this, is much more powerful than the soft cores that are used in FPGAs when minimum size is desired.
There are also a few other bigger open-source POWER cores, which can be used for higher performance.
Out of interest, what could make such a design better than a traditional induction motor? Such motors already suffer inductive losses, do not need slip rings or brushes, and presumably do not need an additional set of windings to transfer power to excite the stator?
Could it just be that the two sets of coils can each be optimized for their own purposes?
Inductive coupling in an ASM is at the drive frequency (hundreds of Hz), inductive coupling in an inductive electrically excited SM can work at an arbitrary frequency, e.g. 100 kHz. You can also see in the ZF release how small the coils for transferring the excitation current are compared to the main windings, they are the small rings on the left side.
>what could make such a design better than a traditional inductive motor
Traditional induction motors, compared to these with wound rotors, are heavier and bigger in size for the same power output, and have less efficiency especially at starting/low-speeds.
So then it comes down to cost? If this is cheaper than a motor and vfd, then it's still a win?
Source: the one time I restored a massive JT Towsley jointer and had to get a vfd for the giant motor I had to put on it. In other words, I have no idea what I'm really talking about.
I know press releases never describe anything that's actually novel, but it looks like they're just describing an asynchronous induction motor? I'm wondering if this release is implicitly pretending that some "low slip ratio" motor is "basically" synchronous.
Just guessing, does ZF mostly put out induction motors, and they need to make them look less unfashionable so that people choose them for new designs?
ZF is generally known for gearboxes; a huge number of manufacturers use their transmissions in their cars. The new Corvette uses a ZF 10-speed I think, same with VW and BMW.
Download counts don't mean very much here as I'm fairly sure both crates are common transitive dependencies. Or in other words, millions of programmers aren't individually choosing Anyhow or Thiserror on a monthly basis -- they're just dependencies of other rust crates or apps.
And agreeing with the other reply, nobody jumps up and down with joy when choosing an error handling crate. You pick the right poison for the job and try not to shed a tear for code beauty as you add in error handling.
I'm assuming the "halt and catch fire" thing is only really possible on pre-microprocessor machines built of discrete components, where the individual parts are small enough to overheat by themselves if driven wrong.
I'd guess the physical CPU package of an i386 would cope just fine with the few (hundreds?) of gates toggling in that small loop.
I wonder if it might be possible to do on a modern FPGA, if you artificially create some unstable circuit and deliberately pack it in a corner of the die?
There's probably some AVX512 concoction that would be the closest equivalent on a modern X64. There's probably an easy experiment -- if that concoction makes CPU freq drop and also makes whole-package thermals drop, it /could/ be due to localized die heating.