One distinction is that compilers generally translate from a higher-level language to a lower-level one, whereas transpilers target two languages at a similar level of abstraction.
For example, a program that translated x86 assembly to RISC-V assembly would be considered a transpiler.
The article we are discussing has "Transpilers Target the Same Level of Abstraction" as "Lie #3", and it clearly explains why that is not true of the programs most commonly described as "transpilers". (Also, I've never heard anyone call a cross-assembler a "transpiler".)
I don't really agree with their argument, though. Pretty much all the features that Babel deals with are syntactic sugar, in the sense that if they didn't exist, you could largely emulate them at runtime by writing a bit more code or using a library. The sugar adds a layer of abstraction, but it's a very thin layer; thin enough that most JavaScript developers could compile (or transpile) the sugar away in their heads.
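To make that concrete, here's a hand-rolled sketch of that kind of desugaring, going from ES2015+ down to ES5 (written by hand, not actual Babel output, and the function names are just for illustration):

    // Sugared (ES2015+): arrow function, default parameter, template literal.
    const greet = (name = "world") => `Hello, ${name}!`;

    // Hand-desugared ES5 equivalent: the same behaviour with a bit more code.
    var greetES5 = function (name) {
      if (name === undefined) { name = "world"; }
      return "Hello, " + name + "!";
    };

    console.log(greet());    // "Hello, world!"
    console.log(greetES5()); // "Hello, world!"

The translation is almost mechanical, which is exactly what makes the abstraction layer feel so thin.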
On the other hand, C to assembly is not such a thin layer of abstraction. Even the parts that seem relatively simple can change massively as soon as an optimisation pass is involved: a straightforward loop might come out vectorised, or be replaced by a closed-form expression entirely. There is a very clear difference in abstraction level going on here.
I'll give you that these definitions are fuzzy. Nim uses a source-to-source compiler, and the difference in abstraction between Nim and C certainly feels a lot smaller than the difference between C and assembly. But the C that Nim generates is, as I understand it, very low-level and behaves a lot more like assembly, so maybe in practice the difference in abstraction is greater than it initially seems? I don't think there's a lot of value in trying to make a hard-and-fast set of rules here.
However, it's clear that there is a certain subset of compilers that aim to do source-to-source desugaring transformations, and that these compilers share certain similarities and requirements that make it sensible to group them together in some way. And to do that, we have the term "transpiler".
Abstraction layers are close to the truth, but I think that's just slightly off. It comes down to the fact that transpilers are considered source-to-source compilers, and one man's intermediate code is another man's source code. If you consider neither the input nor the output to be "source code", then you might not consider the program a transpiler, for the same reason that an assembler is rarely called a compiler even though assemblers can have compiler-like features: consider LLVM IR, for example. This is why a cross-assembler is not often referred to as a transpiler.

Of course, terminology is often tricky: the term "recompiler" is often used for this sort of thing, even though neither the input nor the output is generally considered "source code", probably because such tools are designed to construct a result as close as possible to what you would get if you could recompile the original source code for another target. This contrasts fairly well with "decompiler": a recompiler may perform similar reconstructive analysis to a decompiler, but it ultimately outputs more object code. Not that I am an authority on anything here, but I think these terms ultimately do make sense and reconcile with each other.
When people say "same level of abstraction", I think what they are expressing is that the input and output languages are of a similar level of expressiveness, though it isn't always exact, and the example of compiling down constructs like async/await shows how this isn't always cut-and-dry. Nor does it imply that source-to-source translations are necessarily trivial: a transpiler that tries to compile Go code to Python would have to deal with non-trivial transformations, even though Python is arguably at a higher level of abstraction and expressiveness, not a lower one. The issue isn't necessarily the abstraction level or expressiveness; it's the impedance mismatch between the source language and the destination language.

It also says nothing about whether the resulting code is readable, only that the output isn't considered low-level enough to be bytecode or "object code". You can easily see that there is some subjectivity here, but usually things fall far enough away from the gray area that there isn't much need to worry about it. If you can decompile Java bytecode and .NET IL back to nearly full-fidelity source code, does that call into question whether those are "compilers", or whether the bytecode is really object code? I think in those cases it gets close, and more specific factors start to play into the semantics.

To me there is nothing unusual about this: terminology and semantics often get a lot more detailed as you zoom in, which becomes necessary when you get close to the boundaries. And that makes it easier to just apply a tautological definition in some cases: for Java and .NET, we can say their bytecode is object code because that's what it is already considered to be, because that's what the developers consider it to be. Not as satisfying, but a useful shortcut: if we are already willing to accept this in other contexts, there's no particular reason to question it now.
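To illustrate the async/await point, here's a hand-written sketch (not the output of any particular transpiler, and assuming an environment where fetch is available) of the sugared form next to an equivalent explicit promise chain:

    // With the sugar: async/await.
    async function fetchLength(url) {
      const res = await fetch(url);
      const body = await res.text();
      return body.length;
    }

    // Without it: the same logic as an explicit promise chain. A real
    // ES5-targeting transpiler can't use bare .then() callbacks for every
    // case (loops or try/catch around awaits, for instance), which is why
    // tools like Babel emit a generator-based state machine instead.
    function fetchLengthChained(url) {
      return fetch(url)
        .then(function (res) { return res.text(); })
        .then(function (body) { return body.length; });
    }

Same level of abstraction on both sides, but not a trivial rewrite once control flow gets involved.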
And to come full circle: most compilers are not considered transpilers, IMO, because their output is considered to be object code or intermediate code rather than source code. And again, the distinction is not exact, because the intermediate code is also Turing-complete, also has a human-readable representation, and people can and do write code in assembly. But Brainfuck is also Turing-complete, and that doesn't mean that Brainfuck and C are similarly expressive.