Hacker News

Abstraction layers are close to the truth, but I think that framing is just slightly off. It comes down to the fact that transpilers are considered source-to-source compilers, but one man's intermediate code is another man's source code. If you logically consider neither the input nor the output to be "source code", then you might not consider the tool a transpiler, for the same reasons that an assembler is rarely called a compiler, even though assemblers can have compiler-like features: consider LLVM IR, for example. This is why a cross-assembler is not often referred to as a transpiler. Of course, terminology is often tricky: the term "recompiler" is often used for this sort of thing, even though neither the input nor the output is generally considered "source code", probably because such tools are designed to construct a result as close as possible to what you would get if you could recompile the original source code for another target. This contrasts fairly well with "decompiler": a recompiler may perform reconstructive analysis similar to a decompiler's, but ultimately outputs more object code. Not that I am an authority on anything here, but I think these terms ultimately do make sense and reconcile with each other.

When people say "same level of abstraction", I think what they are expressing is that the input and output languages are of a similar level of expressiveness, though the match isn't always exact, and the example of compiling down constructs like async/await shows how this isn't always cut-and-dried. That doesn't imply, though, that source-to-source translations are necessarily trivial: a transpiler that compiles Go code to Python would have to deal with non-trivial transformations even though Python is arguably at a higher level of abstraction and expressiveness, not lower. The issue isn't necessarily the abstraction level or expressiveness; it's the impedance mismatch between the source language and the destination language. Nor does it say anything about whether the resulting code is readable, only that the output isn't considered low-level enough to be bytecode or "object code".

You can easily see that there is some subjectivity here, but usually things fall far enough from the gray area that there isn't much need to worry about it. If you can decompile Java bytecode and .NET IL back to nearly full-fidelity source code, does that call into question whether those are "compilers", or whether the bytecode is really object code? I think in those cases it gets close, and more specific factors start to play into the semantics. To me there is nothing unusual about this: terminology and semantics often get a lot more detailed as you zoom in, which becomes necessary when you get close to boundaries. And that makes it easier to just apply a tautological definition in some cases: for Java and .NET, we can say their bytecode is object code because that's what it is already considered to be, because that's what the developers consider it to be. Not as satisfying, but a useful shortcut: if we are already willing to accept this in other contexts, there's no particular reason to question it now.
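To make the async/await point concrete, here is a hand-written sketch of that kind of "same level of abstraction" rewrite. The function names are mine, and real transpilers (e.g. TypeScript targeting ES5) actually emit a generator-based state machine via helper functions rather than a simple `.then()` chain, so treat this as an illustration of the idea, not actual compiler output:

```typescript
// Hypothetical async function using await:
async function totalWithAwait(a: number, b: number): Promise<number> {
  const x = await Promise.resolve(a);
  const y = await Promise.resolve(b);
  return x + y;
}

// The same logic "compiled down" to plain promise chaining -- still
// ordinary source code at roughly the same abstraction level, but the
// async/await construct itself is gone:
function totalDesugared(a: number, b: number): Promise<number> {
  return Promise.resolve(a).then((x) =>
    Promise.resolve(b).then((y) => x + y)
  );
}

// Both should resolve to the same value:
Promise.all([totalWithAwait(2, 3), totalDesugared(2, 3)]).then(
  ([viaAwait, viaThen]) => {
    console.log(viaAwait, viaThen); // 5 5
  }
);
```

The translation is mechanical here, but the construct being eliminated (suspension points) is exactly the kind of thing that makes such rewrites non-trivial in general.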

And to come full circle: most compilers are not considered transpilers, IMO, because their output is considered to be object code or intermediate code rather than source code. Again, the distinction is not exact, because the intermediate code is also Turing-complete, also has a human-readable representation, and people can and do write code in assembly. But brainfuck is also Turing-complete, and that doesn't mean that brainfuck and C are similarly expressive.
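The expressiveness gap is easy to make concrete. Below is a minimal brainfuck interpreter I sketched for illustration (the `runBrainfuck` name and the omission of I/O are my own simplifications): adding 2 + 3 is a single `+` token in C or TypeScript, but in brainfuck it takes a loop that physically drains one memory cell into another.

```typescript
// Minimal brainfuck interpreter: 30,000 byte cells, no input support,
// assumes a well-formed program. Returns the first two cells of the tape.
function runBrainfuck(program: string): number[] {
  const tape = new Array<number>(30000).fill(0);
  let ptr = 0;
  // Precompute matching bracket positions for loops.
  const jump = new Map<number, number>();
  const stack: number[] = [];
  for (let i = 0; i < program.length; i++) {
    if (program[i] === "[") stack.push(i);
    else if (program[i] === "]") {
      const open = stack.pop()!;
      jump.set(open, i);
      jump.set(i, open);
    }
  }
  for (let pc = 0; pc < program.length; pc++) {
    switch (program[pc]) {
      case "+": tape[ptr] = (tape[ptr] + 1) & 0xff; break;
      case "-": tape[ptr] = (tape[ptr] - 1) & 0xff; break;
      case ">": ptr++; break;
      case "<": ptr--; break;
      case "[": if (tape[ptr] === 0) pc = jump.get(pc)!; break;
      case "]": if (tape[ptr] !== 0) pc = jump.get(pc)!; break;
    }
  }
  return tape.slice(0, 2);
}

// 2 + 3 in brainfuck: put 2 in cell 0 and 3 in cell 1, then drain cell 1
// into cell 0 one unit at a time.
const cells = runBrainfuck("++>+++[<+>-]");
console.log(cells[0]); // 5 -- versus the TypeScript expression `2 + 3`
```

Both languages can compute the sum, so Turing-completeness tells you nothing about how far apart they sit in expressiveness.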


