Hacker News | self_awareness's comments

Nah.

> Most technical problems are really people problems. Think about it. Why does technical debt exist? Because requirements weren't properly clarified before work began. Because a salesperson promised an unrealistic deadline to a customer. Because a developer chose an outdated technology because it was comfortable. Because management was too reactive and cancelled a project mid-flight. Because someone's ego wouldn't let them see a better way of doing things.

I mean, true, technical debt is a people problem. Why does it exist? Because there aren't enough people on the team. Because they aren't skilled enough. Because the developer promised they'd finish the task before Christmas but failed to deliver.

I don't really like marketing, but it serves an important function: it converts code to money. Code itself isn't worth anything; only marketed code is worth something. That's why it's so hard to refactor.

Also, there's this unofficial law in programming: code that is easy to refactor will, at some point, be replaced by code that is hard to refactor. Sometimes people misidentify what exactly counts as code debt and turn blocks of code that weren't code debt into exactly that: code debt which is later impossible to remove, because they thought they knew better.


Rust fork that works on this LLVM fork, targeting the 6502 and generating code that can be executed on a Commodore 64: https://github.com/mrk-its/rust-mos

Well, there was also a Java-on-Amiga project:

https://www.mikekohn.net/micro/amiga_java.php


Doesn't work for me, and the account creation window itself is buggy with poor UX: it throws a generic error and requires me to retype the password each time I try a different set of settings.


Tried to use Nim with VBCC to cross-compile to the Amiga, but I failed. I think Nim makes some pretty heavy assumptions about the C compiler used to compile the generated code.


Just in case you aren't in the loop: there are GCC- and LLVM-based Amiga cross-compilers.


GCC cross-compiling for the Amiga is available from https://franke.ms/git/bebbo/amiga-gcc for a standalone toolchain, and https://github.com/BartmanAbyss/vscode-amiga-debug for one that requires VSCode.

I'm not aware of any working LLVM solution. All I know is that LLVM supports MC680x0 as a backend and can spit out 68k-but-non-Amiga objects, and some brave souls have been trying to use vlink or mold to produce Amiga executables. Have you seen any working LLVM-based Amiga (680x0, hunk format) cross-compilers in the wild?


I probably just confused m68k support with HUNK support. Maybe one could use https://github.com/BinaryMelodies/RetroLinker



Actually, I wasn't able to do it with Bebbo's GCC fork either.

Never used Nim before so I might be doing something wrong though.

I wish the retro Amiga had Rust support. I've briefly skimmed what would be necessary, based on rust-mos (the Rust fork for the Commodore 64), but I'm too weak in LLVM internals to actually do it.


> Never used Nim before so I might be doing something wrong though.

With Nim on weird targets you usually want:

- OS target = any

- Memory Management = ARC

- malloc instead of default Nim allocator

- turn off signal handler (if not POSIX)

- disable threads (most of the time)

Then look at how C is compiled and copy all the compiler+linker flags into your Nim configuration. Here's an absolutely minimal `config.nims` I used to compile Nim code for the C64 with the LLVM-MOS[1] toolchain:

    import std/strutils
    
    --cc:clang
    --clang.exe:"mos-c64-clang"
    
    --os:any
    --cpu:avr
    --mm:arc
    
    --threads:off
    --define:usemalloc
    --define:noSignalHandler
    
    let args = [
      "-isystem $PWD/../mos-platform/c64/include",
      "-I$PWD/../mos-platform/c64/asminc",
      "-L$PWD/../mos-platform/c64/lib",
      "-mlto-zp=110",
      "-D__C64__",
    
      "-isystem $PWD/../mos-platform/commodore/include",
      "-I$PWD/../mos-platform/commodore/asminc",
      "-L$PWD/../mos-platform/commodore/lib",
      "-D__CBM__",
    
      "-isystem $PWD/../mos-platform/common/include",
      "-I$PWD/../mos-platform/common/asminc",
      "-L$PWD/../mos-platform/common/lib",
      "--target=mos",
      "-flto",
      "-Os",
      ].join(" ")
    
    switch("passC", args)
    switch("passL", args)
The Nim side was easy, because I had already compiled Nim to WASM at that point and the flags are similar. The hard part was figuring out the C compiler flags: e.g. the cmake structure, and why the compiler complains about missing symbols when they're not missing (answer: include/lib order is very important).

[1] https://github.com/llvm-mos/llvm-mos


Rust --> WASM --> Wasm2C --> Bob's your uncle. Maybe.


And the unofficial "Tier 5" Rust target is... for the Commodore 64:

https://github.com/mrk-its/rust-mos

It works and builds binaries that are ready to be executed by the VICE emulator.


Your argument is like... "once you learn C++ you have your whole processor at your disposal, you don't need to wait for any software because you can write it yourself."


That just highlights some confusion about Emacs. It's more akin to Unix and the shell as a whole. That's why I said VM. If you know Perl and have a whole host of utils from a Unix box, you can script the workflow you want quite easily, especially if you have access to the CPAN libraries.

The same thing can happen with Emacs. There are a lot of low-level interfaces (network, process, ...) and some high-level ones regarding the UI. Then there are a lot of utilities and whole programs built with those, all modifiable quite easily. As another commenter put it, you don't even need to save a file. You just write some code, eval it, and you have your new feature. If you're happy with it, you add it to your config or create a new module (very simple). So Elisp covers the whole range from simple configuration to whole software.


The problem is that you need to spend 20 years to get out of the "beginner" zone.


The curse of being a power user is that you want to know how it works. I let that feeling go with Emacs. I've been happily using it since. My first gateway and killer use case was Magit. Life with git will never be the same.


I'm 25 years in and still firmly in the beginner zone.


There are no "Emacs experts". The bedrock of Emacs is Lisp, and Lisp is the essence of computation itself. It's both simple to understand (five basic special forms) and impossible to master at the same time - you can construct entire universes with those five basic building blocks: quote, if, lambda, let, and set. If someone finds something that cannot be achieved in Emacs, they are either wrong, or only wrong at that point in time - theoretically, anything can be done in Emacs; it's just a matter of time. So, technically, it's impossible to capture all possible features of Emacs; the totality is infinite.

In comparison, most other languages are 'closed' - e.g., C is a closed language. Its spec is finite and fixed (C99, C11, C17, etc.). You can genuinely master it: all keywords, all standard library functions, all undefined behaviors, all edge cases. There's a ceiling.

Lisp is unusual: the language itself is a tool for language-building. Lisp is 'open'. There's no canonical "complete" set of what exists, thus there's never completion or "mastery".


For me, VSCode implements everything that I've always expected from Emacs/Vim.

I've spent years configuring emacs/vim to be a good programming editor. Years, multiple configurations: vanilla configs, Spacemacs/Doom Emacs configs, multiple predefined configs for vim/neovim. Something was always broken, something was missing, something was non-optimal just below the tolerance line. Missing features, discontinued packages, initialization errors, bad versions, "known issues", LSPs not starting, packages replaced by some newer, shinier package with different usage, cryptic setups wrapped in "convenience layers" that obscure details, making them completely incomprehensible.

Then VSCode came, and it had everything. Remote development is trivial through ssh. Completion simply works without any setup. A massive number of languages is supported. It's a mess inside, but the UX is more stable and more consistent than anything I've ever seen in emacs/vim. Sometimes something breaks, but I can easily restart the window backend without closing the app.

This is really telling. Despite dedicating years to configuring an "infinitely configurable" system, I wasn't able to achieve anything stable. I've given up, and I just use VSCode daily. This way, I have more than I ever had with emacs/vim.

The only thing left from vim is the keyboard layout. For that, I'm thankful to Vim, but the editor itself is now just for editing config files. I don't even have Emacs installed anymore.


This. Most people who try to use these legacy editors spend most of their time configuring them to be as good as VS Code, and usually fail. A lot of wasted time and frustration, when one click gives you a perfectly modern, fast editor with a smorgasbord of great extensions that just work. I do use vim for quick editing of files in the terminal, but never for serious work.


Not sure where you get "most" from. Personally I've found the exact opposite: Despite having been forced by work constraints to use most major IDE platforms at one point or another, sometimes for years at a time, I always come back to emacs with great relief and find it better in pretty much every way. I know better than to assume my experience is that of "most" people, though.


The data shows VS Code is used by a double-digit percentage of developers, whereas Emacs is under 1%. I think that qualifies as "most".


That's not the point - McDonald's has 40,000 locations, the most popular restaurant in the world. That still doesn't make it the best food option.

Yes, Emacs is not popular, but if you look deeper, you may find unsurprisingly that most Emacs coders are strong developers. That correlation isn't coincidental - you don't stick with Emacs unless you're willing to learn; it effectively teaches you about Lisp, extensibility, and programming in general.

Yet they are not talking about the general popularity of editors among devs, but about people who have ever tried Emacs - the argument is that the majority of them try, fail, and abandon it. For which, obviously, there are no data points, polemical or otherwise.


I've just installed Emacs on an Arch Linux Wayland system, and its window refuses to resize (on purely default settings) and freezes its contents until I move it. Very refreshing indeed.

I bet this is some kind of a known issue, but that just reinforces my original point above.

edit: yeah, here it is: https://bugs.kde.org/show_bug.cgi?id=509871

I mean, how is it not an Emacs issue if it only happens with Emacs?


> I've installed emacs now on

I've never had what you describe. The variability there could be almost random: which version of Emacs are you installing? How are you installing it? Are you trying to build it from source? Which renderer are you using - there are several: Lucid, Motif, GTK, NS, W32, Haiku, Android, Cairo, pgtk. Perhaps it's installing a different renderer instead of using pgtk. Maybe it's not a bug in Emacs but in the package that bundles it for your distro?

> how it's not an Emacs issue if it only happens with emacs?

It does seem to be an Emacs issue. But it is a specific issue that happens on the combination of your hardware and your OS configuration.


Yes, I always have "special" problems. It's probably because I jump around platforms a lot - Linux, macOS and Windows, mixed GUI and ssh.

For example, macOS Emacs always starts at the bottom of the window stack instead of the top. macOS Emacs uses a different font notation than Linux Emacs, so maintaining a common config is hard. Terminal `emacs -nw` has its own set of rules, and M-x needs to be typed as ESC x. Etc., etc.


Yeah, I admit, fair complaints. It can be tricky to get Emacs to render things exactly how you like - I use it on Mac, Linux, GUI and terminal, and do have different semantics for each.

The tradeoff is that Emacs does let you handle it all - you're not forced to accept platform defaults like in some editors. Most editors have their UI/behavior largely baked in by the platform. You can customize colors and keybindings, but the fundamentals - window management, font rendering, how system keys work, terminal integration - are mostly dictated by whether you're on Mac, Windows, or Linux.

So when you have macOS Emacs behaving differently than Linux Emacs, it's not because the software forced you into that corner - it's because the underlying systems are different, and Emacs exposes that difference rather than hiding it behind a unified abstraction layer.

Emacs gives you the rope to make things consistent across platforms, but also the rope to hang yourself. Other editors pre-tie the knot for you.


> one click gives you a perfectly modern fast editor with a smorgasbord of great extensions that just work

I use over 300 packages in my Emacs setup. I'm honestly not sure I could install even half that number of VSCode extensions and expect it to still run smoothly - maybe people do that, I just don't know.

They are called "packages" and not "extensions" for a reason - an extension that ships with, e.g., a browser has limitations. In Emacs I can reuse functions of one package in another; in VSCode they have to be explicitly exposed via a public API, must be activated in order, need to be aware of their extension IDs, and there's no discovery mechanism. In Elisp, I don't have to deal with any of that.

In Emacs I can explore, modify, and bend the source code of any package - built-in or third-party. I can do it in a way that is simply impossible anywhere else. I can granularly change selected behavior of any command without having to rewrite it fully.

That "just works™" part - I don't buy it; all software is faulty by nature. In Emacs, when something fails, I know exactly how to troubleshoot it, down to the specific line in the specific package; I can profile, debug, and trace it. I can modify the code in question in a scratch buffer and immediately check how it affects things. Not only do I not have to restart anything, I don't even have to save that code anywhere.

You call it "a legacy editor" without the slightest clue of what Emacs hackers are capable of doing - things for which the most "modern" alternatives simply have no answers.

I agree, Emacs is not for the faint-hearted - many people (maybe most) lack the patience required to grok it. Yet make no mistake, those who have tamed this beast are not staying in it simply because "they don't know any better". They know - something better is yet to be made, if ever. VSCode is great, yet still not better.

Learning Emacs has liberated me from experiencing tool-FOMO ever again - I can switch to VSCode without abandoning Emacs, and I could probably even figure out a way to control one from the other if I got annoyed enough; I just never found a pragmatic reason to use VSCode more. So really, I have zero envy or craving to become a full-time VSCode user; if anything, I might be forced into it by circumstances, but that's a different story.


> In Emacs I can reuse functions of one package in another

You say that like it's some kind of feature, but to me it sounds like a potential for name clashes, and for using private APIs that are prone to change/break.

> You call it "a legacy editor" without the slightest clue of what Emacs hackers are capable of doing

The thing is that it was VSCode that brought us LSP, not "Emacs hackers". It was VSCode that brought us LLM agent mode. All editors are catching up to VSCode, not the other way around.

LSP, LLMs, ssh/docker/podman remote development - what's the Emacs answer to those? (I mean, nowadays vim and Emacs have their own LSP clients built in, years after VSCode.)


I'd prefer Emacs Lisp to have Common Lisp-style packages (with namespaces and exports), but in practice it's not much of a problem. Emacs is intended to be programmed by its users, so you don't have "private APIs" in the same way proprietary systems do. Actually, internal functions and variables are usually marked with naming conventions these days.

LSP is a great thing to have, but it's actually much less capable than something like SLIME. If you've used SLIME, you'll see there's no comparison. LSP is a lot better than the nonsense I had to go through to get Java completions in the early 2000s, and I'm thankful to have it.


> You say like it's some kind of feature

Oh, you just have no idea. It feels absolutely empowering and immensely liberating to have access to EVERY single line of code running on and affecting your editor. Whenever I need to solve a problem, I don't need to ask permission, send PRs, dig through API documentation, google for answers, or yell at an LLM for not giving me answers - I can modify any behavior of any function on the fly and just move on. All that "it just works™" promise of other editors quickly evaporates when I, as a programmer, feel out of control of whatever happens on MY computer. Sure, there's always a possibility that my "stupid hacks" just break for no apparent reason. Except it rarely happens in practice, and when it does, I know how to quickly fix it or put in a workaround, because I can precisely pinpoint the exact line of code. The last time something broke and took me more than ten minutes to figure out was about two years ago - a combination of multiple upstream updates and my lousy customization on top confused me for a while. It's all about trade-offs: I'm happy to spend 15 minutes once every few years to fix some shit, if that gets me complete and total control over my environment.

I'm a hacker, not a florist - I want precision and complete transparency, not pretty buttons. I don't want any magic - I can't trust some extension to "just work" on a button click, without precisely knowing what it installs, where, and to what extent.

> what's the Emacs answer to those?

Ah, just like I said before, you're talking about things as if you're staring at the surface of the calm water of a lake. Except for an occasional splash, it may seem lifeless to you, yet you don't have the faintest idea of its depths or what lies beneath.

There's currently an explosion of different LLM packages for Emacs - gptel, gptel-agent, ECA, agent-shell, claude-code, claude-code-ide, monet, claude-repl - and numerous others I haven't even heard of; these are just off the top of my head.

The Emacs community may not be big enough, and may lack the incentives to constantly innovate, but they have no problem borrowing ideas from other places.

> All editors are catching up to VSCode

Wake me up when they figure out things like indirect buffers, or executable source blocks in different PLs that can pipe data into one another, or someone using VSCode to control their window manager. Or when I can use literate programming to manage my dotfiles. And I haven't even gotten to the REPL part - even with Joyride it still falls short; there's no "I can hook into everything and redefine anything" in Code.


> For me, VSCode implements everything that I've always expected from Emacs/Vim.

Good. For me, VSCode will likely never become anything that I expect from my text editor.

For coding, sure, Emacs may not be great for any specific language except some Lisps, but for plain text manipulation, OMG, Emacs still is the king.

I just can't see it ever replacing Emacs for note-taking - just yesterday I was showing someone a "reproducible research" workflow example in Org-mode where I had a source block that sends HTTP requests, then passes the results into a bash block where they get converted to EDN; then I connected that to a Clojure REPL and explored and visualized the data in it. Name one system that allows you to seamlessly pipe the results of one computational block into another, mixing different languages.

Today I made a table with some formulas to calculate some numbers. Does your note-taking app have spreadsheet-capable tables and embedded math formulas?

Two weeks ago I was dealing with a slew of logs coming from a k8s pod, and I wanted to explore them in my editor - I piped them from the terminal directly into an Emacs buffer. I can similarly pipe the contents of any given buffer into a different Unix command.

I control video playback directly from Emacs - it's very nice when I'm taking notes. My PDF documents blend into my color theme, which, by the way, changes automatically based on the time of day - Emacs has built-in solar and lunar calendars.

I search through my browser history and open tabs directly from Emacs - it's great for finding and grabbing a specific piece of text from the browser so I can put it into my notes.

I rarely need to open Jira in my browser; Emacs understands that "XYZ-12345" is a ticket and shows the ticket description in a tooltip, and I can browse its content in place - same for RFCs. My Emacs understands that a URL is a PR and lets me review it in place. It knows when it's looking at a GitHub repo URL and lets me clone it with a keypress, or explore CircleCI logs.

I never type anything longer than three words in any app. I've built a workflow that lets me quickly move text into my editor and back into the app. Why would I do it differently? I have all the tools I need - thesaurus, spellchecker, translation, etymology lookup, LLMs, etc.

Finally, once I got plain text under control, I realized that code is nothing but structured text. I have things like fetching the path to the exact line on GitHub while supplementing the fully-qualified name of the function - my colleagues don't have to guess what they're staring at; they can simply see it without ever opening the link.


OKAY.

I'll install it again. I hope you're happy.

Having a text-based Jira frontend seems attractive. (I've actually written myself a cmdline tool that uses the Jira API so I don't need to visit the website.)


> I hope you're happy

Not until you are happy, it makes me sad when Emacs makes people sad for whatever reason.

> Having a text based Jira frontend seems attractive

Yeah, I can't even start describing it; probably gonna make a video or something.

Like, I would put the cursor on plain text like "XYZ-34857" and it shows me a popup with the ticket description. If I ever wanted to add the status - todo/done/etc. - or the assignee, or something else to the popup, that's a simple change. From there I can browse the ticket; I can convert the string to a URL with the description - it's smart enough to recognize the mode I'm in and produce the proper markup. I can generate a branch name based on the ticket description, etc.

To clarify - anyone can do the same; I'm just delegating the task to go-jira, a cmd-line tool. You don't really need Emacs for that - it's doable in neovim and VSCode and even in a vanilla terminal. Yet the simplicity of making it work via Lisp is just an unmatched experience. These days I would ask a model, it would build a prototype, and I would iterate on it on the fly. It feels like playing a video game. I don't even blink - I see a problem and I start writing some Elisp in a scratch buffer. I keep hearing "I don't have time to tweak my config", but I'm not really tweaking anything - I'm just hacking solutions for the real problems that arise; I'm just being the definition of a programmer. And no need for sophisticated packages - my Jira requirements are, for now, satisfied with simple hacks in my config:

https://github.com/agzam/.doom.d/blob/main/modules/custom/ji...


Well, another option would be to use a C++ compiler, which supports templates, but limit the use of classes through a coding convention standard.


Not sure why this is down voted when the whole point of TFA is to torture the C language into doing something it can't really do. I guess there's an unspoken assumption in TFA that you are stuck using C and absolutely cannot use a different language, not even C++?


This looks far less tortured to my eye than C++. I think a lot of us have this self-imposed rule, for whatever reason, that we absolutely will not use C++.


> Well, another option would be to use a C++ compiler, which supports templates, but limit the use of classes through a coding convention standard.

When the other option is "ask the developers to practice discipline", an option that doesn't require that looks awfully attractive.

That being said, I'm not a fan of the described method either. Maybe the article could have shown a few more uses of this from the caller perspective.


"ask the developers to practice discipline" is a baseline requirement for coding in C


And it hasn't worked in practice. C unfortunately does not have a very big pit of success -- it's way too hard to do the right thing and way too easy to do the wrong thing.

The solution to this doesn't have to be "rewrite everything in Rust", but it does mean that you need to provide safe, easy implementations for commonly-screwed-up patterns. Then you're not asking people to be perfect C programmers; you're just asking them to use tools that are easier than doing the wrong thing.


> "ask the developers to practice discipline" is a baseline requirement for coding in C

Sure, but since there are roughly 10x more opaque footguns in C++, much less discipline is needed when coding in C than in C++.

The footguns in C are basically signed-integer over/underflows and memory errors. The footguns in C++ include all the footguns in C, and then add a ton more around object construction, object destruction, unexpected sharing of values due to silent and unexpected assignments, etc.

Just the rules on file-scope/global-scope initialisation alone can bite even experienced developers who are adding a new non-locally-scoped instance of a new class.


Unfortunately the majority has failed to attend the temple classes on such practices.


If only.


It never seems to work out that way though. C++ is just too large a language, and it gets bigger with each revision. The minute you hire a senior/principal engineer who loves C++, they'll make the case to enable "just this one" feature, and before you know it, you've got a sprawling C++ code base, not a "C with a light dusting of C++" code base.


C folks would rather badly reproduce C++ than acknowledge its TypeScript-like improvements over C.


C is sometimes used where C++ can't be. Exotic microcontrollers and other niche computing elements sometimes only have a C compiler. Richer, more expressive languages may also have additional disadvantages, and people using simpler, less expressive languages may want to enjoy some useful features that exist in richer languages without taking on all of their disadvantages, too. Point being, while C++ certainly has some clear benefits over C, it doesn't universally dominate it.

TS, on the other hand, is usable wherever JS is, and its disadvantages are much less pronounced.


It isn't the 1980s any longer; there's hardly a chip in such scenarios other than PIC-class parts, and even AVR gets to use C++.


8051s are still programmed almost entirely in C. There are C++ compilers available, but they're rarely used. Even on STM32, C is more popular. There's a perception - and not an unsubstantiated one - that C++ can more easily sneak in operations that go unnoticed.

C++ has many advantages over C, but it also brings some clear disadvantages that matter more when you want to be aware of every operation. When comparing language A against language B, it's not enough to consider what A does better than B; you also have to consider what it does worse.

That's why I don't think the comparison to TS/JS is apt. Some may argue that C++ has even more advantages over C than TS has over JS, but I think it's fairly obvious that its disadvantages compared to C are also bigger. For all its advantages, there are some important things that C++ does worse than C. But aside from adding a build step (which is often needed anyway), it's hard to think of important things that TS does worse than JS.


More like assembly, if devs are serious enough.

If there is any C, it is hardly different from using C as a cheap macro assembler, with lots of inline assembly.

Also, definitely a 1980s CPU.

It is more than apt, until C gets serious about having something comparable to std::array, span, string_view, non-null pointers, RAII, type-safe generics, strongly typed enumerations, safer casts, ...

Among many other improvements that WG14 will never add to C.


Again, when comparing two languages you can't just look at the advantages one of them has over the other. There's no doubt C++ has many important advantages over C. The reason to prefer C in certain situations is because of C++'s disadvantages, which are as real as its advantages. Even one of the things you listed as an advantage - RAII - is also a disadvantage (RAII in C++ is a tradeoff, not an unalloyed good). A comparison that only looks at the upsides, however real, gives a very partial picture.

Alongside all of its useful features, C++ brings a lot of implicitness - in overloading (especially of operators), implicit conversion operators, destructors, virtual dispatch - that can be problematic in low-level code (and especially in restricted environments). Yes, you can have an approved subset of C++, and many teams do that (my own included), but that also isn't free of pitfalls.


There isn't anyone pointing a gun at someone's head forcing them to use 100% of C++'s features in every single project.

There is an endless list of endemic C pitfalls that WG14 has proven not to care to fix.

The auto industry came up with MISRA, initially for C, exactly because of those issues.

Ideally both languages would be replaced by something better; until that happens, I stand by my point: the only reason to use C instead of C++ is not having a C++ compiler available, or being prevented from using one, as in most UNIX kernels.

I have held this point of view since 1993; I used C instead of C++ only when obliged to deliver my work in C, due to delivery requirements where my opinion wasn't worth anything to the decision makers.

So if I was already using C++ within the constraints of a 386SX running at 20 MHz, limited to 640 KB (up to 1 MB) of RAM, under MS-DOS, I certainly will not change now, in the computing reality of 2025.


> There isn't anyone pointing a gun to someone's head forcing them to using 100% of all C++ features in every single project.

Tell me how to use C++ without using RAII. You can't. Not being able to automatically allocate without also invoking the constructor is what I dislike most in C++. Another thing is that you can never be sure what a function call really does, because dynamic dispatch and output parameters aren't required to be explicit.

> I hold this point of view since 1993, having used C instead of C++ was only when obliged to deliver my work in C, due to delivery requirements where my opinion wasn't worth anything to the decision makers.

https://floooh.github.io/2019/09/27/modern-c-for-cpp-peeps.h...

C isn't ANSI C anymore!


I, too, wrote C++ for the 386 in the early nineties, and I, too, generally prefer it to C, but the fact remains that it has some real disadvantages compared to C. From the very early days people talked about exercising discipline and selecting a C++ subset - and it can and does work - but even that discipline isn't free. Avoiding destructors, for example, isn't easy or natural in C++; explicit virtual dispatch with hand-rolled v-tables is very unnatural.


> C folks rather reproduce badly C++ than acknowledge its Typescript like improvements over C

This is a rather crude misrepresentation; most C programmers who need a higher level of abstraction than C reach for Java, C# or Go over C++.

IOW, acknowledging that C++ has improvements over C still does not make the extra C++ footguns worth switching over.

When you gloss over the additional footguns, it looks like you're taking it personally when C programmers don't want to deal with those additional footguns.

After all, we don't choose languages based on which one offers the most freedom to blow your leg off, we tend to choose languages based on which ones have the most restrictions against blowing your leg off.

If your only criterion is "Where can I get the most features?", then sure, C++ looks good. If your criterion is "Where are the fewest footguns?", then C++ is at the bottom of the list.


Nah, it is called life experience, having met those kinds of people since the 1990s, starting on BBS forums.

My criteria is being as safe as Modula-2 and Object Pascal, as bare minimum.

C++ offers the tools, whereas WG14 has made it clear they don't even bother, including turning down Dennis Ritchie's proposal for fat pointers.


>> looks like you're taking it personally

> it is called life experience meeting those kind of persons

Looks like you are confirming that you are taking it personally.

I don't understand why, though.

You cannot imagine a programmer that wants fewer footguns?


Yes, when careless programmers are responsible for critical infrastructure systems and take a YOLO attitude to systems programming.


> Yes, when careless programmers are responsible for critical infrastructure systems, and rather take a YOLO attitude to systems programming.

Well, that's a novel take: "Opting for fewer footguns is careless". :-)

It's probably not news to you that your view is, to put it kindly, very rare.


Is it? Ask the governments and respective cyber security agencies.

And to finish this, as I won't reply any further,

"A consequence of this principle is that every occurrence of every subscript of every subscripted variable was on every occasion checked at run time against both the upper and the lower declared bounds of the array. Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interests of efficiency on production runs. Unanimously, they urged us not to--they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980 language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law."

-- C.A.R Hoare's "The 1980 ACM Turing Award Lecture"


The problem you're facing is that any argument or quote used to dismiss C on safety grounds applies to C++ even more than it applies to C.

Like this quote here that you posted.


Nope, and this is really the last comment.

First of all, the quote there is indirectly about C, with UNIX's adoption starting out of Bell Labs; C++ would only become known to the world in 1989, with the release of CFront 2.0.

Second, while you certainly can write C in C++, just as one can write JavaScript in TypeScript, the tools are there in the type system for anyone who cares, tools that WG14 has shown no interest in providing over 50 years of history.

Third, all C compilers worth using in professional scenarios are nowadays written in C++.

As a last point, while C++ has a head start over C in type-system improvements, it is by no means the final answer in systems programming; ideally both languages should be replaced by better, safer ones, of which there are already a few to choose from.

Unfortunately, as long as LLVM and GCC are around as industry standards based on C and C++, there is little hope that those improved languages will fully take over.

Thus when it comes to C vs C++, in a world where nothing else gets to replace them in all scenarios, C++ is the only answer to that duality of choice, unless one wants to keep re-inventing (badly) the solutions C++ has been providing since 1989.


Well, look; I replied initially because you're misrepresenting "I want fewer footguns" with "I don't care".

The only question was whether you're doing it on purpose or whether you really do think that there are zero programmers who want fewer footguns.

As far as the C vs C++ thing goes, if your measure is "How many OPTIONAL features does a language give me WRT safety", then sure, C++ is optionally safer than C.

If the measure is "How many extra footguns does the language provide", then no, C++ cannot, by any objective measure, be safer than C.

Since you're constantly re-framing the discussion towards "More features" and away from "fewer footguns", I think it is safe to say that "extra footguns", for you anyway, doesn't mean "less safe".

The way you constantly re-frame, though, reflects quite poorly on you; it's pretty obvious from the very first thread that I am counting footguns as the measure of how unsafe a language is. That's not an irrational measure, and yet you are willing to strawman the argument to make it seem irrational.


Actually this was my first instinct too. Just limit what you use C++ for, write C code with templates, and be done with it.

The problems, I am guessing, start when you are tempted into using the rest of the features one by one. You have generics. Well, next let's get inheritance in. Now a bit of operator overloading. Then dealing with all kinds of smart pointers...


What would be the detrimental effect of using smart pointers?


Oh, I didn't mean that smart pointers were bad or detrimental, or even that any of the other gazillion features of C++ are bad (or detrimental) on their own. Just that C++ has so many features that I don't think there is anyone in the world who knows how any one feature X interacts with feature Y, so your ability to reason about what you have written in C++ is significantly lowered.

If you can say "I will only use features A, X and Z of C++" and somehow enforce it, then you are mitigating a lot of the risk. IIRC Carbon (Google's new language to migrate their code off C++) came about because they themselves used C++ in a very bounded way (I recall a lot of the templates they created for their use of C++ actually resembled how Go code looks, and may have been one of the reasons for creating Go). But I am not sure how many mere mortals have that kind of tooling and discipline to limit themselves.


Unclear, not explicit ownership.


Is there a way to pass compiler switches to disable specific C++ features? Or other static analysis tools that break the build upon using prohibited features?


There is -fno-rtti, -fno-exceptions, -Wmultiple-inheritance, -Wvirtual-inheritance, -Wnamespaces, -Wsuggest-final-types, -Wsuggest-final-methods, -Wsuggest-override, -Wtemplates, -Woverloaded-virtual, -Weffc++, -fpermissive, -fno-operator-names and probably many more. The warnings can be turned into errors, e.g. -Werror=namespaces.


No two development groups agree on the desired features, so it would have to be a custom compiler plugin.

You could start with a Perl script that looks at the output of “clang++ -Xclang -ast-dump” and verifies that only permitted AST nodes are present in files that are part of the project sources.


For sure no two groups want the same subset, but is there no "standard way" to opt in/out in the ecosystem? It's strange that there are large orgs like Google enforcing style guidelines, yet manual code reviews are required to enforce them. (Or maybe my understanding of what's enforced is wrong.)


Yes, via static analysis tools it is possible.

As usual with additional tooling, there must exist some willingness to adopt them.

