Why is this always considered a Good Thing (TM), except when applied to C and C++?
Why is it OK when languages and systems bend over backwards to avoid breaking stuff that works, but the two languages that together support a huge part of the running software are expected to up and break ties with their glorious past and present?
Let me talk about my experience.
I was (yes, was) mostly a Haskeller. I loved using Haskell, and even back when I learned it (in 2007, I think), the book for learning the basics of the language had snippets that no longer worked.
But I still loved it. The language changed every year, and I was even among the proponents of those changes.
But after a while, you realise that your old projects stop working.
If you want to use a new library, you also need to update many dependencies. Each one has breaking changes, so you start rewriting your code.
In short: if you wrote some code and forgot about it, it will no longer work, or will be very hard to use with newer libraries. After a while this became very tedious.
In Clojure, the code I wrote 10 years ago is still working perfectly today.
The application still launches.
If a bug is discovered and I need to fix it, for example by upgrading or adding a new lib, I am not afraid of losing hours finding and fixing existing code that has suddenly become deprecated.
So yes, stability is VERY important for me now.
And last but not least, Rich Hickey talked about this in his talk and made a lot of very good points:
There's a happy medium, I think, and I also imagine that it depends on the scope of change and the kind of applications involved. Speaking personally, I'm OK rewriting code, say, every decade. In the Ruby world, things moved so fast that my code would rot every six months. Now that I've been maintaining the same software for 20 years, I'm really starting to care about that. ;-)
I maintain dependencies for a lot of people--nowhere near operating systems or stdlibs, but my OSS gets fairly broad use--and I try very hard not to change their APIs too much! It's a real concern.
I'll also note that a lot of the objections to the way C++ does backwards compatibility stem from its adherence to an ABI which the committee refuses to break, but also refuses to promise it will never break.
Many of the problems that can't be fixed for backward-compatibility reasons are ones that would break the ABI, not new code. I think that's very different from other languages' policies, which, from the ones I'm most familiar with, are about building old and new code together, rather than linking old and new code together.
It makes for a much more restrictive, but also ambiguous and not guaranteed, set of requirements on any change.
The first one is a security nightmare; the second one probably has more features and intricacies than the other top three (by popularity) programming languages combined.