This is great because it means someday (possibly soon) Linux development will slowly grind to a halt and become unmaintainable, so we can start from scratch and write a new kernel.
I don't consider C# a very large language. Most of what has been added removes boilerplate. Swift, a much younger language, is way more complicated IMO.
Someone doing maintenance work on a C# project might find code going all the way back to C# 1.0.
Also, the improvements to low-level programming that have been added since C# 7, plus a few semantic changes, aren't about removing boilerplate code.
Then, since a language is useless without its standard library, there have been plenty of changes in how to do P/Invoke, COM interop, and Web application development, and you naturally need to know in which release specific features were introduced.
I don’t understand why today’s laptops are so large. Some of the smallest "ultrabooks" getting coverage sit at 13 inches, but even this seems pretty big to me.
If you need raw compute, I totally get it. Things like compiling the Linux kernel or training local models require a high level of thermal headroom, and the chassis has to dissipate heat in a manner that prevents throttling. In cases where you want the machine to act like a portable workstation, it makes sense that the form factor would need to be a little juiced up.
That said, computing is a whole lot more than just heavy development work. There are some domains that have a tightly-scoped set of inputs and require the user to interact in a very simple way. Something like responding to an email is a good example: typing "LGTM" requires a very small screen area, and it requires no physical keyboard or active cooling. Checking the weather is similar: you don’t need 16 inches of screen real estate to go from wondering if it’s raining to seeing a cloud icon.
I say all this because portability is expensive. Not only is it expensive in terms of back pain; maintaining the ecosystem required to run these machines gets pretty complicated. You either end up shelling out money for specialized backpacks or fighting for outlet space at a coffee shop just to keep the thing running. In either case, you’re paying big money (and calorie) costs every time a user types "remind me to eat a sandwich".
I think the future will be full of much smaller devices. Some hardware to build these already exists, and you can even fit them in your pocket. This mode of deployment is inspiring to me, and I’m optimistic about a future where 6.1 inches is all you need.
I dunno. It kinda works, and points for converting the whole article. But something is lost in the switch-up here. The size of a laptop is more or less the size of the display (unless we’re going to get weird and have a projector built in), so it is basically a figure-of-merit.
Nobody actually wants more weights in their LLMs, right? They want the things to be “smarter” in some sense.
Try being over 30 sitting at a desk your whole life and then try to use a 13” screen. Eye strain is a huge deal.
My opinion on this changed drastically when I started interacting with people outside of tech and not my own age. A device you struggle to see is miserable.
A typical use case for large laptops is when you want to store it away after work or when you only carry it occasionally. I have a PC for coding at home, but use a thinkpad with the largest screen I could get for coding in my camper van (storing it away when not using it, because of lack of space) or when staying at my mother's home for longer (setting it up once at the start of my visit). I also have another very small, light and inexpensive subnotebook that I can carry around easily, but I rarely use it these days and not for coding at all.
AI creates the possibility to disrupt existing power structures - this is the only reason it gathers so much focus. If it were merely a tool that increased the efficiency of work, few would care so much. We already frequently get such tools, and they draw far less attention.
So far all it has done is entrench existing power structures by dis-empowering people who are struggling the most in current economic conditions. How exactly do you suppose that's going to change in the future if currently it's simply making the rich richer & the poor poorer?
I see that C++26 has some incredibly obscure changes in the behavior of certain program constructs, but this does not mean that these changes are improvements.
Just reviewing the actual hardening of the standard library: it looks like in C++26 an implementation may be considered hardened, in which case, if certain preconditions don't hold, a contract assertion fails, which in turn invokes a contract-violation handler, which may or may not result in a predictable outcome depending on which of 4 possible "evaluation semantics" is in effect.
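To make that mechanism concrete, here is a rough sketch using the P2900-style pre() syntax. The toy_buffer type is my own stand-in, not the actual library wording, and it needs a compiler with experimental C++26 contracts support:

    #include <cstddef>

    // Stand-in for a hardened container accessor. In a hardened standard
    // library the precondition below would sit on something like
    // std::span::operator[]; this toy type is purely illustrative.
    template <class T, std::size_t N>
    struct toy_buffer {
        T data[N];

        // Hardened precondition: the index must be in range.
        T& operator[](std::size_t i)
            pre(i < N)   // contract assertion, checked per the active semantic
        {
            return data[i];
        }
    };

    int main() {
        toy_buffer<int, 4> buf{};
        buf[1] = 42;   // fine
        buf[9] = 0;    // precondition violation; what happens depends on the
                       // evaluation semantic the implementation chose:
                       //   ignore        - not checked, the OOB write is plain UB
                       //   observe       - handler runs, then execution continues
                       //                   (and the OOB write still happens)
                       //   enforce       - handler runs, program terminates
                       //   quick_enforce - program terminates immediately
    }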
Oh and get this... if two different translation units have different evaluation semantics, a situation known as "mixed mode", then you're shit out of luck with respect to any safety guarantees, as per this document [1], which says that mixed-mode applications shall choose arbitrarily among the set of evaluation semantics, and as it turns out the standard library treats one of the evaluation semantics (observe) as undefined behavior. So unless you can get all third-party dependencies to use the same evaluation semantic, you have no way to ensure that your application is actually hardened.
So is C++26 adding changes? Yes, it's adding changes. Are these changes actual improvements? It's way too early to tell, but I do know one thing... it's not at all uncommon for C++ to introduce new features that substitute one set of problems for a new set of problems. There's literally a 300-page book that goes over 20 distinct forms to initialize an object [2], and many of these forms exist to plug problems introduced by previous forms of initialization (quick example below)! For all we know the same thing might be happening here, where the classical "naive" undefined behavior is being alleviated but in the process C++ is introducing an entirely new class of incredibly difficult-to-diagnose issues. And lest you think I'm just spreading FUD, consider this quote from a paper titled "C++26 Contracts are not a good fit for standard library hardening" [3], submitted to the C++ committee regarding this upcoming change, arguing that it risks giving nothing more than the illusion of safety:
>This can result in violations of hardened preconditions being undefined behaviour, rather than guaranteed to be diagnosed, which defeats the purpose of using a hardened implementation.
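And to make the earlier initialization point concrete, here's the classic pitfall pair (my own example, not taken from the book):

    #include <cassert>
    #include <vector>

    int main() {
        // Two nearly identical spellings with very different meanings:
        std::vector<int> a(3, 5);   // three elements, each equal to 5
        std::vector<int> b{3, 5};   // two elements: 3 and 5
        assert(a.size() == 3 && a[0] == 5);
        assert(b.size() == 2 && b[0] == 3);
        // List-initialization was added partly to fix older pitfalls
        // (like the most vexing parse), but it brought this one with it.
    }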
I believe there were some changes in the November C++ committee meeting that (ostensibly) alleviate some of the contracts/hardening issues. In particular:
- P3878 [0] was adopted, so the standard now forbids "observe" semantics for hardened precondition violations. To be fair, the paper doesn't explicitly say how this change interacts with mixed mode contract semantics, and I'm not familiar enough with what's going on to fill in the gaps myself.
- It appears there is interest in adopting one of the changes proposed in D3911 [1], which introduces a way to mark contracts non-ignorable (example syntax is `pre!()` for non-ignorable vs. the current `pre()` for ignorable). A more concrete proposal will be discussed in the winter meeting, so this particular bit isn't set in stone yet.
Mixed mode is about the same function compiled with different evaluation semantics in different TUs, and it is legit. The only case they are wondering about is how to deal with inline functions, and they suggest ABI extensions to support it at link time. None of what you said is an issue.
> The possibility to have a well-formed program in which the same function was compiled with different evaluation semantics in different translation units (colloquially called “mixed mode”) raises the question of which evaluation semantic will apply when that function is inline but is not actually inlined by the compiler and is then invoked. The answer is simply that we will get one of the evaluation semantics with which we compiled.
> For use cases where users require strong guarantees about the evaluation semantics that will apply to inline functions, compiler vendors can add the appropriate information about the evaluation semantic as an ABI extension so that link-time scripts can select a preferred inline definition of the function based on the configuration of those definitions.
The entirety of the STL is inlined so it's always compiled in every single translation unit, including the translation units of third party dependencies.
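To picture why that matters, here is my own compressed sketch of three files (not from the paper); I've deliberately left the option that selects a semantic unnamed, since it's vendor-specific:

    // ---- shared.h ---------------------------------------------------
    // Header-only, so this inline function is compiled into every
    // translation unit that includes it (the same situation as the STL).
    #pragma once
    #include <cstddef>

    inline int checked_get(const int* p, std::size_t n, std::size_t i)
        pre(i < n)   // contract precondition on an inline function
    {
        return p[i];
    }

    // ---- a.cpp (your code) ------------------------------------------
    // Built with whatever vendor option selects the "enforce" semantic.
    #include "shared.h"
    int from_my_code(const int* p, std::size_t n) {
        return checked_get(p, n, 7);
    }

    // ---- b.cpp (a third-party dependency) ---------------------------
    // Built with a different semantic, say "ignore" or "observe".
    #include "shared.h"
    int from_dependency(const int* p, std::size_t n) {
        return checked_get(p, n, 7);
    }

    // The linker keeps one out-of-line copy of checked_get, so per the
    // mixed-mode wording you "get one of the evaluation semantics with
    // which we compiled" -- i.e. whether the bounds check runs at all
    // depends on which copy the linker happened to pick.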
Also, it's not just me saying this; the authors of the MSVC standard library and the GCC standard library point out these same issues [1].
Legit as in allowed, and not the issue you're trying to make it out to be, ok? I read the paper, if that wasn't already obvious from my comment. What you said is factually incorrect.
Not sure I understand what point you're trying to dispute. It's not obvious at all that you read either my post or the paper I posted, authored by the main contributors to MSVC and GCC, about the issues mixed-mode applications present to the implementation of the standard library, given that you haven't presented any defense of your position that addresses these issues. You seem to think that just declaring something "legit" and retorting "you are incorrect" is sufficient justification.
If this is the extent of your understanding, it's a fairly good indication that you do not have sufficient background on this topic and may be expressing a very strong opinion out of ignorance. It's not at all uncommon for those with the most superficial understanding of a subject to express the strongest views on it [1].
Doing a cursory review of some of your recent posts, it looks like this is a common habit of yours.
I have literally copy-pasted the fragments from the paper you're referring to which invalidate your points. How is that not obvious? Did you read the paper yourself, or are you just holding strong opinions, as you usually do whenever there is something to lash out at in C++? I'm glad you're familiar with the Dunning-Kruger effect; that means there is some hope for you.
The problem is that violation of preconditions being UB in a hardened implementation sort of defeats the purpose of using the hardened implementation in the first place!
This was acknowledged as a bug [0] and fixed in the draft C++26 standard pretty recently.
> The proposal simply included a provision to turn off hardening, nothing else.
(Guessing "the proposal" refers to the hardening proposal?)
I don't think that is correct, since the authors of the hardening proposal agreed that allowing UB for hardened precondition violations was a mistake and that P3878 is a bug fix to their proposal. Presumably the intended way to turn off hardening would be to just... not enable the hardened implementation in the first place?
Using #ifndef NDEBUG in templates is one of the leading causes of one-definition rule violations.
At least traditionally it was common not to mix debug builds with optimized builds between dependencies, but now, with contracts introducing yet another orthogonal configuration axis, it will be that much harder to ensure that all dependencies use the same evaluation semantic.
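A minimal sketch of that failure mode (my own example, assuming one debug TU and one release TU sharing a header):

    // ---- checked.h --------------------------------------------------
    #pragma once
    #include <cassert>
    #include <cstddef>

    // The body of this template differs depending on whether NDEBUG is
    // defined, so two TUs built with different settings see two different
    // definitions of the same entity: an ODR violation, no diagnostic
    // required.
    template <class T>
    T& checked_at(T* data, std::size_t size, std::size_t i) {
    #ifndef NDEBUG
        assert(i < size && "index out of range");
    #endif
        return data[i];
    }

    // debug_tu.cpp   : compiled without NDEBUG (check compiled in)
    // release_tu.cpp : compiled with -DNDEBUG  (check compiled out)
    // Both instantiate checked_at<int>; the linker keeps one copy, and
    // whether the bounds check actually runs depends on which one it kept.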
> We are left with only a professional and personal obligation to reemphasize the obvious: Ask what you do know, what you should know, and how big the gap is between them before embarking on creating an IT system. If no one else has ever successfully built your system with the schedule, budget, and functionality you asked for, please explain why your organization thinks it can
Translation: "leave it to us professionals". Gate-keeping of this kind is exactly how computer science (the one remaining technical discipline still making reliable progress) could become like all of the other anemic, cursed fields of engineering. People thinking "hey, I'm pretty sure I could make a better version of this" and then actually doing it is exactly how progress happens. I hope nobody reads this article and takes it seriously.