In the Linux world and even Haiku, there is a standard package dependency format, so dependencies aren't really a problem. Even OSX has Homebrew. Windows is the odd man out.
On the contrary, most Linux distributions use platform-specific global-only packaging formats for C++ libraries, and if anything I think that's holding back the development of a real, C++-native packaging/dependency manager.
As with all things commercial, my neighbour keeps 40 hives and extracts too much honey in the autumn, resulting in desperate hungry bees in the spring that get very aggressive. If he left them more honey (less profit), they wouldn't be as hungry or aggressive. The entire neighbourhood suffers due to the antics of a single owner. Legally, he's within the council regulations, so there is nothing we can do … It's impossible to sit outside from 9am-6pm in April and May. Once there is enough food, they calm down.
Are they somebody else's livestock when they're on your property and they lack any kind of brand or identifying mark?
If they are somebody else's livestock and they're attacking your person en masse, which law enforcement agency does one call? How much self defense is permitted?
In the Haiku windowing system, each app window gets its own thread, so dialog boxes run in a different thread from the main window, and in a different thread from the core app. In Linux, all windows share the same message-loop thread. A simple port reveals threading issues on Haiku which don't exist on Linux.
To work around this, all window messages in ported apps are marshalled to execute sequentially. That adds a small overhead, and the app no longer spreads work across the available threads, so it's noticeably slower.
Compare a native Haiku app with a ported app: one is smooth as ice while the other isn't. Users notice it, especially on many-core systems.
> In Linux, all windows share the same message loop thread.
I'm no expert, but aren't you just talking about Xorg here? As far as my limited knowledge goes, there's nothing inherent in the Wayland protocol that would imply this.
There is always the one app which isn't available elsewhere that prevents migration. I've been a full-time Amiga user, BeOS user, OSX user; currently I multiboot Win11, Linux Mint and Haiku nightly, and Windows 11 still gets 99% of my screen time due to the one app. All the other apps I use daily are cross-platform.
For everyone it's a different app. For me, it's Visual Studio 2022 and its world-class visual debugger that can inspect my complex vectors. Sadly, nothing similar exists (Xcode is slow, QtCreator is slow, etc.).
I do enjoy Haiku the most, but I can't be as efficient when developing embedded libraries. I professionally develop cross-platform libs, developed on Win11 but deployed on embedded Linux. The irony.
I'm so frustrated that the default for password fields is "hidden". The number of times I've had someone observing me type a password is < 0.001% of password entries. There's a post-it note with visible passwords on most people's monitors anyway.
Reverse the logic and make a “sudo_h” script which hides the password entry for those rare times you need it.
A 256 KiB stack per fiber is still insane overhead compared to actors. If we surveyed the programming community, I'd guesstimate that less than 2% of devs even know what the Actor model is, and an even smaller percentage have actually used it in production.
Any program that has at least one concurrent task running on a thread (naturally there'll be more than one) is a perfect reason to switch to the Actor programming model.
Even a simple print() function can see a performance boost from running on a second core. There is a lot of background work in printing text (parsing font metrics, indexing screen buffers, preparing scene graphs, etc.), and it's really inefficient to block your main application doing all this work while background cores sit idle. Yet most programmers don't know about this performance boost. A sad state of our education and the industry.
Actors are a model; I have no clue why you're saying there is a particular memory cost to them on real hardware. As far as I can tell, you can implement actors using fibers and a postbox.
I've no idea what the majority of programmers know or do not know about, but async logging isn't unknown and is supported by libraries like Log4j.
2 KiB is a peculiar size. The typical page size is 4 KiB, and you probably want to allocate two pages: one for the stack and one for a guard page for stack-overflow protection. That means a fiber's minimal size ought to be 8 KiB.
You should look into how Go manages goroutines. It does indeed use 2 KiB stacks by default without the need for guard pages; it detects overflow through a different mechanism (compiler-inserted stack checks that grow the stack on demand). Other runtimes can do similar things.
256 KiB is just a placeholder for now. The default will get reduced as we gain more experience with the draft implementation. The proposal isn't complete yet.
People fixate on stack size, but memory fragmentation is what bites as fiber counts grow, and actors dodge some of that at the cost of more message-passing overhead plus debugging hell once state gets hairy. Atomics or explicit channels cost cycles that never show up in naive benchmarks. If you need a million concurrent 'things' and they are not basically stateless, you're already in Erlang country, and the rest is wishful thinking.
What is more expensive, copying the message, or memory fencing it, or do you always need both in concurrent actors? Are you saying the message passing overhead is less than the cost of fragmented memory? I wouldn't have expected that.
Usually both, but they show up in different places.
You need synchronization semantics one way or another. Even in actor systems, "send" is not magic. At minimum you need publication of the message into a mailbox with the right visibility guarantees, which means some combination of atomic ops, cache coherence traffic, and scheduler interaction. If the mailbox is cross-thread, fencing or equivalent ordering costs are part of the deal. Copying is a separate question: some systems copy eagerly, some pass pointers to immutable/refcounted data, some do small-object optimization, some rely on per-process heaps so "copy" is also a GC boundary decision.
The reason people tolerate message passing is that the costs are more legible. You pay per message, but you often avoid shared mutable state, lock convoying, and the weird tail latencies that come from many heaps or stacks aging badly under load. Fragmentation is less about one message being cheaper than one fence. It is more that at very high concurrency, memory layout failures become systemic. A benchmark showing cheap fibers on day one is not very informative if the real service runs for weeks and the allocator starts looking like modern art.
So no, I would not claim actor messaging is generally cheaper than fragmented memory in a local micro sense. I am saying it can be cheaper than the whole failure mode of "millions of stateful concurrent entities plus ad hoc sharing plus optimistic benchmarks." Different comparison.
Fibers are primarily for when you have a problem which is easily expressible as thread-per-unit-of-work, but you want N to be large. They can be useful for e.g. a job system as well, and in that case the primary advantage is the extremely low context-switch time, as well as the manual yielding.
There are lots of problems where I wouldn't recommend fibers, though.
Read the article. Satya brought the share price from $35 to $400; that won't kill Microsoft.
I guess what you're trying to say is that it will kill Windows. But that won't happen, since an enormous percentage of businesses run the Windows ecosystem.
Let's face it, Windows is in maintenance mode; it's pointless for MS to invest heavily in it since there is no threat of businesses switching to Linux or something else. MS devs' primary maintenance job these days seems to be scrambling the MS Office API every six months or so to break Wine and the other Linux non-emulators: Wine devs stuck in constant rearrange-the-deck-chairs mode, while Win32+Office devs just add a new parameter to an API interface in their six-month cycle of undocumented API breakage.
You need a better Office than MS Office to break the cycle, and that will be a web-based office/collaboration tool. And guess where MS Azure and its web services fit in this brand-new world.
Microsoft's dominance isn't going away in our lifetimes. Only non-US government pressure may force other countries to switch to a flavour of Linux due to US sanctions; only then might you see a visible migration from Windows. This is a decades-long process.
He is definitely killing Windows, though. Lack of quality in foundational products doesn't affect share prices until several years have gone by. You'll only see the effect in 2029-30, as users slowly migrate away and license contracts are slowly dropped. I guess the blame will fall on the next CEO, and Nadella will be lionized as Microsoft's most successful CEO.
Can Apple marketing please reduce the insane quantity of adjectives in its releases? It has been nauseating to read for decades and sickens me when visiting their sites. An early exit from me, an ex-OSX dev of over a decade; I won't be back until their core culture changes.
I started with GIMP and still pull it up first before worrying about Photoshop. For me, the "fighting muscle memory" comes from "huh, GIMP does it like this..."
I think it might be muscle memory. I started using GIMP in 2005 or so before the single window mode, and my muscle memory is tailor-fit to it. It felt like an extension of myself.
With GIMP 3, there are a lot of improvements! But it also breaks my muscle memory a lot. GIMP 3 is objectively better, but I find myself opening 2.10 regularly.