This is happening right now, on a massive scale, in Australia's NEM (National Electricity Market). Our market operator has recently moved to five-minute pricing and settlement windows to accommodate the rapidly changing energy mix throughout the day, as well as rooftop solar; lots of it.
Most rooftop solar systems were installed before home batteries and smart inverters were common, so the market often finds itself in negative pricing during the day, incentivising investment in grid-scale batteries and pumped hydro, which can be paid both for storing the excess during the day and for exporting it in the evening peak. Recently, as home batteries and smart inverters have started to become more accessible for home-owners, it has become common to join your household onto a 'virtual power plant' with your electricity retailer, who then commands your household battery alongside thousands of others, much as this article describes, and pays a credit onto your bill. The other interesting development has been the wholesale pricing retailer; not only do they apply the live wholesale price to your consumption, but also your feed in. I know one of them (Amber) can connect to your battery and drive its charging/discharging in response to the market price. It's not been uncommon to hear of people's bills ending up in credit, as the capacity of their battery combined with the price volatility far outweighs the price impact of their consumption.
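To make that arithmetic concrete, here's a back-of-envelope sketch with entirely made-up numbers (the battery size, efficiency and prices are all hypothetical, not from anyone's actual bill):

```csharp
using System;

// One charge/discharge cycle for a spot-exposed home battery.
double capacityKWh = 13.5;          // hypothetical usable capacity
double roundTripEfficiency = 0.90;  // losses between charging and discharging
double chargePrice = -0.05;         // $/kWh: a negative daytime price, i.e. you're paid to absorb excess solar
double dischargePrice = 1.50;       // $/kWh: a volatile evening peak

double perCycle = capacityKWh * roundTripEfficiency * dischargePrice // export leg
                + capacityKWh * -chargePrice;                        // paid-to-charge leg
Console.WriteLine($"roughly ${perCycle:F2} for one cycle");          // about $18.90
```

A few cycles like that a week can outweigh a typical household's consumption cost, which is how the bill ends up in credit.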
The volatility poses its own challenges, however. Traditional retailers, which typically provide flat or peak/off-peak rates, have been struggling to economically provide competitive feed-in tariffs to their customers with older rooftop solar systems; systems which are unaware of the pricing environment. Many of their customers have these solar systems which export straight into negative pricing, creating a loss for the retailer that they have to somehow recoup. As far as I understand, they can't legally have a negative solar feed-in tariff, so I assume they're shifting the loss to the consumption tariff. Even though our daytime prices are often below $0/MWh, the average Australian will tell you that their electricity bill is the most expensive it's ever been.
> The other interesting development has been the wholesale pricing retailer; not only do they apply the live wholesale price to your consumption, but also your feed in.
It works the same in the Nordics: you can purchase and sell power at the spot rate.
Our settlement is still hourly, but will soon change to 15-minute intervals.
Our power here is still so cheap, and while the pricing is volatile, the jumps aren't yet really large enough for battery systems to give a positive ROI. At least not for consumers.
For a company to get financing to build a battery, it needs some certainty it will not go bankrupt, which means a contract with someone that ensures it will make money. Gambling on the arbitrage opportunity in electricity market rates alone isn't going to get a 100-million-dollar loan.
The last line of your comment is the most interesting to me: electricity is more expensive than ever in Australia.
I've only read the abstract, but hasn't this been achievable for at least the last twenty years with dependency injection? I've seen developers time and time again conflate microservices with the general concepts of service abstraction and separation of concerns. These things existed before microservices, and will continue to exist.
A good DI implementation allows one to define these business services as interfaces and implementations within an initial monolith, and then trivially reimplement those interfaces against an API or RPC when it comes time to scale. Interfaces can be separated out into a common library which can be referenced in all parts of the system. I am simply curious as to why this has been so staunchly avoided over the past decade of microservices hype.
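A minimal sketch of what I mean, assuming the stock Microsoft.Extensions.DependencyInjection and Microsoft.Extensions.Http packages; the service names are invented for illustration:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;

// The business service is defined once, as an interface in a shared library.
public interface IInventoryService
{
    Task<int> GetStockLevelAsync(string sku);
}

// Monolith era: an in-process implementation (database access stubbed out here).
public sealed class LocalInventoryService : IInventoryService
{
    public Task<int> GetStockLevelAsync(string sku) => Task.FromResult(42);
}

// Later, when that area is split out: the same interface, now backed by an HTTP call.
public sealed class HttpInventoryService : IInventoryService
{
    private readonly HttpClient _http;
    public HttpInventoryService(HttpClient http) => _http = http;

    public async Task<int> GetStockLevelAsync(string sku) =>
        int.Parse(await _http.GetStringAsync($"inventory/{sku}/stock"));
}

public static class CompositionRoot
{
    public static ServiceProvider Build(bool useRemote)
    {
        var services = new ServiceCollection();
        if (useRemote)
            // Typed-client registration wires an HttpClient into the remote implementation.
            services.AddHttpClient<IInventoryService, HttpInventoryService>(
                c => c.BaseAddress = new Uri("https://inventory.internal/"));
        else
            services.AddScoped<IInventoryService, LocalInventoryService>();
        return services.BuildServiceProvider();
    }
}
```

Callers only ever ask the container for IInventoryService, so moving that slice of the monolith out of process is a change to the composition root, not to the callers.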
From lived experience, creating a cross-platform application by abstracting the relevant native interfaces, and targeting the web with all the _fun_ that entails, are effectively equal levels of pain. Developing rich web apps isn't free either; I needn't go into the issues surrounding the revolving door of frameworks, the contrived state management patterns that seem to change every year, and the total lack of standard controls outside of built-in web elements that look awful, which inevitably means that at some point _you will_ have to be doing DOM manipulation for something that is otherwise built-in on an OS's native toolkit, etc. etc. And then when you put all of this together, you add yet another 500MB Chromium instance onto your user's desktop and the thing feels like a social networking app.
I’m not going to pretend that there’s currently a better solution, because for the current set of constraints everything sucks. .NET has a few up-and-coming frameworks for cross-platform UI, but nothing as mature as WPF was. Java still produces noticeably slow applications. React Native could perhaps get part of the way there, but as far as I’m aware it mostly targets mobile.
Yep, what's missing is a WebView standard for desktop, like Android and iOS provide, where these apps are a thin wrapper around a system-provided browser engine. And let the user choose the engine (Gecko, Blink, Servo, ...).
The concern is not really around disk but memory, opening just a few Electron app instances is enough to bring a machine with 8GB of RAM to its knees. I’m almost convinced that this is some kind of a planned obsolescence plot.
A standard interface for web views would be great, I’d love to force everything onto a consistent version of Gecko, but really we need a new native cross-platform batteries-included framework with controls and layouts for 90% of standard business scenarios.
You can solve that with such a standard. Like when you're selecting the Gecko engine, it'll always run a Firefox process in the background, serving not just the normal Firefox browser frontend but also these WebViews. Kind of like ChromeOS does it. Of course every app will get its own isolated browser process, but that's already the case when you open multiple tabs, and it performs great.
No you won't, because it will be the same complaints from Electron folks: that the provided Gecko isn't the version they actually care about, and that testing is soooo hard when you care about anything beyond shipping Chrome with your application.
I remember reading this site so much when I was eight years old. It's been around for at least sixteen years. Glad to see it's still up; it looks exactly the same, too.
There was a similar installation at the Australian Centre for the Moving Image in Melbourne in 2015. Not sure if it's still there; at least when I was there last, it emailed your video as an FLV, so I gather the setup was pretty dated.
Not sure how different things are in other parts of the world, but here in Australia all of our debit cards are on the regular Visa/Mastercard networks; you can use them wherever you can use a credit card online. It's just that the transaction is likely to decline if you don't have a balance in your account.
I actually find the metaphor quite useful: as a developer I can build a GUI application utilizing a common menu metaphor but _not_ always have a window open; essentially an application is a 'toolbox', presenting its tools via the global menu, and documents/'work' appear as windows. For context, I used Windows all my life before switching to Mac about six years ago, and I find this makes a lot more sense than one menu per window, where application developers often like to go their own way and implement their own menu metaphors.
Before the inevitable comments regarding Apple's moves with macOS and how the hacker 'tide' is supposedly turning back against Apple, I think I'll chime in with my two cents. I switched from my 2015 MBP to a Surface Book 2 at the end of 2019, and switched back to my same, six-year-old MBP just a few months ago. To keep things short, whilst WSL puts Windows miles ahead of where it once was, I find that the same old rough-edges still remain.
I often dock between standard-DPI displays and portable mode (with the high-DPI/'retina' display). I first noticed back in 2019 that Windows doesn't correctly handle DPI switching for window borders, Explorer, and notifications (the old 2x scaling level remains when switching from portable to docked), and filed feedback with the built-in app on Windows. I worked around this by killing dwm.exe and explorer.exe every time I docked my Surface. This issue was still present earlier this year, and, deciding I'd had enough of dealing with all these little Windows 'quirks' wherever they arose, I switched back to my old Mac.
It turns out that SIP and Gatekeeper aren't nearly as much of a problem as I was led to believe; neither of these features has hampered me once. The Big Sur interface changes, whilst I thought I'd never get used to them, have actually grown on me. Since switching back, I've discovered a lot of quality native apps that simply have no analog on Windows; OmniFocus stands out here. And as always, Homebrew still exists and works just as well as it always has for most of my *nix-related tasks.
Hearing about the M1 performance improvements, I can see myself staying on Mac for quite some time yet–I'll upgrade to an M1 MacBook Air once this 2015 MBP (six years old!) kicks the bucket.
Edit: For some context, I'm mainly a .NET developer. .NET Core/5 is a game-changer for cross-platform work, and development is first class on really any system nowadays. I've settled on JetBrains Rider for my IDE and find myself generally happier than I was with VS on Windows.
It's funny that it has now been around 9 years since Apple introduced retina MacBooks, and their implementation of HiDPI was good from the start, but, while it's getting better, Windows and Linux still need to catch up.
It’s because Apple was very forward-looking in 1999 and decided to use floating point coordinates for the CoreGraphics API. Nothing on the Mac draws using exact integer pixels (since the death of the transitional Carbon API anyway). Scaling was built in from scratch.
The scaling was quite glitchy in 2009 [0], and in 2021, the only allowed scale factors are 100% and 200% — so you don’t need floating-point numbers for it.
If you want to position an @2x view (1 pt = 2 px) you need to use fractional points. A view at (0, 0.5) in points is (0, 1) in pixels. Resolution Independence was a separate concept.
Well, that's the whole thing. HiDPI in and of itself works well. I use it with a 24" UHD monitor and I absolutely love the setup.
But the main issue with Linux, at least with X11, is that it doesn't handle multiple displays with different DPI settings.
As you noted, Qt seems to work fine, as do some other toolkits that have dynamic scaling according to the monitor size. Alacritty does this too, for example (don't know what toolkit they use, though).
But GTK in particular is terrible. Yes, you can configure it to use scaling, in which case it works well, IF you don't need fractional scaling (though recent versions of Gnome seem to support it somehow).
But now, if you're going to use, say, a HiDPI laptop with a "regular", larger external monitor, you're gonna have a bad time. If you enable scaling, the widgets will be huge on the LoDPI display. If you disable it, widgets will be teeny tiny on the HiDPI one.
Note that the problem isn't really with X11 (or at least Xorg) since it does provide the necessary information via RandR to perform per-monitor scaling.
The issue is really with window managers and toolkits. What needs to happen is that window managers ask the applications (the top-level windows, really) to scale themselves (this could be done not just for DPI purposes), and then the applications should apply that scaling.
(applications should expose that they can support this - like they expose that they support handling a 'close window' event - so that compositing window managers can do it themselves for applications that do not support it, which is also why this needs to go through the window manager)
Qt can do this but AFAIK window managers do not, because there aren't any standard events for that - i remember an email about this exact topic when i took a look at the Xorg mailing list for WMs some time ago, but it didn't seem to be going anywhere (in that it barely had any replies).
So it really is about standardizing how it is to be done. I think the main reason it isn't done is that such setups are comparably rare (i mean hidpi displays themselves are very rare - if you check any desktop resolution stats they often barely are a blip - and having both a hidpi display and a regular dpi one is even less common than that) so developers aren't that interested in implementing such a thing.
One way it could be done however is also pushing the idea that this isn't just for multimonitor setups: as i wrote above, you can use that for generic scaling, which is useful even for single monitor regular dpi setups (if anything it can be very useful for low resolution setups - like 1366x768 and similar which are way more common than hidpi - when faced with applications with a lot of padding, etc).
The issue with Linux is that nothing is part of the system. There's no single standard GUI toolkit. There's no single standard windowing system. There's way too many variations of everything.
For decades, the standard windowing system was X11, and not just for Linux but for other systems as well.
The issue was elsewhere - it provided a standardized wire protocol and you could not break it. It could be extended, but applications not aware of the extension would not be able to benefit. Windows and Mac on the other hand kept the protocol proprietary and you had to use the supplied libraries; you could not talk to the display server directly. So Microsoft and Apple could update these libraries; on Linux, there was no such option, since all the apps using the wire protocol instead of client libraries would be left out in the dark.
This isn't really an issue; you just need people to agree on doing some common stuff (even if they do their own stuff elsewhere).
And this is really not that dissimilar under Windows: nowadays you have a lot of different GUI toolkits and applications still need to "opt-in" and explicitly support stuff like scaling, so you have a lot of applications that do not do that.
They don't even agree on what they are supposed to support from freedesktop.org.
On Windows, Win32 doesn't do that by default because it would break backwards compatibility for those Windows 95 binaries being run on a Windows 10 2021 edition, so it must be a conscious decision to enable it.
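For illustration, this is roughly what that conscious opt-in looks like from code (the alternative is a dpiAware entry in the application manifest); a minimal P/Invoke sketch against the real user32 export:

```csharp
using System.Runtime.InteropServices;

// Without this call (or the equivalent manifest setting), Windows assumes 96 DPI
// and bitmap-stretches the whole window, which is why old apps look blurry on HiDPI screens.
internal static class DpiOptIn
{
    // Classic system-DPI-aware opt-in, available since Windows Vista.
    [DllImport("user32.dll")]
    private static extern bool SetProcessDPIAware();

    // Call before any windows are created; newer per-monitor modes exist but need more work.
    public static void Enable() => SetProcessDPIAware();
}
```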
> Note that the problem isn't really with X11 (or at least Xorg) since it does provide the necessary information via RandR to perform per-monitor scaling.
It is an issue with X11. If you have a WordPerfect binary from 1999, it still has to work; it does not matter that RandR provides the information, old clients will not ask for it, and without it they will be broken.
That's why Xwayland does upscaling by itself.
> i mean hidpi displays themselves are very rare - if you check any desktop resolution stats they often barely are a blip - and having both a hidpi display and a regular dpi one is even less common than that
Cause and effect. They are rare because using them sucks, making them even rarer. On systems where they do not suck, they are not that rare. This attitude makes the entire platform worse off.
> It is an issue with X11. If you have a WordPerfect binary from 1999, it still has to work; it does not matter that RandR provides the information, old clients will not ask for it, and without it they will be broken.
This is why i wrote "so that compositing window managers can do it themselves for applications that do not support it, which is also why this needs to go through the window manager". You do not need X11 support for that; functionality that exists can be used right now to do it - assuming the applications and WMs cooperate to specify the necessary attributes and events, of course.
> That's why Xwayland does upscaling by itself.
You can also do this with a compositor for applications that do not support scaling themselves (which would be the default state - again, this isn't any different from the 'supports close event' flag that has already existed for decades now). Without a compositor you're out of luck (for the most part)...
...However!
Keith Packard worked on server-side scaling some time ago [0] (not sure if it was merged into mainline) which would allow this to be done even without a compositor (well, kinda; technically the server does the compositing just for the scaled windows). So even without a regular desktop compositor this should still be implementable, assuming that functionality was (or will be) merged into mainline Xorg. But note that this is only for the case where one wants to run applications that do not support scaling, have hidpi scaling, and also not use a compositor. The rest can already be implemented with Xorg as-is.
Those patches were never merged, and I doubt they will be, because there seems to be almost no interest in supporting high DPI displays on X without a compositor. If there was interest in those, at the very least somebody would have to adapt them and figure out how to use them with XWayland. In my opinion, any patch that doesn't fix this for Xwayland/Xwin/Xquartz is probably not going to fly. If you want this, you actually do need some level of protocol support for this, and I want to address a common misconception:
> X11 provides the necessary information via RandR to perform per-monitor scaling
This is not true. If it were true, it would be significantly easier to get DPI scaling to work on X11 and XWayland already. If you're referring to the physical size measurements, there are numerous problems with those, and they should never be used to do DPI scaling. If you're developing for Wayland, you should just ignore those measurements and use the scale provided by the server. If you mean that applications can draw themselves at arbitrary scales, they could always do that, without even bothering with RandR, but that will break down when you try to handle the compositing cases, e.g. stretching a window across monitors and expecting the display/input coordinates to scale correctly. So regardless of what you're trying to get the apps to do, some more work needs to be done there.
Even if those problems were fixed, DPI scaling would still not work without more changes in the server. The major reason why it can't even work is because of input coordinate scaling -- the server needs to be able to scale coordinates based on a factor provided by the client, and RandR does not provide this, and it doesn't make sense to add it to RandR either. If you want to standardize how this is done, the best way would just be to copy the method used by Wayland back into the X server. Part of it might be merging those patches by Keith that you linked, part of it might be somebody finishing this MR: https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests...
> Those patches were never merged, and I doubt they will be, because there seems to be almost no interest in supporting high DPI displays on X without a compositor.
I don't know about that, though the only person i personally know with a HiDPI display uses X without a compositor (they just use a theme with everything being big).
> If there was interest in those, at the very least somebody would have to adapt them
These are only part of the puzzle; they are not really that useful by themselves without the rest of what i wrote, which would actually be the more involved problem. Having scaling without a compositor is more of a "cherry on top" than anything else.
> This is not true. If it were true, it would be significantly easier to get DPI scaling to work on X11 and XWayland already.
I think mixing DPI and scaling is the root of the problem here. Scaling is something independent from DPI settings, though DPI settings can be used for setting up the defaults.
However scaling is useful regardless of DPI.
> If you're referring to the physical size measurements, there are numerous problems with those, and they should never be used to do DPI scaling.
Why not? They can be used to calculate defaults. I've heard that by default the official X server misreports DPI, but many distributions apply patches to fix that - not sure why the X server does this but it is something to be fixed. Personally i never had issues with how the X server reported my monitors' DPI and i've used a bunch of different monitors over the decades.
Regardless this is something that can be fixed anyway - after all if Wayland can get the proper information, so can X11.
> If you mean that applications can draw themselves at arbitrary scales, they could always do that, without even bothering with RandR, but that will break down when you try to handle the compositing cases, e.g. stretching a window across monitors and expecting the display/input coordinates to scale correctly. So regardless of what you're trying to get the apps to do, some more work needs to be done there.
The idea is for RandR to be used by the window manager as an initial/default scaling setup for windows per monitor. The users should be able to alter that of course and even alter per-window (i mean toplevel window here) scaling (via their window manager).
> Even if those problems were fixed, DPI scaling would still not work without more changes in the server. The major reason why it can't even work is because of input coordinate scaling -- the server needs to be able to scale coordinates based on a factor provided by the client, and RandR does not provide this, and it doesn't make sense to add it to RandR either.
IIRC this was actually fixed some time ago to allow scaling via compositors. Outside compositors we're back to Keith's patches. But this is also an issue for applications that do not support scaling.
Basically what i mean is this, to summarize it:
* Applications either support scaling or they don't. If they do support scaling, they set some attribute on their toplevel windows to indicate that.
* The Window Manager sends events to toplevel windows that support scaling to set up their scaling whenever the scaling changes (the window is dragged to another monitor, the user requests a different scale level via an icon menu or shortcut or whatever). If a window doesn't support scaling (note that an application can have windows that both support and don't support scaling; not that it matters much, but it is a good idea to avoid associating application support with window support), it gets scaled by either the window manager (if it is a compositor), a dedicated compositor (some compositors can work with other window managers) or the server (with Keith's patches).
There could also be support for toolkits to do the scaling themselves if the window manager doesn't support scaling (e.g. window managers that support scaling can use an attribute in the root window to indicate that support), but that is more complex and may not be really necessary.
Note that the only case where the current server functionality isn't adequate is when an X application doesn't support scaling and a compositor is not running. Everything else should already be there, but it needs support from toolkits (to support the relevant events and of course scale their contents) and window managers (to implement scaling events). Perhaps RandR's DPI info isn't reliable (though that hasn't been my experience) but that can be fixed; the harder bits are getting toolkits and window managers to support that stuff.
I mean in terms of developer interest, I haven't seen anybody working towards getting those patches (or similar things) merged. It would be easiest to get this working in a rootless server (like XWayland) first and then go backwards from there. In the case of compositing on multiple monitors, scaling isn't independent of DPI settings. They are the same problem, because at some point you will get an app that is mismatched and you'll have to scale it along with its input coordinates.
The physical measurements could be used to inaccurately guess defaults; some Wayland servers might do that, but it's not preferred. The X server doesn't "misreport" DPI in the sense that it's purposefully sending a wrong value; there's a process it follows (I may not be remembering this exactly) where it tries to read from the EDID, which could contain wrong values. If the values are there but wrong, those will just get quietly passed through; if they are zero, it then tries to fill in measurements based on a default DPI. Random X clients with access to RandR can also change the measurements stored in the server for whatever reason -- basically there is no possible way for this value to always be correct if you just want a number that tells you what DPI to target.
You're right this could be fixed, just like every bug could technically be fixed given enough attention, but somebody needs to take the time to backport the Wayland method to X, and I don't have high hopes of that happening any time soon. The main thing you're missing in the server, as I said, is scaling of input coordinates. The compositor or the toolkit can't do this; it has to happen in the server, because those events are sent directly from the server to the client. It's probably a better approach to get that working, then put a prop on the root window, and then hack kwin/mutter/whatever to use that, but I don't think anyone has tried to implement it that way yet.
> i mean hidpi displays themselves are very rare - if you check any desktop resolution stats they often barely are a blip
Do you have a link/cite for this? I couldn’t easily find anything summarizing pixel density stats.
“Desktop resolution stats” is not the same thing. All the retina displays have at least a 2x pixel ratio, so the tables from a cursory Google search are clearly lumping 13-inch MacBook Pros into the 1280x800 bucket, for instance.
You got a reference to something that clearly is accounting for pixel ratio?
I can tell from a number of available reports that basically show no resolutions higher than 1920x1200, when we know there are enough retina displays out there that they should be listed.
As to the linked report: it’s hard to discount bias with a browser with 3% market share that, looking further, seems to be overwhelmingly installed on outdated Windows 8 machines.
What exactly is wrong with "scaling" on Linux? I use Qt apps on Plasma and everything just works. Nearly all my displays are HiDPI now.
If you use 2x scaling (or other integer scaling), everything is fine. If you need fractional scaling, things break down pretty quickly, at least on GNOME.
Once you use fractional scaling (e.g. 1.5), GTK applications will scale correctly, but are slightly slower and consume more battery. However, all XWayland applications are unusably blurry.
Unfortunately, fractional scaling is really needed on 14" 1080p laptops or on 4K screens around 27". With 1x scaling everything is tiny and with 2x scaling gigantic.
The only workaround that worked OK for me was using 1x scaling and using font scaling in GNOME. Many controls and icons are tiny that way, but at least text is readable and not blurry. Of course, this only works up to some extent and only when both screens need the same amount of scaling (since font scaling cannot be configured per screen).
On a non-HiDPI display (I use one that is 1680 by 1050 and one that is 1280 by 1024) fractional scaling works fine on my machine running Gnome 34 with apps that bypass XWayland and talk directly with Wayland (which is true of all the apps I use). (Google Chrome needs to be started with a couple of command-line switches to bypass XWayland.)
But your comment makes me hesitate to buy a HiDPI monitor!
I guess this is the issue with saying "Linux" supports or doesn't support things. My software stack of Xorg/Qt/Plasma works fine. Your stack of Wayland/GTK/GNOME doesn't. Should we hold it against "Linux"? Is that a fair comparison to MacOS, where everything uses Cocoa? For that matter, how do GTK apps look on MacOS?
They’ve had wonderful trackpads for even longer and PC laptop OEMs still have yet to catch up. No, I’m entirely uninterested in the touchscreen gimmick of the week.
Funnily enough, I did a bunch of projects that required pushing something like 16K output (a video wall, not a personal setup), and at that point Windows becomes the only option because it requires a high-end GPU and an array of outputs.
I feel like Apple's approach to hardware made their job much easier -- they just ensured that every retina Mac had a display with exactly 2x the pixels as its pre-retina equivalent, so they never had to deal with fractional scaling like Windows often does.
They deal with lots of fractional scaling - at first they may have done simple pixel doubling, but now the screens are often at non-integer ratios.
In fact you can speed it up slightly by making sure the display resolution is set to exactly “half” the actual resolution, so as to prevent MacOS from scaling 8x internally first.
The article clearly states that macOS always draws at 2x and just displays the result, individual pixels be damned. Anyone can confirm this simply by taking a screenshot and looking at its resolution (screenshots are always the size of the underlying framebuffer).
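To put example numbers (my own) on the parent's point about picking exactly "half" the native resolution:

```csharp
using System;

// The UI is rendered at 2x the chosen "looks like" size, then the result is
// scaled to the panel's native resolution if the two differ.
(int w, int h) looksLike = (1920, 1080);                        // scaled resolution picked in Displays settings
(int w, int h) backing   = (looksLike.w * 2, looksLike.h * 2);  // framebuffer: 3840x2160 (what a screenshot captures)
(int w, int h) panel     = (3840, 2160);                        // native panel resolution, as an example

if (backing == panel)
    Console.WriteLine("No extra scaling pass: the 'exactly half' case.");
else
    Console.WriteLine($"GPU downsamples {backing.w}x{backing.h} -> {panel.w}x{panel.h}.");
```

Picking, say, 2560x1440 on the same panel would render a 5120x2880 framebuffer and then downsample it, which is the extra work the parent comment is talking about.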
Windows has a particular knack for giving the user the impression it doesn't care about them at all. The things that irritate me now are the same things that irritated me last year, five years ago, and in a few cases twenty years ago. They make you go "Jesus is everybody asleep over there?"
For example, if you want a notification icon to always be shown, you have to open a menu, open another menu, scroll to find the one you want, open an option list, and finally hit "show icon and notifications". Why can't you just right click an icon and hit "show icon and notifications"? Because Windows doesn't give a shit about you.
All the ads, the file search that searches Bing and can’t be disabled, the telemetry that sends the websites you browse to Microsoft. There must be some product manager over there whose bonus is based on how much ad revenue they can get out of Windows, no matter the cost. Installing the Windows 11 preview is just depressing, like you say it really makes me feel like they don’t care and expect people just to take it.
For your specific notification icon issue, I thought you can just drag the icon from the pop up tray to the main system tray to keep it always visible?
The insanely deep nesting of little configuration windows is probably the number one thing that makes Windows feel old to me. The new settings app is an improvement, but sadly one that lives in isolation.
And often that's because MS decided not to replicate some capabilities of the old WinNT-vintage configuration GUI in the pretty new touch-friendly interface. Probably the classic example is the trek you have to make, through two or three dialogs of nice-looking but most often useless new GUI, to a cryptic "Change adapter options" button and on to where Control Panel\Network and Internet\Network Connections survives, still the vital dialog and still largely in its original late-'90s form. Oh, and then there's the decision that having separate default devices and default communication devices for audio is too nerdy, so hip young Windows 7 kids don't need to see anything about any default communication device in the shiny new sound GUI. Hilarity then ensues when the VR headset somehow automatically becomes the default-comms-device microphone and you mysteriously become barely audible on voice calls. Of course other big, important sections of the GUI (file permissions?!) seem to have been junked completely without any replacement.
Those settings windows are part of the API, and Windows has to provide them for legacy software in many cases.
What was absolutely insane was not replicating the functionality into the new control panel setup, leading to power users feeling dumb because they can’t find network settings under network settings.
Or the point that many of these configurations can ONLY be accessed via the GUI. That's one thing that makes Windows (and even MacOS to some degree) so awkward to use sometimes.
Almost every setting in MacOS can be accessed by the command line, and many things only there. Actually I don’t think I’ve found anything so far that can’t be set from the command line and I do a fair amount of that kind of tweaking.
Xcode license agreements? It took me literally more than an hour the first time I installed something in the terminal and it refused to finish, because I had never heard of Xcode or that you need to open it manually in order to confirm a license agreement. I was just installing the Ruby stack and it needed some compilers via Xcode, I guess?
That was on an MBP from 2015 or so. The error message was not helpful at all; I had to ask someone with experience or I would still be installing Ruby to this day.
> For example, if you want a notification icon to always be shown, you have to open a menu, open another menu, scroll to find the one you want, open an option list, and finally hit "show icon and notifications".
Or, you know, just drag and drop it into the always shown area.
Windows 11 goes a long way towards fixing this. There are still multiple ways to change things, but all of the touch-screen settings are gone and the new settings app is laid out logically.
"switched from my 2015 MBP to a Surface Book 2 at the end of 2019, and switched back to my same, six-year-old MBP just a few months ago. To keep things short, whilst WSL puts Windows miles ahead of where it once was, I find that the same old rough-edges still remain."
I had a similar feeling when given a new Windows laptop in 2020 to work with that was supposedly better spec'ed than my late-2013 MacBook 15". The problem has been Windows and my inability to find flow in its OS. I've been trying to document workflows & apps here for the past year to see if alternatives are emerging: https://docs.google.com/spreadsheets/d/148zTJUwfVv9xfDcpSoH3... . For me it's not so much the development environment but whether I can quickly deal with administrivia tasks (organizing files, copying things from one form to another, etc.). I'm surprised Windows 11 still lacks Miller columns: https://en.wikipedia.org/wiki/Miller_columns
My concern about the Apple platform going forward isn't so much about the technical, but the material - my 2013-era MacBook died recently, and I found the fact that I could remove the SSD and plug it into a similarly old Mac to still be a savior, despite having multiple backups. It's driven me to consider installing Linux on an Intel Windows tablet over buying an M1 MacBook - I'm getting leery of the lack of repairability, and I really want a tablet on a stand + detached keyboard/trackpad for the sake of ergonomics.
> The Big Sur interface changes, whilst I thought I'd never get used to them, have actually grown on me.
That’s the thing with some people and Apple; I generally moan whenever Apple changes their UI substantially (the change to the “flat” iOS comes to mind first), but soon I come to wonder how I ever liked anything else. :)
They also (intentionally or not) make many of the changes “reversible” with tweaks and other programs. This allows the people who have a visceral reaction to revert, and usually over time it softens, and then you find yourself not bothering with it.
The biggest change for me was the switch to zsh which ended up in me changing my default shell everywhere else.
> how the hacker 'tide' is supposedly turning back against Apple [...] To keep things short, whilst WSL puts Windows miles ahead of where it once was, I find that the same old rough-edges still remain.
This feels like a bit of a false dichotomy; WSL isn't the only alternative.
We are very far from being limited to Windows vs Mac for desktop today; it's easier than ever to use Linux or even a BSD as a daily driver on a PC laptop. Even the article in 2005 noted hackers switching to Intel boxes (not Mac Intel boxes) for FreeBSD and Linux at the time.
Well, if you are self-employed maybe. Or for your own personal home machines, sure. But most employers with IT departments wouldn't be too happy if you had a "nonstandard" OS on your work machine because they wouldn't know how to secure it. Heck, many of them don't even allow Macs.
I know what you mean, and that is probably still true for the majority, for the US at least, and in large corporations, but I think the world is changing.
Linux is the standard OS at my work. We use it on servers, so it makes sense to use it on the desktop too for maximum dev familiarity and compatibility, since it now works well there... it's even started to be used by some of the non-devs to reduce Windows/Mac update fatigue and general performance issues, even though they only need office apps... I'm at a small company, but I know of other large companies with big IT depts applying the "locked down OS image" approach to Linux for desktop too.
I find the concept of "windows" as the only safe OS image IT depts are willing to use, as fairly antiquated. Much like IE used to be the only browser you were allowed to use in many big old corps and gov agencies for "security" reasons.
Although this is an aside, the original context of the article was only "hackers'" machines, and whatever your interpretation of that is, it probably doesn't involve Fortune 500s.
We use it on servers, so it makes sense to use it on the desktop too for maximum dev familiarity and compatibility, since it now works well there...
I am always surprised by these things. I mean, even trying to use something as widely used (nowadays) as Zoom basically gives me two options (on a HiDPI screen):
- Use X11, but when I share a window and use drawing tools, whatever I draw is displaced by a seemingly constant offset. It's caused by scaling for HiDPI, because if I disable scaling, things work.
- Use Wayland, but I can only share the whole screen. Drawing works, but the overlay is incredibly flaky and often incorrectly registers clicks.
Besides that, on both Wayland and X11, doing a video call makes the fans blow at full speed and quickly drains the battery when the laptop is not hooked up (probably no hardware acceleration of video encoding/decoding). Oh, and I probably want to use wired headphones, because Bluetooth headsets regularly drop out.
Zoom is just a single example, but there are just so many paper cuts if you want to work outside a solo developer context.
(Windows has a lot of issues as well, but at least basic workflows generally work.)
Zoom does seem to be a piece of crap, but it lets you use the browser, which seems to be less bad on Linux. At my work we all use Google Meet and only Zoom for some external customers; the experience seems to be much worse with Zoom in general regardless of the OS. That said, it seems to work OK on Linux in the browser once you actually get into the meeting; getting into the meeting is the hardest and worst UX experience ever.
I haven't used HiDPI monitors on Linux yet; being able to use Linux easily is more important to me for dev, it's a known area of weakness, and I just don't need the extra pixels for dev, so it's easier to just avoid it. If you are doing graphics, Linux is probably not the best OS for the job tbh, and that's a shame, but it's just the way it is currently.
> Zoom is just a single example, but there are just so many paper cuts if you want to work outside a solo developer context.
I suppose it depends upon how much crap software the workplace forces upon their workers; I understand that it's often mandatory to use a bunch of proprietary software for various types of communication and management stuff... i.e. stuff that's not central to actual dev work. If you can get away with a browser, then it's not bad at all... and when there is an option for a native version, use the browser, because usually it's some horrific Electron thing that's even worse.
At my work we all use Google Meet and only Zoom for some external customers; the experience seems to be much worse with Zoom in general regardless of the OS
I strongly disagree. Zoom on macOS works very well (I currently use it for a remote course). My experiences have been far, far better than with Teams, Google Meet, Skype, Jitsi, etc., most of which I have had to use for work or privately. My wife is hoping that her employer will relent sometime and offer Zoom, because she prefers it so much over Google Meet, whatever Blackboard uses, etc.
It could be that your experience is colored by the Zoom web interface, which I only used once on Linux and it was terrible then.
I haven't used HiDPI monitors on Linux yet; being able to use Linux easily is more important to me for dev, it's a known area of weakness, and I just don't need the extra pixels for dev, so it's easier to just avoid it.
For me it's one of those things that can't be unseen. I happily used lo-DPI screens for decades, but I can't really stand them anymore since I got my first retina MacBook. Even though I am doing development 80% of the time, I want my fonts to be crisp.
If you can get away with a browser, then it's not bad at all... and when there is an option for a native version, use the browser, because usually it's some horrific Electron thing that's even worse.
I guess that's true on Linux. Unfortunately, very few web apps beat native applications like OmniGraffle, the Affinity Suite, or even PowerPoint.
OK, but all of your use cases and priorities are so far away from the original premise of the article that it's clear your needs are more in line with a regular Mac or Windows user than a "hacker" or dev... (just saying) it's not for everyone.
Also, which video conferencing app "is the worst" seems to be pretty subjective, but tbh they are all pretty crappy and replaceable; I'm not really sure what this has to do with OS choice.
> i.e. stuff that's not central to actual dev work.
That's true for developers that only care about the POSIX CLI and daemons; the rest of us don't, and do graphics and GUI programming, where the screens and GPGPUs play a big role.
Same here; 10.6 was my last MacOS. It felt like it lost direction and focus after that.
For a 2009 MBP, it was a nice experience trying out FreeBSD and various Linux distros before settling on Debian. The only stumbling block on that machine was the Mac EFI, for which there were eventually plenty of solutions - I hear the later machines are a bit of a nightmare for running anything other than MacOS, though.
> It turns out that SIP and Gatekeeper aren't nearly as much of a problem as I was led to believe; neither of these features has hampered me once.
They can also both be turned off! Gatekeeper in 30 seconds and SIP in less than five minutes. I legitimately don't understand why people get worked up about optional features.
I’ve made a serious effort to switch away - using a retina Mac Pro with Ubuntu, and a bunch of Raspberry Pis.
I love Linux, and I love the Pis, but I can’t see any reason to stick with the Ubuntu desktop. When a pro-level M1 or M2 iMac is released, I will be moving back.
> i can’t see any reason to stick with the Ubuntu desktop
Then don't. Linux has hundreds of alternative desktops that you can install and try out in minutes. KDE, i3, Awesome, bspwm, Openbox and many many more are all at your fingertips.
It’s all frankly half-baked and clunky, and the time investment in configuration is insane.
Yes, you can find individual features that are better than MacOS spread across them, but you can’t get good design, aesthetics and features in any single one of them.
You can certainly get by with it, but why would I want to just ‘get by’?
Oh yes so much this. 20 years here being told that today is the day of Linux on the desktop. Like hell.
Apple stuff is consistently good. It's not perfect, but there's not really anything that kicks you in the balls every day to the point it makes you want to burn all your technology and live in the woods. About 5 minutes with any recent Gnome release on a Linux laptop makes me want to do that.
Linux is always 50% done. That last 30% to get it to Apple's 80% done is boring so they just rewrite everything again or fork some new clothes for the emperor.
The worst experience is Windows. There are several layers, and each one gets to about 33% done and is replaced with a new one. All the old layers sit there like rings in a tree, occasionally having to be gouged out to fix an obscure issue.
My approach with Linux has been to not even try to make the Linux desktop try and compete with macOS or even Windows. Instead, I really like the "OS as IDE" approach, which has caused me to settle into using most of the Suckless stuff -- dwm, st, dmenu. Live in your terminal and keep each application's function and jurisdiction as small as possible, and Linux is really great. For programming. Only.
You'd probably be surprised to hear that I mostly agree with what you're saying. But here's the difference: MacOS will never feel like home to me. I've used it for months, and tried my hardest to make its workflow feel natural. Every time I try to rationalize a concern with the OS, though, I'm buried by workarounds/paid apps/subscription services that will supposedly fix the issue for me. Unfortunately, that makes the last 20% of MacOS feel so frustrating to fill: you're always taking one step forwards and two steps back, fighting against a company that wants to take control away from you with every update.
On Linux, I just have a script that I run that sets everything up the way I want it. I'll admit, it took a few hours to make (and has received a few updates over the years), but it feels much closer to that "100%" mark than a fully-customized Mac to me. Plus, I'd rather not rely on the whims of yet another "move fast and break things" company jockeying to take over more and more of my digital life. I'm good here.
It's a two-way street. Why would I want to "get by" without a package manager, or "get by" without 32-bit libraries, or "get by" without a functional graphics API? I'm the user: the choice should be mine. If Apple doesn't present that choice, I don't consider Apple an option. Simple as that. As a developer, these points are non-negotiable. Apple has built computers for 30 years now, they should be well aware of that.
> Why would I want to "get by" without a package manager,
You don’t have to on MacOS.
> or "get by" without 32-bit libraries
Why would you want 32 bit libraries? 32-bit support was also ditched in popular Linux distributions (Ubuntu, Fedora, Red Hat) in the same year as MacOS.
> or "get by" without a functional graphics API?
To claim there is no functional graphics API on MacOS is utterly absurd.
Years ago I started using Macs to have a stable Unix desktop that also played well with commercial software, while keeping Linux on the side as a hobby. Somewhere around 2010 or so I think I finally switched away from Macs to pure Linux, after my Mac that worked just fine was dropped from support for the latest version of OS X, and by that point Linux on the desktop seemed far more stable and I was working with it much more professionally anyway. Linux started getting really good as a desktop OS, and I think it peaked for me around Ubuntu 16.04; it got very much to a "just works" state that I had enjoyed with Macs, and Steam was on Linux along with a handful of games I cared about (mostly Civ 5). Then a weird thing started happening and new updates on the same hardware started having weird issues on my laptop, mostly around sleep and power saving. It was possible to fix most of the issues, but the brief "just works" periods seemed to be over. It was still mostly fine on my desktop, and even improved when Valve launched Proton.
Eventually I used my Linux desktop mostly, and later my Linux laptop went away in favor of a Chromebook, which supported their beta Linux environment. It wasn't perfect, but sleep/suspend worked and I could do most "Linux" things I needed to do. I also started using a MacBook Pro at work again after being off of OS X for 5 or 6 years, and it was just a reminder to me that a Unix-like desktop that didn't need as much fiddling around existed (of course some of you may point out that the fiddling is more restricted on OS X, which is a fair point as well), but the thermal issues of the later Intel MacBooks kept me from buying one for personal use.
So fast forward to last year. I was moving and some other stuff was in flux, and my already long-in-the-tooth desktop needed to be put away for a while; while my Chromebook was useful, I didn't like relying on it as a primary machine. Since I didn't play too many demanding games anymore, I thought this would be a good chance to kill two birds with one stone and get a laptop capable of playing the meager game library I had, which meant mostly better integrated graphics rather than discrete, and since it was before Intel Xe came out, a Ryzen 4000 series CPU seemed like the obvious choice, with a GPU that was roughly GTX 1050 Ti class, just enough for me. Sadly it appears that AMD pushed forward with some new kind of "hybrid sleep" or whatever they're calling it, and it wasn't supported by Linux very well at all (I'm not sure of the exact state at the time of writing this; there was some talk of support landing in 5.13, but I'm not sure if that happened or if it got pushed out to 5.14). So I did the Windows thing full time for the first time in a very long time with WSL, which I actually grew to like somewhat, but I still wasn't happy with the laptop.
By the end of 2020, MacBooks with the M1 were in the wild and the results seemed very good. And since I was still using a MacBook Pro for work during this time, I still had mostly positive feelings about macOS. It had its peculiarities, but I liked it better than Windows and it required less effort to keep running than Linux; I just didn't like the jet engine that my MacBook turned into whenever I tried to do anything remotely demanding. In fact I've always hated fans, and fan noise was one of the motivating factors in dumping my desktop. My "grail" machine has been a fast laptop with no fans or other moving parts. SSDs have been a thing for a while, but we've not had a really solid-performing fanless machine until the M1 MacBook Air (in the 21st century; I'm not counting the fast-for-their-time fanless machines of the 80s and early 90s). So I felt almost obligated to get that machine, and that's what I did, and I haven't regretted it for a second (though ask me again in 4 or 5 years if/when Apple starts dropping support for these first-gen ARM devices). It can silently keep up with any work I would do compared to my work 16" MacBook Pro, which runs much hotter and louder, and so far the M1 Air has been able to do what I've needed it to do.
Switching to Windows and expecting an improvement over macOS is silly. I switched to Linux from macOS in 2015 and never looked back. I can dual boot into Windows, but do so less than once a year these days.
This is why Linux exists. I was pretty disappointed by the compatibility/performance of the M1, so I switched back to my desktop with Manjaro Linux installed on it. Suffice it to say, I doubt I'll be using my MacBook unless Apple revives 32-bit libraries or switches back to x86 in some capacity.
I'm glad that your MacBook Pro didn't get bricked by the Big Sur update! I heard that there was some issue with the HDMI port firmware, so I haven't updated my 2014 MBP.
I have had major issues with trying to replace the battery. Beware of third-party replacements, which idle at 12V; the original battery idles at 2.2V and "wakes up". A voltage spike when waking from battery killed my logic board, and 3 professionals have failed to repair it so far. Since September 2020 I've spent over NZ$1900 on 5 replacement batteries trying to get something that works. I just want a laptop that I can use on the bus, with a removable SSD so I won't lose my data when it breaks, and iTunes so I can sync my iPod and iPhone.
Instead of looking for a removable SSD, consider setting up some kind of syncing and backup solution. A removable SSD doesn’t protect you from theft or ransomware. I use a combination of Time Machine, Backblaze and Syncthing to make sure all my data is synced across my machines and backed up locally and in the cloud.
A removable SSD does ensure that my laptop doesn't become trash after I use it long enough. For many people (myself included) no computer can truly feel "perfect" unless it can be run in relative perpetuity.
The Music app in Big Sur still supports syncing to old iPods; I used it with an old 30-pin pre-video iPod nano that I found in a thrift store. It could even properly reformat it from Windows FAT32 to Mac HFS format.
I must say, out of all the things to happen on a Thursday night, I wouldn't have expected to read an article where someone referenced my own code that I wrote when I was 15. (5 years ago now... time flies) Here's the file I presume he looked at: https://github.com/nvella/sdvr/blob/master/pk.c
I was trying to reverse-engineer my parents' network CCTV DVR so I could hopefully integrate it with Home Assistant - as far as I'm aware, the sluggish smartphone apps are still the only way to access those boxes. I wasn't ever able to get as far as George did with his IP cameras; I hit a snag trying to correctly reassemble the H.264 streams, so all I ever got out of it was mostly corrupt still frames.
Cheers :) Yeah, I was one of those kids that spent most of their free time programming or otherwise tinkering with computers. I started with simple systems scripting languages before moving to Ruby and getting a proper grip on basic OOP stuff; C then shortly followed.
I don't really have any use for C today, but it definitely taught me a lot. It was intimidating at first, but when the concept of pointers finally clicked I felt like I had a sense of control in the language, and things were relatively deterministic and predictable.
I did most of my projects in C for a few years, but eventually got sucked into the JS ecosystem like pretty much everyone else at the time. These days I mostly work in .NET - C# is constantly evolving, and with .NET Core it seems to strike a good balance between powerful tooling, cross-platform support, performance, and just fun language features. I'm pretty content for now :)
Amen! It's really touching when someone finds a bit of code that you thought would never be interesting and uses it! I've actually started a few friendships by asking questions, reporting bugs, writing random thank you notes on little code things. It's special to be reminded that you're part of the general network of humanity doing neat things, even if many things go unnoticed. :)