
That also comes from upstream llama.cpp: https://github.com/ggml-org/llama.cpp/discussions/4345

Props to the author for putting in what looks like a ton of work trying to navigate this issue; it's a shame they have to go to these lengths to even have their case considered.

I went to hell and back trying to get PIP/PBP monitors on my 57" G9 ultrawide to work with my M2 Pro. I ended up having to use a powered HDMI dongle, a DisplayLink cable, and DisplayPort, with 3 virtual monitors via BetterDisplay. Enabling the BetterDisplay setting that allows resolutions outside of macOS's limitations is what did the trick. I don't envy OP. Having 5120x1440 at LoDPI was the worst: just ever so slightly too fuzzy, but the perfect UI size. Eventually I got a steady 10240x2880 @ 120Hz with HDR. I literally laughed out loud when I read the title of the thread. Poor guy.

You may be able to get this working using PBP and 2 cables without virtual displays. This is my write-up on using HiDPI @ 120Hz for two 57" G9s on my M2 MacBook: https://www.reddit.com/r/ultrawidemasterrace/s/VrBLFDxYzg

Ah, but you see, the challenge is to get a 3-split PBP on an M2 Pro on a monitor with a native res of 7680x2160, each split scaled down 33%, working at 120Hz with HDR, all HiDPI, like so:

  ┌─┐┌────┐┌─┐
  │ ││    ││ │
  └─┘└────┘└─┘
It creates some wonky math and requires plenty of dock and cable shenanigans, plus unlocking resolutions above 8K via BD. It's the third "monitor" where it gets tricky with the M2 Pro, especially at these resolutions.

Fascinating. What does that gain you over using the monitor's native resolution full screen instead of PBP mode?

I hate spending any unnecessary clicks or keyboard shortcuts on getting what's in my head into the computer. I primarily used yabai before; now I'm using AeroSpace. Since the monitor is super ultra-wide (57 inches with a very high DPI), the native resolution makes everything ultra small to my eyes. It's the same height as my 34-inch Samsung G5s, which are natively 1440 pixels tall, but since this one is 2160, it would have to be 1.5 times larger physically to look decent at native res, especially on macOS. The only other option is to scale the UI 1.5x, which is where all the problems begin.

I like the three-column separate-monitor layout because I have hotkeys, primarily driven by my mouse but also usable keyboard-only, that let me easily switch between monitors with `⌘+``, which moves my cursor between them. I can select whichever monitor I want and send my mouse to it, and I can switch to any workspace on any monitor quickly. I also have hotkeys that sync workspace numbers across monitors, so one switch changes the AeroSpace workspace on all three monitors simultaneously. If I have five projects going, I'd have the terminal on the left, and Linear and other communication tools on the right in accordion mode with AeroSpace, and I can use my mouse or keyboard exclusively to find exactly what I'm looking for almost as fast as I think of it. I spend zero time on window management or organization now, so it's thoughtless to use.
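
For anyone curious how the workspace-sync part can work mechanically, here's a minimal sketch, assuming workspaces are pinned per monitor via AeroSpace's workspace-to-monitor assignment and a hotkey passes the "page" number to a script (the numbering scheme is my invention, not necessarily the setup described above; check `aerospace --help` for exact subcommands):

  #!/bin/sh
  # Hedged sketch: switch all three monitors to "page" $1 at once, assuming
  # workspaces 1-9 are pinned to the left monitor, 11-19 to the center,
  # and 21-29 to the right.
  n="$1"
  aerospace workspace "$((n + 20))"   # right monitor
  aerospace workspace "$((n + 10))"   # center monitor
  aerospace workspace "$n"            # left monitor ends up focused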

If I'm just using the monitor's native resolution, there's no real way to do portals: having two apps open as sticky while switching only a portion of the monitor space to a different app and keeping the others in place. There are hacks you can do with AeroSpace, especially since AeroSpace doesn't use native macOS Spaces, but the three-monitor layout is a much more robust approach in my opinion, just a bit of a nightmare to set up. There are a million little Mac annoyances you have to fix.


...And then there is the near-infinite trickle-down of apps that rely on apps that rely on arcane configs, and so on. This is truly the OS from hell. At least with Windows you know it's going to be garbage, so when anything works on any level you are maximally impressed. But I have to spend my weekends isolating the window-shadow-disabling functionality from yabai into its own binary, because I switched to AeroSpace, which requires 'displays have separate spaces' to be off, which just so happens to be exactly what yabai requires to be on to remove window shadows, the only use I have left for it.
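
For reference, the shadow piece itself is a single yabai call; a minimal sketch, assuming yabai's scripting addition is loaded (which in turn requires partially disabling SIP):

  # Disable window shadows globally. This is the one remaining yabai
  # feature described above; it needs the scripting addition.
  yabai -m config window_shadow off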

Just like the Excel World Championship, I would find a macOS ricing/window-tiling competition equally enthralling. You read articles like the OP's and at some point all you can do is laugh, because lord (Cook) knows you've cried.


BetterDisplay is a lifesaver

Thanks. It was a good portion of my weekend, bashing my head against the keyboard trying to figure out what was going on and whether there was a workaround I could use (there isn't one that I've found).

The post reminded me of how I investigated a similar issue while having no idea where to start. Using Claude or GPT to investigate this kind of hardware issue is fast and easy: it gives you the next command to try, and then the next one, and you end up with a similar summary. I wouldn't be surprised if the author didn't know anything about displays before this.

The jump from $10 to $20* a year (*correction) is fine? I'm happy to pay for a quality service I use daily, and price increases are inevitable. Zero issues on my end.


I think you meant $20 a year. Every known cloud service has increased its prices, so personally I saw this coming.

https://bitwarden.com/pricing/


$20/year. And the family plan is $48/year (max 6 users), meaning $8/user/year at full occupancy. If you find even two people to share the family account with, that's $16/user/year, already cheaper than a single account.


I don't doubt it, but what were they all doing? The Metaverse had 10k employees on it for multiple years and seemed to be almost at a standstill for long periods of time. What do these massive teams do all day?


Have meetings to figure out how to interact with the other 9,990 employees. Then try to make the skeleton app, left behind by a team of transient engineers who left after 18 months for their next gig, actually work, before throwing it out and starting again from scratch.


Exactly. What Meta accomplished could have been done by a team of less than 40 mediocre engineers. It’s really just not even worth analyzing the failure. I am in complete awe when I think about how bad the execution of this whole thing was. It doesn’t even feel real.


Actually, I would like to see a post-mortem that showed where all the money actually went; they somehow spent ~85x what RSI has raised for Star Citizen, and what they had to show for it was worse than some student projects I've seen.

Were they just piling up cash in the parking lot to set it on fire?


At least part of the funding went to research on hard science related to VR, such as tracking, lenses, computer vision, 3D mapping, etc. And it paid off: IMO Meta has the best hardware and software foundation for delivering VR, and projects like Hyperscape (off-the-shelf, high-fidelity 3D mapping) are stunning.

Whether it was worth it is another question, but I would not be surprised if it's recycled to power a futuristic AI interface or something similar at some point.


Even within the XR industry, we had no clue where all that money went. During the metaverse debacle, the entire industry stagnated. Once the metaverse failed, XR-adjacent shops started to fail. There was no hardware or technique innovation shared with the rest of the industry, and at the time the technology was pretty well settled.

Since then we've lost all the medium players, and it's basically just Facebook, Valve, and Apple.


The sad part is that the tech is mated to a completely rotten ecosystem. If it were sold off, I'd be excited to try it.


Big company syndrome has existed for a long time. It’s almost impossible to innovate or move fast with 8 levels of management and bloated codebases. That’s why startups exist.


Everyone is missing the why here: this only happens because the whole stack is vertically integrated. Even if, say, LG wanted to make a box like this and update it for 10 years, they couldn’t; they don’t make the chips. Qualcomm straight up refuses to support chips through this many Android releases. Even if device manufacturers want to support devices forever it won’t matter if the actual SoC platform drops support.


While the vertical integration is definitely the best way to get it done, it's not strictly required as long as there is good enough documentation for a platform. Linux originally supported Intel without any Intel engineers even knowing it existed.

Also consider Apple's chips, which have gotten Linux support without Apple ever submitting a single line of code.

While Qualcomm's behaviour is definitely a massive bummer (not to mention Qualcomm's competitors), it doesn't stop manufacturers from supporting their devices. It merely stops maintaining support from being cheap and easy.


Not only that, "vertical integration" is a red herring. If you had a "vertically integrated" device made entirely by Qualcomm and they stopped supporting it after 3 years then the vertical integration buys you nothing. The actual problem is that Qualcomm sucks.


> Linux originally supported Intel without any Intel engineers even knowing it existed.

It should be noted that Intel makes CPUs, while Qualcomm makes SoCs, which include much more than just a CPU. Usually supporting the CPU is the easiest part, the rest is the issue.

That said, when device OEMs release the kernel sources, modders are able to keep custom ROMs updated for a long time, so I doubt this is just a Qualcomm issue.


> It should be noted that Intel makes CPUs, while Qualcomm makes SoCs, which include much more than just a CPU. Usually supporting the CPU is the easiest part, the rest is the issue.

Here's a random 15-year-old Intel PC (you can also do this on many current ones):

  $ lspci | grep -v Intel
  [no output]
Every piece of silicon in it is made by Intel, and most of it, including the GPU, is integrated into the CPU. And it's all supported by current Linux kernels. The same is true for many AMD systems, except that you'll usually see a third-party network or storage controller, which is itself still supported.
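
If you want to check the "supported" part as well, a quick sketch using a standard lspci flag (output shape varies by distro):

  # Show which kernel driver is bound to each PCI device; anything without
  # a "Kernel driver in use:" line has no driver bound.
  $ lspci -k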

So no, it's a Qualcomm problem.


They update the ROMs while keeping everything provided by Qualcomm the same, so basically the kernel is frozen even if the Android version is updated.


The kernel is usually frozen, but sometimes projects like postmarketOS can use those changes as a basis for upstreaming them and adding general Linux support.

Anyone can make a diff between the upstream kernel and the Qualcomm kernel. Porting those changes forward to later versions of the kernel will be quite challenging, but the base is already there.
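
A minimal sketch of that workflow, assuming the vendor publishes a git tree and you know which upstream release it was forked from (the vendor URL and the v5.4 tag below are placeholders, not real endpoints):

  # Fetch upstream and the vendor fork side by side, then extract the delta.
  git clone https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
  cd linux
  git remote add vendor https://example.com/vendor-kernel.git
  git fetch vendor
  git diff v5.4..vendor/main > vendor-delta.patch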

That said, phones also come with plenty of binary drivers and those cannot be ported. That's an important reason not to bother with later kernel versions in custom ROMs: after all of your hard work, the end result will be missing important features such as GPU acceleration.


What do you think are the reasons PCs don't need a "Dell XPS 13 9350 Windows" and a "Lenovo ThinkPad T14s Gen 6 Linux" and so on, but phones need a "Galaxy S26 Linux", a "Xiaomi 16 Linux", and so on?


Because ARM lacks some of the device auto-discovery features that amd64 provides for free, unless you're lucky and use a device with ACPI+DSDTs on ARM. You need a special build for the hardware, but you don't need to alter the source code.

Custom kernels also exist for amd64 devices, often including workarounds and patches that are not in mainline to improve performance or compatibility.

As a vendor, that requires practically zero extra effort.

https://wiki.postmarketos.org/wiki/Devices has a list of devices that run either mainline or almost-mainline Linux. Only the "downstream" devices require vendor Linux kernels. Of course, hardware support is partial for most of these devices, because vendors haven't contributed proper upstreamable drivers and volunteers haven't had the time to write them yet, but it's not like every ARM device needs a special kernel fork; that's just something ARM vendors do out of laziness.
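
To make the "special build, same source" point concrete: on ARM the device-specific part is typically just a devicetree blob compiled alongside one shared kernel image. A minimal sketch, assuming an aarch64 cross-toolchain is installed:

  # One source tree, one generic config, many per-device DTBs.
  make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig
  make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -j"$(nproc)" Image dtbs
  # The per-device step is then just pairing the Image with the right .dtb
  # from arch/arm64/boot/dts/ for the hardware at hand.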


> Even if device manufacturers want to support devices forever it won’t matter if the actual SoC platform drops support.

Yeah, so that's not a why, that's a how (and it's neither necessary nor sufficient anymore; see the Samsung and Pixel reference).

The why seems very much what the article covers.


Yet Microsoft figured this out decades ago.

I (well, my mom) had a version of Windows 7 that was supported with security updates on my 2007 Mac Mini (not a typo) until 2023.


That was from when Macs ran Intel and could easily dual-boot Windows. I still have an old MacBook Pro with Windows 10 on it. Updates only stopped recently because Win10 is at end of life. I've been meaning to blow everything out and install Linux.


I am giving props to Microsoft because it did wrangle an industry together to standardize: one company makes the operating system, other companies make the hardware, yet you can still upgrade your operating system even without the support of the vendor.

Yet Google can’t seem to make that happen.


Because Google doesn't deliver the full OS; it delivers a bunch of stuff that the vendors then bastardize and use with insane drivers and crap from SoC vendors.

They should NEVER accept any of the binary-only crap drivers; they should demand the code be upstream or refuse to buy. But they don't care. Google doesn't care.


> Qualcomm straight up refuses to support chips through this many Android releases.

That's not entirely accurate. They do provide chips with extended support, such as the QCM6490 in the Fairphone 5. These are not popular, because most of the market demands high performance and companies profit from churning out new products every year, but solutions exist for consumers who value stability and reliability over chasing trends and specs.


If you read the article, the actual "why" is that the CEO personally requested it and gave it an effectively unlimited budget.


No need to be rude. The person above is adding a new insight to the conversation.

Vertical integration makes it possible but motivation makes it happen. Where is Samsung's ultra LTS Exynos device?


I think it's more a combination of vertical integration and Nvidia upper management actually wanting to provide support for so long. Apple, Google, and Samsung all make smartphones with their own chips, and yet none of them support running the newest OS on 10+ year old devices.


I have to wonder if the Nintendo Switch picking up the Tegra X1 SoC has something to do with it. There's a good chance a lot of components of the (custom microkernel) operating system are derived from Android, and with the Switch receiving active support for so long, I wouldn't be surprised if the work on the Shield TV and the Switch is related.

With the Switch having shipped for nearly 10 years, the shelf life of most any processor that Apple, Google, Samsung, Qualcomm, or MediaTek (?) pushes out pales in comparison.

Though Apple in particular is interesting, as their Apple TV lineup also has the same long legs, with the Apple TV HD/4th Gen releasing in 2015 and receiving the latest OS.


Qualcomm's industrial ARM SoCs are supported for nearly 10 years: the Qualcomm QCM6490 in the Fairphone 5 gets 8 years of security updates.


It's called a legally binding contract; businesses use them all the time to enforce support.


Contracts can be broken and resolved with money. Happens all the time.


Yes, and lawsuits do exist as well.

Point being, the blame does not lie only on Qualcomm, as Google advocates tend to suggest.


I've only used it when I'm in a pinch, but it's handy. Blowing up mobile apps to a larger screen and multitasking certainly isn't ideal, but I've been able to handle "email job" type activities while out of pocket. That said, I've never heard of anyone else who's actually used it.


The RK3568 is an interesting choice. Why not the H700 or something with a good amount of mainline kernel support already?


I don't know about the H700, but some of those Allwinner chips used to be super cheap around COVID. I checked and couldn't find prices. Does anyone know where the price is now?


The RK3568 doesn't have good mainline support??


There's a scrapyard right by my hometown with a fancy billboard, like the ones for the lottery that have the number displays. It's just for showing copper prices: bright copper, copper #1, and copper #2. There's so much money in it that they can afford to advertise now.


The price of copper is not extraordinarily high.

https://www.gurufocus.com/economic_indicators/4553/inflation...


I couldn't immediately see the price not adjusted for inflation. If copper has held its value better than other proceeds of crime, that could still make it more attractive.


It's incredible how bad driver support is in the ARM space. I was looking into some of the various Anbernic handhelds and their Linux firmware. Despite their SoCs being advertised as having Vulkan 1.1 support, every firmware for the devices ships with it disabled.


So many chipmakers and development-board manufacturers see software/driver support as some kind of necessary evil: a chore that they grudgingly do because they have to, doing the absolute minimum amount of work with barely enough quality to sell their hardware.


It bewilders me. Software's gotta be easier than hardware, right? Not that either is easy, but as a software engineer, the engineering that goes into modern hardware mystifies me.


It's different definitions of "easy."

With hardware, you have about one billion validation tests and QA processes, because when you're done, you're done and it had better work. Fixing an "issue" is very, very expensive, and you want to get rid of them. However, this also makes the process more of, to stereotype, an "engineer's engineering" practice. It's very rules-based, and if everything follows the rules and passes the tests, it's done. It doesn't matter how "hacky" or "badly architected" or "nasty" the input product is; when it works, it works. And when it's done, it's done.

On the other hand, software is highly human-oriented and subjective, and it's a continuous process. With Linux working the way it does, with an intentionally hostile kernel interface, driver software is even more so. With Linux drivers you basically choose to either get them upstreamed (a massive undertaking in personality management, but Valve's choice here), deal with maintaining them in perpetuity at enormous cost as every release will break them (not common), or give up, release a point-in-time snapshot, and ride into the sunset (which is what most people do). I don't really think this is easier than hardware; it's just a different thing.


From the outside looking in, it really seems like both fields are working around each other in weird ways, somewhat enforced by backwards compatibility and historical path dependence.

The transition from more homogeneous architectures to the very heterogeneous and distributed architectures of today has never really been all that well accounted for, just lots of abstractions that have been papered over and that work for the most part. Power management is the most common place these mismatches seem to surface.

I do wonder if it will ever be economical to "fix" some of these lower level issues or if we are stuck on this path dependent trajectory like the recurrent laryngeal nerve in our bodies.


> intentionally hostile kernel interface

If open-sourcing your entire kernel is being "hostile", I don't think that there is or ever was a "friendly" OS.


I think what they were referencing with that is that the kernel's driver interface is unstable; it changes with literally every version, which is why you want to upstream your driver so you don't have to keep it up to date yourself after that.


I've done both. There are difficulties with both but overall I would say software is significantly more difficult than hardware.

Most hardware is actually relatively simple (though hardware engineers do their best to turn it into an incomprehensible mess). Software can get pretty much arbitrarily complex.

In a way I suspect it's because hardware engineers are mostly old fogies stuck in the 80s using 80s technologies like Verilog. They haven't evolved the tools that software developers have that enable them to write extremely complicated programs.

I have hope for Veryl though.


Wow, super hard disagree. The comment here sounds like the typical arrogance hardware engineers face from people in software who've never really done the job or have only superficial experience.

I won't blindly state "software is easier" but software is definitely easier to modify, iterate and fix, which is why software tools and resulting applications can evolve so fast.

I have done both HW & SW, routinely do so, and switch between deep hardware jobs and deep software ones, so I'm qualified to speak.

If you're blinking a light or doing something with Bluetooth you can buy microcontrollers that have this capability and yes that hardware is simple.

But have you ever DESIGNED a microcontroller, let alone a modern processor or complex system?

Getting something "simple" like a microcontroller to reliably start up involves complex power sequencing, making sure an oscillator works, and a phase-locked loop that behaves correctly, and that's just "making a clock signal run at a frequency"; we're not talking about implementing PCIe Gen5 or RDMA over 100Gbps Ethernet.

Hardware engineers definitely welcome better tools, but the cost of using an unproven tool, or a tool that might have "a few" corner cases resulting in your $5-million SoC not working, is a hard risk to tolerate, so sadly (and to our pain) we end up using proven but arcane infrastructure.

Software in contrast can evolve faster because you can "fix it in software". New tools can be readily tested, iterated on and deployed.


> But have you ever DESIGNED a microcontroller

Yes... But in fairness I was just talking about the digital RTL, not the messy analogue stuff (PLLs, power/reset, etc.); I've never done that.

> but software is definitely easier to modify, iterate and fix,

Definitely true.

> which is why software tools and resulting applications can evolve so fast.

Not sure I agree here though. It seems to me that EDA tools evolve super slowly because a) hardware engineers are timid old fogies who never want to learn anything new, and b) the big three have a monopoly on tooling.


What do you think about Atopile? I'm not a hardware person yet, but these seem similar.

https://atopile.io/


PCB and RTL are completely separate disciplines.


Software can always ship a new update for bugs or features.

Hardware, not so much.


In my experience, hardware companies all believe that software is trivial nonsense they don't need to spend any effort on. Consequently, the software that drives their hardware really sucks.


Software is easier than hardware in general, but companies generally pay their hardware guys 25-50% less than their software counterparts.


People repeat this line a lot but I don’t think it’s true. Companies like Intel, AMD, Arm, Broadcom, etc. afaik all pay their software folks of equivalent YoE or level roughly the same as their hardware folks. To the extent there’s any difference, it’s much less than 25%.

OTOH, there’s a small slice of (mainly) software companies like Google and Meta, along with Unicorn private companies, that skew the average software engineer salary high. Then there’s a long tail of “old school” hardware companies like TI, Motorola, Atmel, Microchip, and tons of smaller less well known companies that all pay much lower than Google. But they pay their software people poorly as well.

So if you just look at “average software engineer salary” vs “average hardware engineer salary” it appears that SW people are making 50% more than HW people, but it’s not at the same companies.


> Companies like Intel, AMD, Arm, Broadcom, etc. afaik all pay their software folks of equivalent YoE or level roughly the same as their hardware folks.

This is a fairly new phenomenon and it's mostly a consequence of the AI hype wave driving investment in hardware. Wages have mostly caught up at the big boy hardware companies but you'll still generally see a disparity outside that big group.


Come to think of it, for them it is basically customer support.

Most will want to outsource it as cheaply as possible and/or push it to the community. They won't care if it takes an eternity for the customer to get their issues solved, as long as new customers keep buying.

And a few companies will see an opportunity to bring better customer care as an advantage and/or integrate it in their philosophy.


And it's the reason why, for several years, I didn't consider buying anything that had an AMD card (not anymore, but for many, many years it was insanity).


Are you talking about the FGLRX drivers on Linux desktops?

Or their Windows driver quality back then?

I remember them both being pretty brutal.


The linux desktop was my reason.


But doesn't open-sourcing it kinda make it someone else's chore?

Obviously it has to "work" at sale, but ongoing maintenance could be shared with the community.


I would recommend the Anbernic RG353M running ROCKNIX, or for a more powerful device, Retroid's Pocket 5 running ROCKNIX. Most other options have awful software support and are just e-waste, unfortunately.


They're stuck in the business model of making semi-custom SoCs for enormous corporations and releasing/developing drivers for them in extreme NDA environments.

It's fine (or arguably not) for locked down corporate devices.

Not so fine for building computers people want to use and own themselves.


At what point will the massive investments into AI show a respectable return? With the literal trillion dollars OpenAI is constantly trying to raise, what kind of revenue would make that kind of investment make sense? Even if you're incredibly bullish, I don't know how you make that math work anymore.


I think it’s hard for individuals to think at the scale of very large institutional investors. They have lakes of money [1][2] that they have to invest in a balanced way, including investing a small percentage into “it probably won’t work but if it does we’ll make a fortune”-type bets. Given the size of these funds even a small percentage is a very large number.

There are also a finite number of opportunities to invest in, so companies that have “buzz” can create a bidding war among potential investors that drives up valuations.

So that's one possible reason, but in the end we can't know why another investor invests the way they do. We assume that the investor is making a rational decision based on their portfolio. It's fun to speculate about, though, which is why there's so much press attention.

[1] https://en.wikipedia.org/wiki/List_of_largest_pension_scheme...

[2] https://en.wikipedia.org/wiki/List_of_sovereign_wealth_funds...


The problem is we now have municipal and state governments taking on infrastructure investments (usually via subsidies) and energy companies racing to meet the load demands. There are all kinds of institutions, both private and public, dumping obscene amounts of money into this speculative investment that can't be a winner for everybody.

What happens to the ones that built for projects that end up failing? Seems to me the only way the story ends is with taxpayers on the hook once again.


Yes, many of us are investing in this (even if indirectly) and may not realize it! The same rules still apply for municipal and state treasuries, though: only a small percentage of the overall portfolio should be allocated to high-risk investments.

Power generation and power grids are generally more useful today and less speculative than trying to win the AI race, so the risk for those types of things is somewhat lower, but there IS risk even in those.


I think the concern is something like the huge Entergy investment going on in Louisiana. Facebook basically cut a deal with them to build out all kinds of electrical load just for them. We also saw how committed Facebook was to the metaverse - they basically spent the GDP of a small nation, nothing came of it, fired a bunch of people, and moved on.

Entergy is not just going to sit around and take the L if the project doesn’t ultimately turn out to be a good long-term investment. They’re simply going to pass the cost on to their customers in the region (more so than they already plan to in the event of success). Meanwhile Louisiana taxpayers are footing the bill for all the subsidies going through these projects.

So yeah, I agree it's not quite as high-risk, because at least there's some infrastructure investment, but that's not the kind of investment that's really needed in the region right now, and having that extra capacity is unfortunately not a good thing.

To be clear I’m not really disagreeing with you. I’m just kind of bickering over the nuances lol


I think the market & political/economic actors as a whole are justifying these investments on the basis that the benefit is distributed across the labour market generally.

That is, it doesn't matter so much if OpenAI and individual investors get fleeced, if there's a 20-50% labour cost reduction generally for capitalism as a whole (especially cost reduction in our own tech professions, which have been very well paid for a generation): institutional investors and political actors will benefit regardless, by increasing the productivity, or rate of exploitation, of intellectual/information workers.


Why Fears of a Trillion-Dollar AI Bubble Are Growing - https://www.bloomberg.com/news/articles/2025-10-04/why-ai-bu...

What Would an AI Crash Look Like? - https://www.bloomberg.com/news/newsletters/2025-10-12/what-h...


The bull thesis is you get human-level intelligence, and then it can do jobs like we do.


As others have said before me: "the hype IS the product".


That's just a roundabout way of saying they don't expect their money back; they just hope to sell before the bubble bursts.


Modern American investing 101.


Which is the same thing as a scam.


It's gonna be very fun to watch SA being tried for fraud and deceiving investors about the "future profits" of his startup.


Au contraire, he and associated people (Musk, among others) will be, or already are being, received as heroes in their class for helping to seemingly break the bargaining power of software engineers and middle/upper-middle-class information workers and creatives generally.


The folks who would have to press charges are the folks who would be far too embarrassed to admit how transparent the fraud they fell for was, and who would nuke any remaining asset value by doing so.

It’s why Musk is also safe from similar problems.


Akshually, watch the hands. He never talks about profits at all, only about "disruptions". No refunds.


What fraud?

You should view your contributions as a donation. What donation has an ROI?

