Engineering or marketing? I doubt Zuckerberg or Altman have had much involvement in engineering since their products took off. Past a certain point they were no longer the engineers of their products.
I think Charlie Munger or Warren Buffett once said of reading on an iPad that it would be terrible to read on a device where it's so easy to get distracted, with the internet at your fingertips.
E-readers work for a reason: you aren't distracted (the slow built-in browser is hardly a distraction).
If hospitals are so concerned about cutting costs, getting sued is probably worse. However, they're all insured against malpractice. I'd be wary of insurers who could default if they face too many malpractice claims.
Isn't it also in the insurer's best interest that the hospitals do good work? They'd be another force against hospitals using AI to diagnose or misdiagnose people.
Of course, given that these are legal cases, it would take years for any consequences to turn into action.
To be frank, I'm more concerned about non-litigious countries here, since the potential downsides of rolling out "AI radiologists" are much lower there. Some of those countries have multi-month or even year-long waitlists for specialist consultations, so it might be even more tempting at the healthcare-management level.
For folks with long wait times, maybe the advantage of "immediate access to AI radiologist" beats out "wait for human radiologist"? Would be interesting to weigh those harms against each other.
> For folks with long wait times, maybe the advantage of "immediate access to AI radiologist" beats out "wait for human radiologist"? Would be interesting to weigh those harms against each other.
The harm of getting surgery to remove tissue due to a false positive seems like a pretty big one.
It's an interesting one. From what some ex-colleagues tell me, waits in the UK can be up to 5 years for a consultation, not to mention the actual procedure itself. When asked whether they would rather use AI for an initial screening, almost all of those colleagues immediately said yes.
These are the ideal work/coding sizes and resolutions for macOS that I would suggest, if you are going down this rabbit hole:
24 inch 1080p
24 inch 4k (2x scaling)
27 inch 1440p
27 inch 5k (2x scaling)
32 inch 6k (2x scaling)
Other sizes are going to either look bizarre or you’ll have to deal with fractional scaling.
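You can see the pattern by computing pixel density: every pairing above lands either near ~92-110 PPI (comfortable at 1x) or near ~184-218 PPI (comfortable at 2x), while 27 inch 4k lands at ~163 PPI, right in the awkward middle. A quick arithmetic sketch:

    # Rough sketch: pixels per inch (PPI) for the suggested pairings.
    # macOS looks best near ~110 PPI at 1x or roughly double that at 2x;
    # the values in between are where fractional scaling kicks in.
    import math

    def ppi(w, h, diagonal_inches):
        return math.hypot(w, h) / diagonal_inches

    for name, w, h, d in [
        ("24in 1080p", 1920, 1080, 24),   # ~92 PPI  (1x)
        ("27in 1440p", 2560, 1440, 27),   # ~109 PPI (1x)
        ("24in 4k",    3840, 2160, 24),   # ~184 PPI (2x)
        ("27in 5k",    5120, 2880, 27),   # ~218 PPI (2x)
        ("32in 6k",    6016, 3384, 32),   # ~216 PPI (2x)
        ("27in 4k",    3840, 2160, 27),   # ~163 PPI -- the awkward middle
    ]:
        print(f"{name}: {ppi(w, h, d):.0f} PPI")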
Given that 4k is common at 27/32 inches and those are cheap displays, these kinds of problems are expected. In the past I refused to accept that 27 inch 4k is as bad as people say and got one myself, only to regret buying it. Get the correct size and scaling and your life will be peaceful.
I would recommend the same for Linux and Windows too, tbh, but people who game might be fine with other sizes and resolutions.
If you actually care about this stuff, you are going to run something like https://github.com/waydabber/BetterDisplay, which easily allows HiDPI at 4K resolution; it does not "look bizarre" or "require fractional scaling". This is what the OP is about. I do the same thing: I run native res with HiDPI on a 27" 4K screen as my only monitor, and it works great.
Sure, and that is the real tragedy here. The person I'm replying to is just pointing out that native support for high res sucks, which is true, but the real problem is what limits there are on 3rd party support.
32" 4k display at fraction scaling of 1.5 (150%) is fine for my day-to-day work (Excel, VS Code, Word, Web browsing, Teams etc.). It delivers sharp enough text at an effective resolution of 2560x1440 px. There are many 32" 4k displays that are affordable and good enough for office workers. I work in a brightly lit room, so I find that monitor brightness (over 350 nits) is the most important monitor feature for me, over text sharpness, color accuracy, or refresh rate.
I have dual 27" monitors, both at work and at home. At work, they're 4K monitors, because that's all they have in this size for some reason (LG if it makes a difference). At home, my own monitors are ASUS ProArt 1440p monitors. I run Linux in both places.
I really like my 1440p monitors at home more than the 4K monitors at work. At work, I'm always dealing with scaling and font size issues, but at home everything looks perfect. So I think you're onto something here: 1440p just seems to be a better resolution on a 27" panel.
For me, 4k is fine at 16-27", but as you go up to 32" I'd ideally want 5k or 6k, as the difference is quite noticeable for text (even when high-DPI scaling is working, and across operating systems).
If you're running the 4k display at 1440p, I'd agree. But I run two 4k 60Hz displays on a 16" MacBook Pro work laptop at 2880x1440 effective resolution and it looks fine to me. Yes, it doesn't look as good as the Studio Display I have on my personal Mac. But even though I have the MacBook Pro screen right next to the 27" monitors, I just don't notice the difference as I switch between them all day long.
I'm not saying there is no difference. But I suspect how one reacts to it is highly dependent on the person. I wear glasses that aren't perfectly focused for either screen, but they're good enough to get the job done, and most importantly, I get to use my two 4k 27" monitors to give me the same effective resolution as a Studio Display at far less money than two Studio Displays.
That's what I'm pointing out. The person I replied to thinks it does: "In the past I refused to accept that 27 inch 4k is as bad as people say and got one myself, only to regret buying it. Get the correct size and scaling and your life will be peaceful. I would recommend the same for Linux and Windows too, tbh, but people who game might be fine with other sizes and resolutions."
In my experience the ISP generally assigns a fixed /64 to each customer. So if you change ISPs in the future, you might want to keep the host part of your addresses the same and just use a script to swap in the new /64 prefix.
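A minimal sketch of such a script using Python's standard ipaddress module (the prefixes below are documentation placeholders): it keeps the low 64 bits (the interface identifier) and grafts them onto the new prefix.

    # Keep the interface ID (low 64 bits) of an address, but move it
    # into a new /64 prefix -- e.g. after switching ISPs.
    import ipaddress

    def swap_prefix(addr, new_prefix):
        addr = ipaddress.IPv6Address(addr)
        net = ipaddress.IPv6Network(new_prefix)
        iid = int(addr) & ((1 << 64) - 1)          # low 64 bits
        return ipaddress.IPv6Address(int(net.network_address) | iid)

    # Example with documentation prefixes (2001:db8::/32):
    print(swap_prefix("2001:db8:1111:1::42", "2001:db8:2222:1::/64"))
    # -> 2001:db8:2222:1::42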
My ISPs change the /64 more often, so I use ULAs a lot more. My router runs its own DNS server and advertises that DNS server at a ULA address.
I've never heard of an end user ISP that would announce and route a customer owned block of addresses. They'll all give you a static allocation, but it will be in their block. Maybe if you were a huge customer they could do it... but I can't believe they would go to that much trouble for the measly <$100/month they get from me.
Also, I very much don't want all my outbound internet traffic to come from a permanent address range I am publicly known to own. I'd still want an ephemeral /56 for outbound traffic that changed from time to time.
Typically it's similar to IPv4: they try to assign the same address/prefix to the same MAC/DUID. The most common way to lose your addresses is replacing your router. Hopefully new routers allow you to set the DHCPv6 DUID somehow...
I haven't experienced this. For me it's statically assigned; my guess is that the PON serial and/or MAC is being used, or the customer ID. I think ISPs have become very automated these days and everything seems to be some sort of SDN. It saves a lot of labour hours in troubleshooting, like customers forgetting the wifi passwords to their routers.
Interesting. Honestly, I like having control over it; that would annoy me. I deliberately change the DUID in dhcpcd to force my public addresses to change every so often.
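For reference, a hedged sketch of what that looks like: dhcpcd keeps its DUID in a persistent file (commonly /var/lib/dhcpcd/duid, though the path varies by distro) as colon-separated hex, so regenerating a DUID-LLT and restarting dhcpcd forces new addresses. The MAC below is a placeholder:

    # Regenerate a DUID-LLT (RFC 8415 type 1): 2-byte type, 2-byte hardware
    # type (1 = Ethernet), 4-byte timestamp (seconds since 2000-01-01), MAC.
    import time

    MAC = bytes.fromhex("001122334455")   # placeholder -- use your NIC's MAC
    EPOCH_2000 = 946684800                # 2000-01-01 in Unix time
    duid = (b"\x00\x01" + b"\x00\x01"
            + int(time.time() - EPOCH_2000).to_bytes(4, "big")
            + MAC)

    # dhcpcd's duid file: colon-separated hex octets on one line.
    # Needs root; restart dhcpcd afterwards so it picks up the new DUID.
    with open("/var/lib/dhcpcd/duid", "w") as f:
        f.write(":".join(f"{b:02x}" for b in duid) + "\n")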
Maybe in 50 years the cache on CPUs and GPUs will be 1TB, enough to run multiple LLMs (an entire model per task). Having robots like in the movies would need LLMs much, much faster than what we see today.
In order to go from 360p video 15 years ago to 4K HDR today, I have upgraded from 2 Mbps 802.11g WiFi on a 1366x768 display to a 200 Mbps connection on 802.11ax and a 55 inch 4K television.
The experience is quite immersive and well worth the upgrade, which happened very progressively (WiFi 5 for 1080p, then WiFi 6/7 for 4K).
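Back-of-envelope, using the resolutions and speeds above, the bandwidth grew even faster than the pixel count:

    # 360p -> 4K pixel growth vs. the 2 Mbps -> 200 Mbps bandwidth growth.
    pixels_360p = 640 * 360      # ~0.23 megapixels
    pixels_4k = 3840 * 2160      # ~8.3 megapixels
    print(f"pixels:    {pixels_4k / pixels_360p:.0f}x")   # 36x
    print(f"bandwidth: {200 / 2:.0f}x")                   # 100x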
At the same time, we had cheap consumer gigabit ethernet, and we still have cheap consumer gigabit ethernet. 2.5GbE is getting there price-wise, but switches are still somewhat rare/expensive.
Emphasis on "somewhat" - I was able to build a 10G backbone for my NAS and such for less than $200 or so with a CRS305 and some direct-attach cables; looks like the CRS304 would have made this even easier ...
To be fair, I only started this because the dock I got had 10G - https://www.owc.com/solutions/thunderbolt-pro-dock and I saw some 10G cards on eBay cheap and my old Nortel switch had a 10G uplink and ... well, you know how it goes!
As someone who works in networking (consumer, prosumer, enterprise, everything), I can say the problem is far more complex than "just make it open".
Manufacturers can support devices for a long time, but it costs money that consumers and businesses aren't willing to pay for or don't value. Cybersecurity is a joke, and the general consensus is: we will pay for things as and when there is a fire.

We don't put a price on prevention because we can't really show shareholders how we profited from the attacks we blocked. So we create an arbitrary certification and pass things according to it. This certification doesn't say anything about firmware. But if we do get attacked, then we can convince the shareholders to spend money on better equipment this financial year, and then not bother until the next time we have a problem.
Some of these certifications focus on what the devices allow you to do (like ACLs and firewalls) and check whether they pass those tests. But actually looking at the firmware and finding vulnerabilities is not in scope.