Can we get proper HDR support first in macOS? If I enable HDR on my LG OLED monitor it looks completely washed out and blacks are grey. Windows 11 HDR works fine.
Really? I thought it had always been that HDR was notorious on Windows, hopeless on Linux, and only really worked in a plug-and-play manner on a Mac, unless your display has an incorrect profile or something.
macOS does wash out SDR content in HDR mode, specifically on non-Apple monitors. An HDR video playing in windowed mode will look fine, but all the UI around it has black and white levels very close to grey.
Edit: to be clear, macOS itself (Cocoa elements) is all SDR content and thus washed out.
The white and black levels of the UI are supposed to stay in SDR. That's a feature, not a bug.
If you mean the interface isn't bright enough, that's intended behavior.
If the black point is somehow raised, then that's bizarre and definitely unintended behavior, and I honestly can't even imagine what could be causing it. It does seem like it would have to be a serious macOS bug.
You should post a photo of your monitor, comparing a black #000 image in Preview with a pitch-black frame from a video. People edit HDR video on Macs, and I've never heard of this happening before.
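If it helps, here's a quick way to generate a known-good #000 test image (a minimal sketch, assuming Pillow is installed; the size and filename are arbitrary):

    # Write a pure-black 1920x1080 PNG to open in Preview next to a black video frame.
    from PIL import Image

    Image.new("RGB", (1920, 1080), (0, 0, 0)).save("black.png")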
I don't think so. Windows 11 has an HDR calibration utility that lets you adjust brightness and HDR levels, and it keeps blacks perfectly black (especially with my OLED). When I enable HDR on macOS, whatever settings I try, including adjusting brightness and contrast on the monitor, the blacks look completely washed out and grey. HDR DOES seem to work correctly on macOS, but only if you use Apple displays.
That’s the statement I found last time I went down this rabbit hole: they don’t have physical brightness info for third-party displays, so it just can’t be done any better. But I don’t understand how that can lead to making the black point terrible. Black should be the one color every emissive colorspace agrees on.
You can find a docker-compose.yml file to get some idea.
Appears to be using MariaDB.
They shut down OCSP responders and expiry email reminders, so there really is no need to have a database apart from rate limits, auth data, and caching.
For Certificate Transparency, certificates are submitted to Google- and Cloudflare-run logs, but I don't think Let's Encrypt runs its own.
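For what it's worth, submission to a log is just the RFC 6962 add-chain call. A minimal sketch, assuming the requests library, with a placeholder log URL and placeholder base64 DER certificates (in practice the CA submits on your behalf):

    # RFC 6962 add-chain: POST the certificate chain (leaf first) to a CT log
    # and get back a Signed Certificate Timestamp (SCT).
    import requests

    LOG_URL = "https://ct.example.com"  # placeholder; real endpoints are in the public CT log lists
    chain_b64 = ["<base64 DER leaf>", "<base64 DER intermediate>"]  # placeholders

    resp = requests.post(f"{LOG_URL}/ct/v1/add-chain", json={"chain": chain_b64}, timeout=10)
    resp.raise_for_status()
    sct = resp.json()  # sct_version, id, timestamp, extensions, signature
    print(sct["timestamp"])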
Not really: burstable (“t”) instances haven't been updated in years; the current generation (“t4g”) still uses Graviton2 processors. I get the impression that they would vastly prefer cost-conscious users to use spot instances.
Ah, thank you for pointing these out! I'd missed the introduction of “flex” instance types (apparently in May last year[0] – still long overdue relative to the introduction of T4g in September 2020[1]). Curious that, so far, they all appear to be Intel-based (C7i, M7i, C8i, M8i, and R8i). M7i-flex instances also cost 45% more than the corresponding T4g instances. That's sort of understandable, as the generational improvements probably bring more than 45% better performance for most workloads, but it also makes them harder to justify for the sorts of long-running, mostly-idle duties they're being touted for.
If you're interested in the underlying technology behind flex, there are some re:Invent talks from last year on YouTube where they acknowledge it's based on VM live migration, which is, I think, the first public reference to AWS using live migration in their products.
I suspect the burstable types were always priced too cheaply and were more about attracting the cheap market segment, which they don't need now in the days of AI money.
Burstable pricing gets complex quickly once you add in the option to burst to full usage. Flex seems a lot simpler, which is great.
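To illustrate the "complex quickly" part, here's a toy credit-bucket model in Python. The accrual rate, balance cap, and surplus price below are illustrative assumptions, not published figures for any particular instance type:

    # Toy CPU-credit model for a burstable instance in "unlimited" mode.
    # All constants below are assumptions for illustration only.
    CREDITS_EARNED_PER_HOUR = 12        # assumed accrual rate
    CREDIT_BALANCE_CAP = 288            # assumed cap (~24h of accrual)
    SURPLUS_PRICE_PER_VCPU_HOUR = 0.05  # assumed surplus-credit charge

    def surplus_cost(hours, vcpu_util, vcpus=2, balance=0.0):
        """1 credit = 1 vCPU at 100% for 1 minute (60 credits per vCPU-hour)."""
        cost = 0.0
        for _ in range(hours):
            balance = min(balance + CREDITS_EARNED_PER_HOUR, CREDIT_BALANCE_CAP)
            balance -= vcpu_util * vcpus * 60   # credits consumed this hour
            if balance < 0:                     # burst past the bucket: pay surplus
                cost += (-balance / 60) * SURPLUS_PRICE_PER_VCPU_HOUR
                balance = 0.0
        return cost

    print(surplus_cost(hours=24, vcpu_util=0.05))  # mostly idle: baseline covers it
    print(surplus_cost(hours=24, vcpu_util=1.00))  # pegged: surplus charges stack up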
Crazy this has so few upvotes. I thought it was a very good discussion with lots of interesting perspectives. I assume the "wisdom" of the HN crowd sees Rogan and goes... nope.
Jensen Huang is an incredible storyteller. Lots of wisdom and original stories about the start of NVIDIA, how Sega helped them, and the origins of Tesla's FSD and OpenAI. Lots of personal stories, and the growth of Jensen himself. Respectful.
The very end of the interview, where he talks about coming to America and going to school in Kentucky (a very rough and poor area), was a great story that most people don't know about him.
I'm looking at deploying SeaWeedFS but the problem is cloud block storage costs. I need 3-4TB and Vultr costs $62.50/mo for 2.5TB. DigitalOcean $300/mo for 3TB. AWS using legacy magnetic EBS storage $150/mo... GCP persistent disk standard $120/mo.
Any alternatives besides racking own servers?
*EDIT* Did a little ChatGPT-ing and it recommended a tiny t4g.micro with EBS volumes of type cold HDD (sc1). Not gonna be fast, but for offsite backup it will probably do the trick.
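Rough per-TB math on the quotes above, in Python; the AWS and GCP line items don't state a size, so ~3 TB is assumed to match the totals:

    # Normalize the quoted block-storage prices to $/TB/month.
    # Sizes for the AWS and GCP quotes are assumed (3 TB), since only totals were given.
    quotes = {
        "Vultr block storage":       (62.50, 2.5),
        "DigitalOcean volume":       (300.00, 3.0),
        "AWS EBS magnetic (legacy)": (150.00, 3.0),
        "GCP pd-standard":           (120.00, 3.0),
    }
    for name, (monthly_usd, tb) in quotes.items():
        print(f"{name}: ${monthly_usd / tb:.2f}/TB/month")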
I'm confused why you would want to turn an expensive thing (cloud block storage) into a cheaper thing (cloud object storage) with worse durability in a way that is more effort to run?
I'm not saying it's wrong since I don't know what it's for, I'm just wondering what the use-case could be.
I've quickly come to this conclusion. Essentially looking for offsite backup of my NAS and currently paying around $15-$20/mo to Backblaze. I thought I might be able to roll my own object store for cheaper but that was idiotic. :-)
Totally fair. There are some situations where you can "undercut" cloud-native object storage on a per-TB basis (e.g. you have a big dedi at Hetzner with 50TB or 100TB of mirrored disk), but you pay a cost in operational overhead and durability vs a managed object store. It's really hard to make the economics work at a $20 price point; if you get up to a few hundred dollars or more, then there are some situations where it can make sense.
For backup to a dedi you don't really need to bother running the object store though.