nodesocket's comments | Hacker News

Can we get proper HDR support first in macOS? If I enable HDR on my LG OLED monitor it looks completely washed out and blacks are grey. Windows 11 HDR works fine.

Really? I thought it's always been that HDR was notorious on Windows, hopeless on Linux, and only really worked in a plug-and-play manner on Mac, unless your display has an incorrect profile or something?

https://www.youtube.com/shorts/sx9TUNv80RE


macOS does wash out SDR content in HDR mode, specifically on non-Apple monitors. An HDR video playing in windowed mode will look fine, but all the UI around it has black and white levels very close to grey.

Edit: to be clear, macOS itself (Cocoa elements) is all SDR content and thus washed out.


Define "washed out"?

The white and black levels of the UX are supposed to stay in SDR. That's a feature, not a bug.

If you mean the interface isn't bright enough, that's intended behavior.

If the black point is somehow raised, then that's bizarre and definitely unintended behavior. I honestly can't even imagine what could be causing that to happen; it seems like it would have to be a serious macOS bug.

You should post a photo of your monitor, comparing a black #000 image in Preview with a pitch-black frame from a video. People edit HDR video on Macs, and I've never heard of this happening before.
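If it helps, here's a rough sketch for generating such a test image (Pillow used purely for illustration; resolution and filename are arbitrary):

    from PIL import Image

    # Pure #000 test image to open full-screen in Preview next to a
    # pitch-black video frame; a raised black point shows up immediately.
    Image.new("RGB", (1920, 1080), color=(0, 0, 0)).save("black.png")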


That's intended behavior for monitors limited in peak brightness

I don't think so. Windows 11 has an HDR calibration utility that lets you adjust brightness and HDR levels while keeping blacks perfectly black (especially with my OLED). When I enable HDR on macOS, whatever settings I try, including adjusting brightness and contrast on the monitor, the blacks look completely washed out and grey. HDR DOES seem to work correctly on macOS, but only if you use Mac displays.

That’s the explanation I found last time I went down this rabbit hole: they don’t have physical brightness info for third-party displays, so it just can’t be done any better. But I don’t understand how that can lead to making the black point terrible. Black should be the one color every emissive colorspace agrees on.

Actually, intended behavior in general. Even on their own displays the UI looks grey when HDR content is playing.

Which, personally, I find to be extremely ugly and gross and I do not understand why they thought this was a good idea.


Oh, that explains why it looked so odd when I enabled HDR on my Studio.

Huh, so that’s why HDR looks like shit on my Mac Studio.

Works well on Linux; just toggle a checkbox in the settings.

AI is arguably more important than whatever gaming gimmick you're talking about.

Excited for t5g instances to release... Eventually.

lol, what a headline. Might as well be ICE is kicking dogs and drowning babies.

Yeah, why not just alias the old API calls to the new ones if implementation details changed?

Would be interesting to hear what database they are using and how they are doing replication. Is it simple master/slave or multi-master?

Let’s Encrypt currently has a single primary with a handful of replicas, split across a primary and backup DC.

We’re in the process of adopting Vitess to shard into a handful of smaller instances, as our single big database is getting unwieldy.
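For anyone curious what that looks like from the application side: vtgate speaks the MySQL wire protocol, so clients mostly don't change. A minimal sketch (Python just for illustration; the hostname, credentials, database, and table name are placeholders, not our actual setup):

    import pymysql

    # Connect to vtgate exactly as if it were a single MySQL server;
    # endpoint, credentials, and names here are placeholders.
    conn = pymysql.connect(host="vtgate.internal", port=3306,
                           user="app", password="secret", database="boulder")
    try:
        with conn.cursor() as cur:
            # vtgate routes this to the right shard (or scatters and merges).
            cur.execute("SELECT COUNT(*) FROM certificates")
            print(cur.fetchone()[0])
    finally:
        conn.close()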


Let’s Encrypt is an incredible project and the internet is better off for it. If you ever have questions about Vitess or need help, please let me know.

Thanks. Would love to see a tech blog post once you get Vitess implemented.

We’ve already started drafting it :)

https://github.com/letsencrypt/boulder

There's a docker-compose.yml file there that gives some idea of the setup.

Appears to be using MariaDB.

They shut down the OCSP responders and expiry email reminders, so there really isn't much need for a database beyond rate limits, auth data, and caching.

For Certificate Transparency, certificates are submitted to Google- and Cloudflare-run logs, but I don't think Let's Encrypt runs its own logs.


Let’s Encrypt does operate CT logs. I wrote a blog post about our current-generation logs at https://letsencrypt.org/2024/03/14/introducing-sunlight

I assume they want to store metadata instead of having to pull it from the certificates themselves, but maybe that’s actually easier and more performant.

When building CLI and infrastructure tools with AI, my go-to is Go. Pardon the pun.

Are they updating the t class instances to t5g as well?

They usually end up upgrading most instance types to new Graviton generations; it just takes time to do the full rollout.

Not really: burstable (“t”) instances haven't been updated in years. The current generation (“t4g”) still uses Graviton2 processors. I get the impression that they would vastly prefer cost-conscious users to use spot instances.

The -flex suffix variants seem to be the spiritual successor to the t burstable class.

e.g. c7i-flex.large, etc.


Ah, thank you for pointing these out! I'd missed the introduction of “flex” instance types (apparently in May last year[0] – still long overdue relative to the introduction of T4g in September 2020[1]). Curious that so far, they all appear to be Intel-based (C7i, M7i, C8i, M8i, and R8i). M7i-flex instances also cost 45% more than the corresponding T4g instances. That's sort of understandable, as the generational improvements probably bring more than 45% better performance for most workloads, but it also makes them harder to justify for the sorts of long-running, mostly-idle duties they're being touted for.

[0]: https://aws.amazon.com/blogs/aws/new-compute-optimized-c7i-f... [1]: https://aws.amazon.com/blogs/aws/new-t4g-instances-burstable...


If you're interested in the underlying technology of flex, there are some re:Invent talks from last year on YouTube where they acknowledge it's based on VM live migration, which I think is the first public reference to AWS using live migration in their products.

I suspect the burstable types were always priced too cheaply and were more about attracting the cheap market segment, which they don't need now in the days of AI money.

Burstable pricing gets complex quickly when you add in the option to burst to full usage. Flex seems a lot simpler, which is great.


Crazy this has so few upvotes. I thought it was a very good discussion with lots of interesting perspectives. I assume the "wisdom" of the HN crowd sees Rogan and goes... Nope.

Jensen Huang is an incredible storyteller. Lots of wisdom and original stories about the start of NVIDIA, how Sega helped them, and the origins of Tesla FSD and OpenAI. Lots of personal stories and the growth of Jensen himself. Respectful.

The very end of the interview, where he talks about coming to America and going to school in Kentucky (a very rough and poor area), was a great story that most people don't know about him.

Anyone who is interested in AI and how Jensen thinks should listen to this. Lots of good / interesting arguments here.

Nobody cares. We are tired of AI.

I'm looking at deploying SeaWeedFS, but the problem is cloud block storage costs. I need 3-4TB and Vultr costs $62.50/mo for 2.5TB. DigitalOcean $300/mo for 3TB. AWS using legacy magnetic EBS storage $150/mo... GCP persistent disk standard $120/mo.

Any alternatives besides racking my own servers?

*EDIT* Did a little ChatGPT and it recommended a tiny t4g.micro plus an EBS volume of type cold HDD (sc1). Not gonna be fast, but for offsite backup it will probably do the trick.
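For reference, a rough boto3 sketch of that setup (region, AZ, size, and instance ID are placeholders; untested):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # 3 TiB of cold HDD (sc1), the cheapest EBS tier; slow, but fine for
    # mostly-sequential backup writes.
    vol = ec2.create_volume(
        AvailabilityZone="us-east-1a",
        Size=3072,  # GiB
        VolumeType="sc1",
    )
    ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])

    # Attach to the t4g.micro (placeholder instance ID).
    ec2.attach_volume(
        VolumeId=vol["VolumeId"],
        InstanceId="i-0123456789abcdef0",
        Device="/dev/sdf",
    )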


I'm confused why you would want to turn an expensive thing (cloud block storage) into a cheaper thing (cloud object storage), with worse durability, in a way that is more effort to run.

I'm not saying it's wrong since I don't know what it's for, I'm just wondering what the use-case could be.


I've quickly come to this conclusion. Essentially I'm looking for offsite backup of my NAS and currently paying around $15-$20/mo to Backblaze. I thought I might be able to roll my own object store for cheaper, but that was idiotic. :-)

Totally fair. There are some situations where you can "undercut" cloud-native object storage on a per-TB basis (e.g. you have a big dedi at Hetzner with 50TB or 100TB of mirrored disk), but you pay a cost in operational overhead and durability vs a managed object store. It's really hard to make the economics work at a $20 price point; if you get up to a few $100/mo or more, then there are some situations where it can make sense.

For backup to a dedi you don't really need to bother running the object store though.


Hetzner VM with mounted storage box.

https://www.hetzner.com/storage/storage-box/

It's not as fast as local storage of course, but it's cheap!
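If mounting ends up being a hassle, Storage Boxes also speak SFTP, so you can just push backup archives straight to them. A rough paramiko sketch (hostname, credentials, and paths are placeholders):

    import paramiko

    # Placeholder Storage Box hostname and credentials.
    transport = paramiko.Transport(("uXXXXXX.your-storagebox.de", 22))
    transport.connect(username="uXXXXXX", password="...")
    sftp = paramiko.SFTPClient.from_transport(transport)

    # Push one backup archive offsite.
    sftp.put("/tank/backups/nas-2024-06-01.tar.zst", "nas-2024-06-01.tar.zst")

    sftp.close()
    transport.close()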


Shot you an email about how we can potentially help you with this.

Doesn't Chrome Developer Tools automatically un-minify (pretty-print) code?
