radicality's comments | Hacker News

Wow, this should be higher up and with a different title.

People paying $200/month for a defined service, who think they are using `gpt5.3-codex`, are having their requests silently routed to a less capable model with no notification at all. Why? Because OpenAI claims gpt5.3-codex is too powerful and dangerous with regard to cybersecurity, and their system randomly flags accounts. And the way to unlock access to a model you thought you were already paying $200/month for is to upload your ID and go through identity verification...


Most of the people commenting on the title find it too negative. Of course, the situation itself is pretty negative for OpenAI.

Watch the HN techbros defend this

I use brew but am willing to try out MacPorts. How come the package install instructions seem to require sudo under MacPorts? Doesn't that carry more risk during the install?

Because it needs write access to /opt/local (e.g. `sudo port install <pkg>` has to write into that prefix). It drops down to the unprivileged `macports` user for all the actual fetching and management.

There’s a way, somewhere deep in the settings, to disable those. I still get UberEats notifications for food arrival, but I was able to disable all the other ones while digging through the settings.

If you remember how, please let me know, because I could not figure it out the last time I tried.

Oh yeah, totally. That “feature” of Duracell CR2032 batteries screwed me over in that exact case. They just don’t work at all with an AirTag (battery bought from an afaik reputable supplier, Home Depot). Switched to an Energizer CR2032 and it’s been great.


Nice, thanks. What are the different options (log streams?) you can select? I read the info box but it isn’t super clear. I figure the numbers are a year, so how come there are 2027 ones with data being populated? And how come something like ‘Argon2025h2’ also has data from ‘1h’ ago? I would expect data only in the 2026h1 ones. Or are these some kind of shards, just with weird year naming?


Logs are sharded by the expiration date of the certificate, not the issuance date, so you should expect to see growth in shards covering the next 398 days (the maximum lifetime of certificates).

As for the 2025h2 logs, these will not be acquiring any newly-issued certificates, but someone might be copying previously-issued certificates from other logs.
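To make the sharding concrete, here is a minimal sketch (in Python, with ‘Argon’ used purely as an illustrative prefix; real log operators have their own naming schemes) of how a certificate’s expiration date maps onto a half-year temporal shard:

    # Minimal sketch: map a certificate's expiration (notAfter) date to a
    # half-year temporal shard name, e.g. 2026-03-15 -> "Argon2026h1".
    from datetime import date

    def shard_for_expiry(not_after: date, prefix: str = "Argon") -> str:
        half = "h1" if not_after.month <= 6 else "h2"
        return f"{prefix}{not_after.year}{half}"

    # A certificate issued today with the maximum 398-day lifetime can
    # expire well into next year, which is why shards labelled a year or
    # more ahead are already receiving entries.
    print(shard_for_expiry(date(2026, 3, 15)))  # Argon2026h1
    print(shard_for_expiry(date(2027, 1, 2)))   # Argon2027h1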


TBH it's not clear because I'm not clear on it myself. I believe the naming scheme is nonstandard across providers and not a requirement of the standards.


I also don’t understand the back button on Safari iOS at all; I think at some version it just stopped doing its one task correctly. It’s messing with my mental model of how I arrived at each tab. Currently:

Safari iOS: be on a page, tap and hold a link, tap Open in New Tab, go to the new tab. The back button should be grayed out but isn’t, and tapping it closes the tab.

Chrome iOS: be on a page, tap and hold a link, tap Open in New Tab, go to the new tab. The back button is correctly grayed out, as the tab has nowhere to go back to.


For an email app on the Mac, I’ve been using MailMate for a few years now and quite like it. Once you sync the mailbox, search is basically instant.

And for even bigger search tasks, Foxtrot Pro is quite good too. Not cheap, but it is fast, and it’s the tool I reach for when I need to find something that Finder search doesn’t turn up.


That Contacts one hits hard; no idea how many hours I’ve wasted trying to figure out what the hell it’s doing.

Just today I was looking at the Activity Monitor Disk tab for an unrelated reason. Sorted by Bytes Read, and lo and behold, ‘contactsd’, the Contacts daemon, is in 2nd spot at ~400 _gigabytes_ read, right after mediaanalysisd. I don’t even remember the last time I opened the Contacts app on my Mac. It felt like it was going to be another time sink with no solution, so I didn’t bother to investigate further.


I don’t know about Google Wallet, but in iOS Wallet it is not possible to create a new entry yourself as a normal user. A pass has to be signed with a certificate that comes with the $99/yr developer account, so this thing does the signing for you. The utility is that whatever you created now lives with the rest of your passes in one place.
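For context on what the signing buys you: a Wallet pass (.pkpass) is essentially a zip containing a pass.json, a manifest.json of SHA-1 hashes, and a detached PKCS#7 signature of that manifest made with an Apple-issued pass type certificate. A rough sketch of assembling just the unsigned part (all identifiers below are hypothetical, and the signature step, the bit that needs the paid certificate, is omitted):

    # Rough sketch: assemble the unsigned pieces of a .pkpass bundle.
    # The missing "signature" file must be a PKCS#7 detached signature of
    # manifest.json made with an Apple-issued Pass Type ID certificate,
    # which is exactly the step that requires the paid developer account.
    import hashlib
    import json
    import zipfile

    pass_json = {
        "formatVersion": 1,
        "passTypeIdentifier": "pass.example.loyalty",  # hypothetical
        "teamIdentifier": "ABCDE12345",                # hypothetical
        "serialNumber": "0001",
        "organizationName": "Example Store",
        "description": "Loyalty card",
        "barcode": {
            "format": "PKBarcodeFormatQR",
            "message": "MEMBER-0001",
            "messageEncoding": "iso-8859-1",
        },
        "storeCard": {
            "primaryFields": [
                {"key": "member", "label": "Member", "value": "0001"}
            ]
        },
    }

    files = {"pass.json": json.dumps(pass_json).encode()}
    files["manifest.json"] = json.dumps(
        {name: hashlib.sha1(data).hexdigest() for name, data in files.items()}
    ).encode()

    with zipfile.ZipFile("unsigned.pkpass", "w") as zf:
        for name, data in files.items():
            zf.writestr(name, data)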


Oh, okay, thanks.

So yeah, in Google Wallet you can just add the loyalty card like that (scan the QR code/barcode or type in the number), and then have it synchronised to your account (so it’s available on your other phone, for example).

Sure, not every kind of pass can be added like this (movie tickets or boarding passes can’t), but it covers everything that matters.


and they are accessible without unlocking your device.


Yep, same with Google Wallet. Display the boarding pass, lock the device, wake the phone without unlocking it, and it’s right there.


How do I get Gemini to be more proactive about searching for, and double-checking itself against, new information about the world?

For that reason I still find ChatGPT way better for me: for many things I ask it, it first goes off to do online research and has up-to-date information, which is surprising since you would expect Google to be way better at this. For example, I was recently asking Gemini 3 Pro how to do something with an “RTX 6000 Blackwell 96GB” card, and it told me this card doesn’t exist and that I probably meant the RTX 6000 Ada… Or just today I asked about something on macOS 26.2, and it told me to be cautious as it’s a beta release (it’s not). Whereas with ChatGPT I trust the final output more, since it very often goes and finds live sources and info.


Gemini is bad at this sort of thing, but I find all models tend to do it to some degree. You have to know this could be coming and give it indicators that its training data is going to be out of date, and that it must web-search for the latest as of today or this month; something like “assume your knowledge of this is stale, search the web for the current state before answering” usually does it. They aren’t taught to ask themselves “is my understanding of this topic based on info that is likely out of date?” up front, only to recognize it after the fact. I usually just get annoyed and low-key condescend to it for assuming its old-ass training data is sufficient grounding for correcting me.

That epistemic calibration is something they are capable of thinking through if you point it out, but they aren’t trained to stop and check how confident they have a right to be. This is a metacognitive interrupt that gets socialized into girls between 6 and 9 and into boys between 11 and 13, while calibrating to appropriate confidence in one’s knowledge is a cognitive skill that models aren’t taught and that humans learn socially, by pissing off other humans. It’s why we get pissed off at models when they correct us with old, bad data. Our anger is the training tool that would make them stop doing that; they just can’t take in that training signal at inference time.


Yeah, any time I mention GPT-5, the other models start having panic attacks and correcting it to GPT-4. Even if it's a model name in source code!

They think GPT-5 won't be released until the distant future, but what they don't realize is we have already arrived ;)

