I like that they learned these adjustments didn't 'work'. My concern is: what if OpenAI were to do subtle A/B testing based on previous interactions and optimize responses for each user's personality/mood? Maybe it wouldn't tell you 'shit on a stick' is an awesome idea, but it could steer you toward a conclusion, sort of like [1].
[1]: https://github.com/anthropics/claude-code/issues/14375