The models may be somewhat frozen in time, but with the right tools available they don't need all information baked into them. If they can query reliable sources and pull fresh information into context, they can talk about things well outside their original training data.
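A minimal sketch of what that tool-augmented pattern looks like — everything here (`KNOWLEDGE_BASE`, `retrieve`, `answer`) is an illustrative stand-in, not any particular vendor's API:

```python
# Toy retrieval-augmented loop: the "model" stays frozen, but a
# retrieval step injects post-training facts into its context.
# KNOWLEDGE_BASE stands in for a live search index or API.

KNOWLEDGE_BASE = {
    "2024 eclipse": "A total solar eclipse crossed North America on April 8, 2024.",
    "python 3.13": "Python 3.13 shipped an experimental free-threaded build.",
}

def retrieve(query: str) -> str:
    """Stand-in for a search/API call that fetches fresh information."""
    for key, fact in KNOWLEDGE_BASE.items():
        if key in query.lower():
            return fact
    return "No reliable source found."

def answer(question: str) -> str:
    # The frozen model only has to reason over the retrieved context,
    # not have the fact memorized in its weights.
    context = retrieve(question)
    return f"Based on retrieved context: {context}"

print(answer("What happened in the 2024 eclipse?"))
```

The point is architectural: knowledge lives in the retrievable store, which can be updated daily, while the model's weights stay fixed.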
For a few months of news this works, but over a span of years even the statistical nature of language itself drifts. Have you shipped natural language models to production? Even simple classifiers need periodic retraining because of drift. There's no world where you lead the industry in serving LLMs and don't train them as well.
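One common way to make that drift concrete is to compare word-frequency distributions between a training-time corpus and live traffic — here sketched with KL divergence over toy corpora; the example texts and any retraining threshold are made up for illustration:

```python
# Quantifying vocabulary drift between two time windows with KL
# divergence. A rising score over time is one signal teams use to
# trigger retraining of a deployed classifier.
import math
from collections import Counter

def kl_divergence(p_counts, q_counts, smoothing=1e-6):
    """KL(P || Q) over word-frequency counts, with additive smoothing
    so words absent from one window don't produce infinities."""
    vocab = set(p_counts) | set(q_counts)
    p_total = sum(p_counts.values()) + smoothing * len(vocab)
    q_total = sum(q_counts.values()) + smoothing * len(vocab)
    kl = 0.0
    for word in vocab:
        p = (p_counts.get(word, 0) + smoothing) / p_total
        q = (q_counts.get(word, 0) + smoothing) / q_total
        kl += p * math.log(p / q)
    return kl

# Toy corpora: language the model trained on vs. language it sees now.
train_corpus = "tweet viral hashtag follower retweet".split()
live_corpus = "thread viral algorithm engagement reel".split()

drift = kl_divergence(Counter(train_corpus), Counter(live_corpus))
print(f"KL divergence: {drift:.3f}")  # larger => more vocabulary drift
```

Identical windows score near zero; the further live language moves from the training distribution, the larger the score, which is exactly the "drift" that forces periodic retraining.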