Without engaging in the whole "anthropomorphizing" debate in this post, I'll say I reject the framing, for many reasons I'd be happy to discuss.
At the same time, I understand what you mean, and I agree that no, this does not give any LLM any sense of anything in the way we conceive it. But it provides them context they take for granted in service of further customizing their outputs.
I am not so concerned about the anthropomorphizing language, which is technically incorrect but forgivable in communication, but with the practical fact that incorporating words or data points about time does not actually give the model anything expressed in an experiential time dimension...
I would like to see timeline comprehension. Maybe this is that, but I couldn't tell and I kind of doubt it.
Yes, I see your point: You are saying that embedding time points doesn't equate with giving an understanding of time. I think you're right.
Part of the point of the article is that the process of giving LLMs awareness of specific context is useful and is a step-by-step process:
1. Provide access to data: Claude, while they have access to the date from their system prompt, do not get a timestamp for each message. As a result, even if they had the ability to reason about time, they would not be able to, because the data is not provided to them.
2. Provide tools to manipulate the data: Claude, on their own, are a probabilistic text model that cannot reliably do computations, even ones as simple as 1+1=2, for provable reasons (they don't have access to external memory). In the same way, as you point out, they cannot manipulate, compare, or sort the temporal data points they are provided without tools. That's why we provide them tools for those operations.
3. Provide tools to translate context: Claude, on their own, might not be able to connect information about timestamps to anything else in their corpus, so it's important to translate the datetimes into other forms, such as elapsed times ("1 minute and 12 seconds ago") or descriptions of what you might do in that span ("commute").
4. Provide prompts to metacognitively reflect: Claude, even with the data points and tools, will only factor in time on a per-message basis, with no appreciation of the global timeline. That's why you have to prime that metacognitive process with a prompt such as: "Looking back at the chronology of this conversation, through our timestamps, what can you infer about the timeline?"
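To make step 3 concrete, here is a minimal sketch in plain Python of the kind of translation a tool can do. The function name and the "and"-joined phrasing are my own illustration, not the actual server's code:

```python
from datetime import datetime, timezone

def humanize_elapsed(earlier: datetime, later: datetime) -> str:
    """Translate the gap between two timestamps into a prose description
    an LLM can relate to the rest of its context."""
    seconds = int((later - earlier).total_seconds())
    minutes, seconds = divmod(seconds, 60)
    hours, minutes = divmod(minutes, 60)
    parts = []
    if hours:
        parts.append(f"{hours} hour{'s' if hours != 1 else ''}")
    if minutes:
        parts.append(f"{minutes} minute{'s' if minutes != 1 else ''}")
    if seconds or not parts:
        parts.append(f"{seconds} second{'s' if seconds != 1 else ''}")
    return " and ".join(parts) + " ago"

t0 = datetime(2026, 1, 1, 12, 0, 0, tzinfo=timezone.utc)
t1 = datetime(2026, 1, 1, 12, 1, 12, tzinfo=timezone.utc)
print(humanize_elapsed(t0, t1))  # → 1 minute and 12 seconds ago
```

The point is that "1 minute and 12 seconds ago" appears in the model's training corpus in ways a raw ISO timestamp does not, so the translated form is far easier for it to use.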
This MCP server was inspired by a very long session I had with Claude and GPT while working on a programming competition. I relied on them for executive functioning: I have a lot of trouble with the 80/20 principle, and they help me judge the right amount of effort to invest given the time left.
In that context, it was tedious to keep re-explaining to the models what time it was, how much time was left before the deadline, etc. By building this MCP server, I gave the models the ability to reflect on this directly, without me having to supply the information each time.
I hope this helps. GPT is telling me the HN style is not verbose, but I am not sure what details to cut.
For those looking for "a calendar", here is one[0] I made from a stylized orrery. No AI. Should be printable to US Letter paper. Enjoy.
EDIT: former title asserted that the LLM built a calendar
[0] https://ouruboroi.com/calendar/2026-01-01