The important difference is that humans are trained on far less data than ChatGPT. This implies that the human brain and LLMs are very different: the human brain likely has many language faculties pre-encoded (this is the main argument of Universal Grammar). OpenAI's GPT-4 is now also trained on visual data.
Anyway, I think a lot of ongoing conversations involve orthogonal arguments. ChatGPT can be impressive and generate text on a broader range of topics than the average human, while still not giving us deeper insight into how human language works.
Based on the current pace of advances, in about a year we should see the first real-world robot that learns from interacting with its environment (probably from Tesla or OpenAI).
I'm curious (just leaving this here to see what happens in the future) what Google's excuse will be this time.
This is the same situation again: Google supposedly has superior tech but isn't releasing it (or maybe it's only as good as Bard...).