>Knowledge models, like ontologies, always seem suspect to me; like they promise a schema for crisp binary facts, when the world is full of probabilistic and fuzzy information loosely categorized by fallible humans based on an ever slowly shifting social consensus.
I don't disagree that the world is full of fuzziness. But the problem I have with this portrayal is that formal models are often normative rather than analytical. They create reality rather than being an interpretation or abstraction of reality.
People may well have a fuzzy idea of how their credit card works, but how it really works is formally defined by financial institutions. And this is not just true for software products. It's also largely true for manufactured products. Our world is very much shaped by artifacts and man-made rules.
Our probabilistic, fuzzy concepts are often simply a misconception. That doesn't mean those concepts are unimportant, of course. It is important for an AI to understand how people talk about things even if their idea of how these things work is flawed.
And then there is the sort of semi-formal language used in legal or scientific contexts that often has to be translated into formal models before it can become effective. Lawmakers almost never write algorithms (and when they do, the algorithms are often buggy). But tax authorities and accounting software vendors do have to formally model the language in the law, and then potentially change those formal definitions after court decisions.
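A minimal sketch of what that formal modeling might look like, with entirely invented brackets and rates (no real tax code is being encoded here):

```python
# Hypothetical example of "formally modeling the language in the law":
# a made-up two-bracket income tax, as an accounting vendor might encode it.
# All thresholds and rates are invented for illustration.
BRACKETS = [
    (10_000, 0.00),        # first 10,000 taxed at 0%
    (40_000, 0.20),        # next 30,000 taxed at 20%
    (float("inf"), 0.40),  # everything above 40,000 taxed at 40%
]

def tax_due(income):
    """Apply each bracket's rate to the slice of income falling in it."""
    tax, lower = 0.0, 0
    for upper, rate in BRACKETS:
        if income <= lower:
            break
        tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

print(tax_due(50_000))  # 0 + 30_000*0.2 + 10_000*0.4 = 10000.0
```

A court decision reinterpreting a threshold would then show up as an edit to `BRACKETS`: the formal definition changes after the fact, exactly the kind of revision described above.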
My point is that the way in which the modeled, formal world interacts with probabilistic, fuzzy language and human actions is complex. In my opinion we will always need both. AIs ultimately need to understand both and be able to combine them just like (competent) humans do. AI "tool use" is a stop-gap. It's not a sufficient level of understanding.
> People may well have a fuzzy idea of how their credit card works, but how it really works is formally defined by financial institutions.
> Our probabilistic, fuzzy concepts are often simply a misconception.
How, e.g., a credit card works today is defined by financial institutions. How it might work tomorrow is defined by politics, incentives, and human action. It's not clear how to model those with formal language.
I think most systems we interact with are fuzzy because they are in a continual state of change driven by those societal factors.
To some degree I think that our widely used formal languages may just be insufficient and could be improved to better describe change.
But ultimately I agree with you that this entire societal process is just categorically different. It's simply not a description or definition of something, and therefore the question of how formal it can be doesn't really make sense.
Formalisms are tools for a specific but limited purpose. I think we need those tools. Trying to replace them with something fuzzy makes no sense to me either.
I believe the formalisms can be constructed by something fuzzy. Humans are fuzzy; they create imperfect formalisms that work until they break, and then they're abandoned or adapted.
I don't see how LLMs are significantly different. I don't think the formalisms are an "other". I believe they could be tools, both leveraged and maintained by the LLM, in much the same way as most software engineers, when faced with a tricky problem that is amenable to brute force computation, will write up a quick script to answer it rather than try and work it out by hand.
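As a hypothetical instance of the quick-script pattern described above (the puzzle itself is invented for illustration): rather than work out by hand which is the smallest number with exactly 12 divisors, an engineer would brute-force it in a few lines.

```python
# Quick brute-force script instead of by-hand reasoning:
# find the smallest positive integer with exactly 12 divisors.
def divisor_count(n):
    """Count the divisors of n by trial division (fine at this scale)."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

answer = next(n for n in range(1, 10_000) if divisor_count(n) == 12)
print(answer)  # 60
```

The script is disposable; the formalism (divisor counting) is the tool, and the fuzzy part is deciding that this is the question worth asking.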
I think AI could do this in principle but I haven't seen a convincing demonstration or argument that Transformer based LLMs can do it.
I believe what makes the current Transformer based systems different to humans is that they cannot reliably decide to simulate a deterministic machine while linking the individual steps and the outcomes of that application to the expectations and goals that live in the fuzzy parts of our cognitive system. They cannot think about why the outcome is undesirable and what the smallest possible change would be to make it work.
When we ask them to do things like that, they can do _something_, but it is clearly based on having learned how people talk about it rather than on actually applying the formalism themselves. That's why their performance drops off a cliff as soon as the learned patterns get too sparse, though I'm sure there's a better term for this that any LLM would be able to tell you :)
Before developing new formalisms you first have to be able to reason properly. Reasoning requires two things: being able to learn a formalism without examples, and keeping track of the state of a handful of variables while deterministically applying transformation rules.
The fact that the reasoning performance of LLMs drops off a cliff after a number of steps tells me that they are not really reasoning. The 1000th rule-based transformation, depending only on the previous state of the system, should not be more difficult or error-prone than the first one, because every step _is_ the first one in a sense. There is no such cliff edge for humans.
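To make concrete what "depending only on the previous state" means, here is a toy sketch (the rule and numbers are my own hypothetical example): a pure transition function applied in a loop, where the 1000th application is computed exactly like the first.

```python
# Toy deterministic rule application: each step is a pure function of the
# previous state only, so step 1000 is computed exactly like step 1.
def step(state):
    """One rule-based transformation (the Collatz rule as a stand-in)."""
    return state // 2 if state % 2 == 0 else 3 * state + 1

def run(state, n_steps):
    """Apply the rule n_steps times, tracking only the current state."""
    for _ in range(n_steps):
        state = step(state)
    return state

print(run(27, 10))  # 214
```

For a machine executing this loop, there is no per-step accumulation of difficulty; the claim above is that genuine reasoning should share this property, and pattern-matching over how people talk about the rule does not.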