An original source for the Heideggerian critique of symbolic AI projects is Hubert Dreyfus, a philosophy professor at MIT who specialized in Heidegger and argued that his colleagues in the CS department were codifying just the kind of naive views on cognition that Heidegger spent his life criticizing.
See his books “What Computers Can’t Do” and “Being-in-the-World”, and the paper “Why Heideggerian AI Failed and How Fixing It Would Require Making It More Heideggerian”.
A basic point is that ordinary human coping does not involve conceptual thinking, schematic rules, or the manipulation of symbols. It’s a bit like the fast/slow distinction in “Thinking, Fast and Slow”: most skilled coping runs without deliberate reasoning.
We do not fundamentally live by constantly consulting our inner symbolic representation of the world, though we do that too. The more fundamental way of being is to just cope and care directly without explicit cognitive representation.
So I could attempt to codify an “expert system” for my way of coping with and caring for my cat, let’s say. But it would only be a kind of symbolic ghost of my real way of being, and it would never be sufficient. The more precise I tried to make it, the more complex it would become, until it became a gigantic mess, because it’s fundamentally an inaccurate model.
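To make the brittleness concrete, here is a toy sketch of what such a cat-care “expert system” might look like. This is my own illustration, not anything from Dreyfus; all the rules and predicates are made up. The point is visible in miniature: each attempt to make the rules more precise adds another rule patching an exception to the last one.

```python
# Toy rule-based "expert system" for cat care (hypothetical illustration).
# Each rule maps observed conditions to a recommended action. Later rules
# refine earlier ones, which is exactly how the rule set starts to sprawl.

RULES = [
    # (condition over observed facts, recommended action)
    (lambda f: f.get("meowing") and f.get("near_bowl"), "feed"),
    (lambda f: f.get("meowing") and f.get("near_door"), "open door"),
    (lambda f: f.get("scratching_furniture"), "redirect to scratching post"),
    # But what if she meows at the bowl right after eating, out of habit?
    # Precision demands another rule to override the first one:
    (lambda f: f.get("meowing") and f.get("near_bowl") and f.get("recently_fed"),
     "ignore (probably habit)"),
    # ...and so on, one exception-patching rule at a time.
]

def advise(facts):
    """Return the action of the last matching rule (later rules override)."""
    action = "no rule matches: observe"
    for condition, act in RULES:
        if condition(facts):
            action = act
    return action

print(advise({"meowing": True, "near_bowl": True}))
# -> feed
print(advise({"meowing": True, "near_bowl": True, "recently_fed": True}))
# -> ignore (probably habit)
```

Every refinement narrows one failure case while creating new interactions between rules, which is the "gigantic mess" trajectory described above.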
Dreyfus’s “Being-in-the-World” brings up many examples of the way the intelligence of daily life is informal, unconscious, and nonsymbolic: the way we maintain distance from other bodies, which is only roughly approximated by the idea of “personal space”, or the ways in which we live out masculinity and femininity.
“There are no beliefs to get clear about; there are only skills and practices. These practices do not arise from beliefs, rules, or principles, and so there is nothing to make explicit or spell out. We can only give an interpretation of the interpretation already in the practices.”
“Being and Time seeks to show that much of everyday activity, of the human way of being, can be described without recourse to deliberate, self-referential consciousness, and to show how such everyday activity can disclose the world and discover things in it without containing any explicit or implicit experience of the separation of the mental from the world of bodies and things.”
“The traditional approach to skills as theories has gained attention with the supposed success of expert systems. If expert systems based on rules elicited from experts were, indeed, successful in converting knowing-how into knowing-that, it would be a strong vindication of the philosophical tradition and a severe blow to Heidegger's contention that there is no evidence for the traditional claim that skills can be reconstructed in terms of knowledge. Happily for Heidegger, it turns out that no expert system can do as well as the experts whose supposed rules it is running with great speed and accuracy. Thus the work on expert systems supports Heidegger's claim that the facts and rules ‘discovered’ in the detached attitude do not capture the skills manifest in circumspective coping.”
The question then is whether the kind of work a philosopher supposedly does—formal, conscious, symbolic—is especially fundamental to intelligence. Like, is the mind in its basic function similar to an analytic philosopher or logician? In order to make artificial intelligence, should we try to develop a simulation of a logician?
But not even philosophers actually work in the schematic way of an AI based on formal logic...
I think the dichotomy between "formal, conscious, symbolic" and "informal, unconscious, non-symbolic" may be false. We will find out in a few hundred years when AI matures. Of course I don't think we will have an AGI based on first-order logic a la the 1960s efforts. On the other hand, deep neural networks are not that far from "informal, unconscious, non-symbolic", yet they are still built on formal and symbolic foundations.
Well, every dichotomy is false, probably even the dichotomy between dichotomies and non-dichotomies...
Dreyfus’s critique targets the first-order (or whatever) logic programs, and I don’t think neural nets are cognitivistic in the same way. But there’s also the point that until AIs live in the human world as persons, they will never have “human-like intelligence”.
I think it’s interesting to think of AI in a kind of post-Heideggerian way that includes the possibility that it can be desirable or necessary for us human beings to submit and “lower” ourselves to robotic or “artificial” systems, reducing the need for the AIs to actually attain humanistic ways of being. If self-driving cars are confused by human behaviors, we can forbid humans on the roads, let’s say. Or humans might find it somehow pleasant to act within a robotic system; maybe authentic Heideggerian being-in-the-world is also a source of anxiety (anxiety was a big theme for Heidegger, after all).