Bad Boy of AI: Hubert L. Dreyfus
- Baris Yalin Uzunlu (Alumni)
- Dec 2, 2024
- 5 min read

If I were to ask, ‘What is the biggest item on the technology agenda today?’, everyone would probably give the same answer in unison: artificial intelligence. And if I were to ask, ‘On what basic assumption does artificial intelligence research proceed?’, I think the vast majority would answer as follows: intelligence can be defined by symbols arranged in a logical order.
But what if this assumption is not true? What if intelligence is largely intuitive? Will all the years of work, all the effort, all the money spent have been in vain? Perhaps, but the general consensus is still that intelligence can be expressed mathematically, and research continues in this direction.
But not everyone has to agree, and not everyone does. As in every field, those who oppose the majority view are doomed to be ostracised and despised until they prove their arguments scientifically. The opposing view in artificial intelligence holds that intelligence is largely intuitive and cannot be described mathematically with symbols. This article is about Hubert L. Dreyfus, the most prominent defender of this view.
Let's talk about his biography very briefly: Hubert Dreyfus was born on 15 October 1929 in the United States. In 1951 he graduated with high honours from Harvard University; his thesis topic is very interesting: Causality and Quantum Theory. He received his master's and doctorate from the same university. In 1953-54 he studied at the University of Freiburg as a visiting student (as a Freiburg graduate, I cannot be proud enough). He wrote the famous RAND paper Alchemy and Artificial Intelligence (1965) and legendary books such as What Computers Can't Do (1972) and Mind over Machine (1986). He remains one of the first names that come to mind when it comes to the philosophy of artificial intelligence.
Dreyfus was never a popular figure in the world of artificial intelligence. Because he expressed his already unorthodox ideas in a very harsh and cynical style (he famously likened artificial intelligence research to alchemy), the harshest criticism came from his own colleagues. So much so that in science writer Pamela McCorduck's book Machines Who Think, which can be described as a history of artificial intelligence, Herbert Simon dismisses Dreyfus's arguments as ‘rubbish’. The people he worked with were even afraid to have lunch with him for fear that it might harm their careers. Let me relate an interesting incident: in Alchemy and Artificial Intelligence (1965), the paper he later expanded into What Computers Can't Do, Dreyfus noted that chess programs were still very weak and that there was still no program that could beat even an amateur chess player. Two years later, Richard Greenblatt, one of the legendary MIT programmers and hackers, wrote a program called Mac Hack, and a chess match was arranged between Dreyfus and the program. The result: Dreyfus lost the match (though he didn't play badly). In the same year, Mac Hack VI was made an honorary member of the United States Chess Federation.
In order to fully absorb what Hubert Dreyfus wants to tell us, it is useful to internalise the four basic assumptions on which artificial intelligence research rests. Let us explain them briefly (a toy code sketch follows the four assumptions):
Biological Assumption
The human brain is, like a computer, a complex neural network whose units switch on and off. The brain is like computer hardware and the mind is like computer software.
Psychological Assumption
The mind is a system in which bits of information are processed according to formal rules. It works by performing discrete computations (in the form of algorithmic rules) on discrete representations or symbols.
Epistemological Assumption
All knowledge can be formalised: information can be expressed in Boolean algebra for processing by computers, and all activity (whether by animate or inanimate objects) can be described mathematically in the form of predictive rules or laws.
Ontological Assumption
The world consists of self-contained facts that self-contained symbols can represent. Knowledge is always discrete, determinate, explicit and independent.
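To make this symbol-processing picture concrete, here is a minimal sketch in Python (my own illustration, not Dreyfus's or any actual system): a toy production system in which discrete symbols are manipulated by formal rules, exactly the kind of ‘intelligence’ the four assumptions describe. The facts and rules are hypothetical placeholders.

```python
# A toy production system: discrete symbols manipulated by formal rules.
# The facts and rules below are hypothetical; the point is the mechanism.
facts = {"socrates_is_a_man"}
rules = [
    ({"socrates_is_a_man"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

changed = True
while changed:  # forward-chain: keep applying rules until nothing new is derived
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # every piece of 'knowledge' here is discrete, explicit and symbolic
```

Everything in this little machine is explicit and formal. Dreyfus's objection, as we will see, is precisely that most human knowledge never takes this shape.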
Dreyfus attacked artificial intelligence studies mostly on the epistemological assumption. Let us elaborate on this assumption:
Epistemology is the ‘theory of knowledge’. The equivalent of knowledge in the digital world is data, expressed in ‘bits’. Each letter I press on my computer keyboard is a ‘byte’-sized piece of information stored on my computer's hard drive. The epistemological assumption holds that all knowledge can be formalised, that is, expressed in numbers and symbols. But in 1958, in his book ‘Personal Knowledge: Towards a Post-Critical Philosophy’, Michael Polanyi introduced the concept of tacit knowledge and claimed that the vast majority of our knowledge is tacit: knowledge we acquire through experience in our private and professional lives. Dreyfus likewise argues that most of the knowledge we acquire in our lives is tacit, that is, it cannot be formulated.
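A minimal illustration of that ‘byte-sized’ claim, in Python (assuming plain ASCII characters; multi-byte encodings such as UTF-8 would complicate the count):

```python
# One ASCII letter really is one byte: eight on/off bits.
letter = "A"
encoded = letter.encode("ascii")     # b'A' -- one byte on disk
print(len(encoded))                  # 1
print(format(encoded[0], "08b"))     # 01000001 -- the underlying bit pattern
```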
He gives cycling as an example. We have all ridden a bicycle as children: to stop the bicycle tipping to the left, we turn the handlebars to the left, and to the right to stop it tipping to the right. Balancing on the saddle is the result of controlling the handlebars in this way. Yet hardly any of us (and certainly no child) knows that, as Polanyi pointed out, the sharpness of the correction we steer for a given angle of imbalance is inversely proportional to the square of the bike's speed. But not knowing this does not make us bad cyclists, and knowing it does not make us good ones. One more example: we are all skilful at walking. But how many of us know which leg muscles work when we walk, or at what angle we should move our legs? And does knowing these things make us walk better?
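For what it's worth, Polanyi's explicit rule can even be written down. Here is a hedged Python sketch (the formula follows the balance condition of a steady turn; the constant and units are illustrative, and real bicycle dynamics are far messier). The irony, of course, is that no cyclist computes it:

```python
import math

def corrective_curvature(imbalance_rad: float, speed_ms: float, g: float = 9.81) -> float:
    """Polanyi's explicit cycling rule as a toy formula: the curvature of the
    corrective path grows with the angle of imbalance and falls with the
    square of the speed. Illustrative only."""
    return g * math.tan(imbalance_rad) / speed_ms ** 2

# The same small lean demands a far sharper correction at low speed.
print(corrective_curvature(math.radians(5), 2.0))   # slow ride: large curvature
print(corrective_curvature(math.radians(5), 8.0))   # fast ride: small curvature
```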
As Polanyi puts it: we can know more than we can tell.
Moreover, according to Dreyfus, people rely less and less on rules as their competence in a task increases. In 1980, in an article co-authored with his brother Stuart entitled ‘A Five-Stage Model of the Mental Activities Involved in Directed Skill Acquisition’, he described five stages on the path from novice to expert and analysed the role of intuition in decision-making. To preserve their meaning, I have kept the terms in the original language (a rough code sketch of the model follows the list):
Novice: The first stage of development. The starting point. At this stage the person needs instructions and his/her behaviour is shaped by predetermined rules. The person needs monitoring and outside intervention in order to develop.
Advanced Beginner: Second level. We can think of it as a novice who has gained a little more experience. The person understands some nuances and the parts begin to take shape in his/her head.
Competent Professional: Third level. The person still needs rules and guidelines, although not as much as before.
Proficient Professional: Fourth level. Analytical thinking gives way to natural reaction. One's actions are intuitive and fluid; most situations are recognised at a glance, and new ones are quickly adapted to. Aircraft pilots who reach this level speak of themselves as flying, rather than of flying the aircraft.
Expert: The last level. The person no longer needs rules. Instant decisions are made based entirely on experience and intuition.
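As flagged above, the model's central claim, that reliance on explicit rules falls as skill rises, can be put in a small Python table. The numeric values are my own hypothetical illustration, not from the Dreyfus brothers' paper:

```python
# Hypothetical encoding of the five-stage model: reliance on explicit rules
# (1.0 = fully rule-driven, 0.0 = purely intuitive). Numbers are illustrative.
STAGES = [
    ("Novice", 1.0),
    ("Advanced Beginner", 0.8),
    ("Competent Professional", 0.5),
    ("Proficient Professional", 0.2),
    ("Expert", 0.0),
]

for stage, rule_reliance in STAGES:
    print(f"{stage:24} rule reliance: {rule_reliance:.0%}")
```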
Artificial intelligence has been studied for more than 70 years, and we still have not reached even the second level. According to Dreyfus, intuition can never be programmed, so an expert-level artificial intelligence is not possible; at least, not by this approach. Unpopular as he was, time is proving Dreyfus right.