Published 25 November 2024
Artificial intelligence and human intelligence, with Daniel Andler
Daniel Andler
Member of the Académie des Sciences Morales et Politiques
First of all, could we come back to your definition of artificial intelligence?
Artificial intelligence (AI) is designed to perform specific tasks on demand. These can be as simple as sorting a list alphabetically, or as complex as playing checkers or chess. Discovering a winning strategy at checkers amounts to solving a problem. AI is a system that solves problems: a first problem, a second problem, a third problem, then yet another… Each time, a particular algorithm is involved. One problem, one algorithm; another problem, another algorithm. In this, artificial intelligence is defined in opposition to human intelligence: while the former solves problems, the latter manages concrete situations, situations that we experience.
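To make the “one problem, one algorithm” point concrete, here is a minimal sketch in Python. The function names and the toy move-evaluation interface are purely illustrative and are not drawn from the interview or from any particular AI system: each narrowly defined task gets its own dedicated procedure, and neither procedure can do anything beyond the task it was written for.

```python
# A minimal sketch of "one problem, one algorithm": each narrowly defined
# task is handled by its own dedicated procedure.

def sort_alphabetically(words):
    """One problem: putting a list of words in alphabetical order."""
    return sorted(words, key=str.lower)

def best_move(position, legal_moves, evaluate):
    """Another problem, another algorithm: pick the legal move that a given
    evaluation function scores highest (a crude stand-in for a checkers- or
    chess-playing program; 'evaluate' is a hypothetical scoring function)."""
    return max(legal_moves, key=lambda move: evaluate(position, move))

print(sort_alphabetically(["Chess", "ai", "Checkers"]))  # ['ai', 'Checkers', 'Chess']
```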
Can you explain what you mean by this?
The essential function of human intelligence is to deal with situations. A situation is what happens to a conscious human being at a given moment. It is exactly what is happening to me right now: my problem is to give you a quick answer to a complex question. I’m aware of this situation. I’m committed to it. It’s centered on me, with my subjectivity, with my personal stakes. The little intelligence I can mobilize at this moment serves to get me out of the situation as best I can. I can, for example, approach it by transforming the situation into a problem with a certain number of constraints, which means that I then have to find a solution among all the possible ones. One of them could be to give you a very short answer. In that case, I problematize the situation.
This is where we differ from AI, which is incapable of problematizing. AI solves problems, but is unable to initiate a problematization process. AI is subject to normativity: it must find a correct solution to the problem it has been given. It’s an objective normativity: there’s no room for discussion once we’ve checked whether the solution is right or wrong. When the AI system is “symbolic”, i.e. proceeds according to rules, it can also be objectively verified that it has correctly applied the rules at its disposal. This is not possible with the statistical machine learning systems that are so popular today.
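As a rough, hypothetical sketch of the contrast drawn here (the rules and the “learned” weight below are invented for illustration): with a rule-following system, every answer can be traced back to an explicit rule whose application can be checked; with a statistically trained model, we can only check whether a given output happens to be correct, not audit a rule it followed.

```python
# Hypothetical sketch contrasting a "symbolic" rule-following system with a
# statistically trained one. Rules and numbers are invented for illustration.

# Symbolic: explicit rules whose application can be audited step by step.
RULES = [
    ("ends in '?'", lambda s: s.strip().endswith("?"), "question"),
    ("starts with a polite cue", lambda s: s.lower().startswith(("please", "do ")), "request"),
]

def classify_symbolic(sentence):
    for name, condition, label in RULES:
        if condition(sentence):
            return label, f"applied rule: {name}"   # the rule used is inspectable
    return "statement", "applied default rule"

# Statistical: a weight "learned" from examples; we can test its outputs,
# but there is no rule application to verify.
def classify_statistical(sentence, weight=0.8):
    score = weight * sentence.count("?")             # stands in for a trained model
    return ("question" if score > 0.5 else "statement"), "no rule trace available"

print(classify_symbolic("Could you pass the salt?"))
print(classify_statistical("Could you pass the salt?"))
```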
We humans, faced with a given situation, are subject to a strong form of normativity: what we do, and how we do it, is more or less satisfactory (that is the normativity), but there is no strict objective criterion that lets us verify it, and we can always, in principle, discuss it, with ourselves or among ourselves. It’s the same kind of normativity we find in ethics or aesthetics: it is not a pure matter of preference, but neither is it something we can necessarily settle once and for all (even if in many cases we have little doubt).
On the other hand, there are situations that have no obvious problematization. For example, if you tell me that there is poverty in the world, or inequality, or wickedness, you are talking about a situation, not a problem in the sense I give the term, i.e. a precise question to which there is in principle a clear-cut solution, even if we don’t yet know it. No one imagines that we will be able to solve such a problem tomorrow, except in a vague sense of “problem” and “solution”, in which case the claim is purely verbal. World poverty, which would first have to be defined precisely and objectively, depends on thousands of factors, and addressing it calls for highly complex discussions between many people, many of whom will disagree on the solutions to adopt. These questions concern society as a whole, but individuals face them too, from the big choices (profession, partner, lifestyle…) down to the smallest: inviting the old aunt to a family party, speaking at a banquet, punishing one’s child… All of this depends at once on an immense quantity of considerations, emotions, memories and hopes, and ultimately on an uncertain decision. Artificial intelligence is not equipped to put itself in the place of a human being. For me, the specificity of human intelligence is knowing how to act “in situation”.
Isn’t another specificity of human intelligence compared to artificial intelligence that it is “linked” or “coupled” to a body?
For an individual, coping with a situation means facing it with his or her entire body. A body that has evolved over time, that has lived, that has a history. It is the deployment of this body over the course of our lives that drives us to fear losing it, to fear death, and therefore to acquire and develop the intelligence needed to preserve it.
Intelligence, in this sense, includes emotional and social skills. For a long time, emotions were considered an obstacle to rationality. Rationality is an irreplaceable tool for solving problems, but emotion can be a valuable ally, as is now widely recognized. In particular, it guides rationality in the situations that we, as social beings, constantly encounter. It is not impossible that artificial intelligence will incorporate certain aspects of the social sciences, but we must not assume that we will be able to integrate all our knowledge of the social sciences and humanities into AI systems, which are by definition incomplete and imperfect, still less our know-how. In this sense, AI systems will always be limited, so to speak, in their ability to absorb any type of situation.
Coming back to your distinction between “problem” and “situation”, it seems that we are a long way from what some people envisage when they talk about the development of a “super-intelligence”, or even a “conscious AI”…
I think super-intelligence is a belief born of science fiction. For me, it is a concept that only seems graspable, one that we end up finding plausible and intelligible because science fiction has been crystallizing this possibility for a century. The question remains, however, whether today’s spectacular advances in artificial intelligence are a sign that it will catch up with human intelligence tomorrow, and that once it is built, it will be indistinguishable from it. This is what I sometimes call “synthetic intelligence”. The answer is no, for two reasons.
The first is the one we’ve just mentioned, which has to do with the distinction between a “situation” and a “problem”. The second is linked to the notion of substitute artificial intelligence. Tomorrow, we won’t be replacing a captain with an artificial intelligence system, even if we are able to perform automatic landings. In the same way, we won’t be replacing CEOs with AI tomorrow, because AI will never be a perfect ersatz. If people continue to think that this replacement is possible, it’s because they haven’t assimilated the idea that there is no such thing as a perfect ersatz. Every ersatz is imperfect. There is no system, be it artificial intelligence, machine tools, autopilot systems or anything else, that can replace the original system in every case. That’s just the way it is; it’s an empirical lesson, and I don’t think we can find a single counter-example. Of course, there may be confirmation strategies that tell us “look, the system worked very well, probably better than a human agent”. But that is to forget the cases where this substitute system would completely collapse, because not all possible contingencies were taken into account when the ersatz was set up. There can be no such thing as a perfect ersatz. And if there is no perfect ersatz, then we cannot have an AI system that literally possesses human intelligence.
So I guess you have the same point of view about the possibility of a “conscious AI”?
We need to agree on what we mean by “consciousness”. It is both a concept and a phenomenon that we don’t know how to characterize clearly. The same goes for emotion, understanding and autonomy. All these concepts are mobilized when a human experiences a situation, not when an artificial intelligence solves a problem.
I think that the very fact of being able to raise the question of the possibility of a conscious AI has the consequence of maintaining the illusion of this possibility. Can a machine think? Can a machine have consciousness? Yes and no. I don’t think we can decide. We can put forward ideas, we can debate them, we can ask questions, but don’t expect us to come up with answers quickly. Those who do are not serious.
A few days before you spoke at our Annual General Meeting, we witnessed the “Scarlett Johansson episode”, in which she accused OpenAI of deliberately imitating her voice without her consent. How did you react to this episode?
Regarding this example, let me be very clear: people who pass off artificial robotic or multimedia systems as real people, by reproducing their voices for example, should be thrown into prison with no chance of getting out. It is a crime. Perhaps I’m being a little blunt here, but I’m using the words of the American philosopher Daniel Dennett. He was the first philosopher to really take artificial intelligence seriously and to say that it was something absolutely new, very important and very positive. Recently, he said that just as counterfeit money destroys real money, and the creators of counterfeit money should be put in jail, so those who manufacture fake people should be put in jail. Why? Because introducing fake people into human society will completely destroy its foundations. The trust we place in a human person rests on essentially the same springs as the trust we place in money. To pass off a non-person as a person is to run a great risk. That was Dennett’s view, and I share it.
In our previous discussion, we touched on a number of topics, including education, with which you are very familiar. Today, what do you think needs to change in our education systems to prepare future generations for AI?
In fact, I worked on the introduction of digital technology into schools at a time when schools were completely closed to it. Teachers were using digital technology like crazy at home, but certainly not at school. Now that digital technology has made its way into schools, the obvious answer, I think, is that we need a resource center for educators at various levels, so that they can train themselves and pass on a reasonably sound picture of the digital universe: the mechanisms at work, how attention is captured, what AI is, and so on. I think we absolutely need a culture, a literacy, of the digital world, including AI. “Literacy”, a term adopted from English (litéracie in French), refers not to advanced computer skills, but to the digital equivalent of what illiterate people lack in terms of reading and writing. What’s more, we need to catch up in terms of scientific literacy. What I’m proposing in terms of pedagogy is “educational bilingualism”: in other words, being just as capable of working with technology as without it. Why?
Firstly, because the tools are fallible. Students need to be trained in these tools, because they’ll be facing competition the day they leave school and university and will be at a disadvantage if they don’t know how to use AI resources, but they also need to learn to do without them. Because, from time to time, the tools are simply not there. What happens if someone’s phone battery dies and they can’t read a map?
This ties in with my second point, which is that pupils and students are severely lacking in mental techniques. Take ChatGPT as an example. Many today use it as a mental crutch, to help them in their thinking. Granted, you get an initial ChatGPT production that is interesting, which you then rework, putting your human intelligence, your critical mind, your culture, your originality and so on into it. But this is actually disastrous, because producing that first draft, which ChatGPT now provides for you, is precisely the moment when you learn to think. We’re all afraid of the blank page, but we have to overcome the kind of intellectual laziness that blocks us. If you don’t overcome it, it’s a muscle you don’t exercise. And a muscle you don’t exercise is a muscle you lose.
That’s why I’m in favor of a moderate and enlightened use of AI: AI should be used when it is really needed. To draw a parallel, we don’t go to the pharmacy to buy whatever is most expensive and most modern, telling ourselves “you never know, it might come in handy”. We go to the pharmacy when we really feel the need. We should go to the “AI pharmacy” for the same reasons.