
A Brief History of Artificial Intelligence

Text | Xu Yingjin

Recently, Mr. Nick published a new book, A Brief History of Artificial Intelligence (People's Posts and Telecommunications Press, December 2017). Since I have been working on the philosophy of artificial intelligence for nearly ten years, it was only natural that I buy a copy and read it right away. In this review I want to focus on Chapter 9, "Philosophers and Artificial Intelligence," which is written chiefly to pick a fight with philosophers, especially those who have things to say about AI. It rests largely on a deep-seated prejudice against philosophy common among science and engineering researchers in China: philosophers should stay off our turf.

As a researcher in philosophy of science and Western philosophy, I feel I do have something to say about whether philosophers are qualified to comment on scientific questions. I grant that philosophers do not have something to say about every scientific and engineering problem. On the question "why does the J-20 have a canard layout," for instance, a philosopher will not offer an opinion, at least not as a philosopher. But on questions in the philosophy of psychology, the philosophy of biology, and the philosophy of physics, such as "can evolutionary theory be applied to psychology" or "what is the nature of quantum mechanics," there is certainly something to be said, and scientists themselves may have no settled view on such questions. Many people ask: as someone trained in philosophy, what qualifies you to comment on these issues? The answer is simple: abroad, philosophers who work on these questions often hold two or more degrees. The neurophilosophers Paul Churchland and Patricia Churchland, for example, both have deep backgrounds in neuroscience. Even if you see a Chinese scholar with only a philosophy degree making a poor statement about a scientific issue, you cannot infer that the whole field is bad; the truth may simply be that the field's real experts are not in your circle of friends.

By the same logic, philosophers can certainly speak on artificial intelligence. The reason is simple: neither symbolic AI nor connectionist AI has settled on a basic definition of intelligence, and the field has no unified view on how AI should be done. Listening to philosophers' opinions is probably not a bad idea. One might object: the problem is that philosophers cannot write a single line of code, so why should we listen to them? Two answers suffice to rebut this objection.

[Photo: John Pollock]

First, how do you know philosophers cannot write programs? John Pollock, a heavyweight in epistemology, developed a reasoning system called "OSCAR," and the results were published in mainstream artificial intelligence journals. David Chalmers, the well-known philosopher of mind in Anglo-American circles, studied under Douglas Hofstadter, the AI luminary at Indiana University Bloomington, and has co-authored AI papers with his teacher. Does he not know how to write programs?

Second, must one be able to write programs in order to voice an opinion on artificial intelligence? Writing concrete code is a low-level operation, comparable to basic marksmanship in an army. Think about it: did Mao Zedong defeat Chiang Kai-shek's million-strong army because he could strategize, or because he was a crack shot? Undoubtedly the former. Philosophy stands to the low-level operations of artificial intelligence as Mao Zedong's strategic thought stands to tactical actions such as shooting.

[Photo: David Chalmers]

But as I said above, Mr. Nick clearly does not value philosophers as much as I do. In Chapter 9 he lists three philosophers whose paths have crossed artificial intelligence and critiques them one by one: Hubert Dreyfus, a phenomenologist who drew on Heidegger's intellectual resources to criticize symbolic AI; John Searle, a philosopher of language who tried to refute the possibility of strong AI with the "Chinese room" argument; and Hilary Putnam, an analytic pragmatist who tried to establish semantic externalism with the "brain in a vat" thought experiment. From an argumentative point of view, Nick's discussion runs an obvious risk of hasty generalization: can these three philosophers represent philosophers' views of AI in general? He says nothing, for example, about Chalmers and Pollock, mentioned above. And in the case of Searle's "Chinese room" thought experiment, he seems to ignore a basic fact: of the hundreds of English-language philosophy papers on the experiment that can be found online, most are criticisms of Searle. Is it not somewhat biased, then, to present Searle's view as that of the typical philosopher?

[Photo: John Searle]

Beyond the problem of hasty generalization, a second question arises: does Mr. Nick really understand the work of the philosophers he criticizes? Take Putnam. He was in fact a first-rate mathematician: his work on the analytical hierarchy is cited in the literature on the general theory of computation, and the "Davis-Putnam algorithm" in the computer science literature (which later evolved into the DPLL algorithm) embodies Putnam's own ideas. It is true that Putnam showed some hostility toward AI in his later years, but his early work on "multiple realizability" supplied the basic vocabulary in which the very topic of strong AI is framed. In Nick's telling, this side of Putnam as a friend of AI is largely erased, leaving only the stupid, cartoonish figure of a scientific layman.

[Photo: Hilary Putnam]

Still more misunderstandings arise in Nick's account of Dreyfus's thought. He seems utterly dismissive of the real intellectual background of Dreyfus's critique, namely Heidegger's philosophy, treating it as unsupported by any algorithmic explanation and as pure quackery, the stuff of snake-oil peddlers. To be honest, I am not 100% opposed to Nick on this point. As a researcher in Anglo-American analytic philosophy, I am sometimes driven mad by Heidegger's manner of expression. But unlike Mr. Nick, I do not doubt that Heidegger's philosophy says at least some very important things, although, unlike the mainstream of the "Heidegger circle," I believe these insights can and should be articulated more clearly. My positive view is that once Heidegger's ideas are "translated" into clearer terms, his insights will be absorbed far more easily by people working in the empirical sciences.

So how should this "translation" proceed? Roughly speaking, one of the fundamental points of Heidegger's phenomenology is that the Western philosophical tradition has concerned itself with "beings" (entities) rather than with "Being" itself, and his new philosophy aims to re-reveal this forgotten "Being." I admit these are pieces of Heideggerian jargon, genuinely incomprehensible without explanation. But in principle they are not unexplainable. Let me try to put them in plain language.

So-called "beings" are whatever can be explicitly objectified in linguistic representation: propositions, truth values, subjects, and objects are all beings in this sense. "Being" itself, by contrast, resists explicit objectification in language; think of the vague background knowledge you rely on when you use a metaphor. Can you spell out the background of a joke as plainly as you can count your ten fingers? Can you draw a sharp line between background knowledge and non-background knowledge? Here lies the predicament of traditional AI: genuine human intelligence relies on exactly this vague background knowledge, yet programmers cannot write programs without making everything explicit. This constitutes a deep tension between human phenomenological experience and the mechanical presuppositions of programming.
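To make the point concrete, here is a deliberately crude Python sketch (my own toy example, not any real system): a symbolic program "understands" a metaphor only if its background knowledge has been hand-entered into an explicit table, and it falls silent the moment the input steps outside that table.

```python
# Toy illustration of the explicitness problem in classical symbolic AI:
# every piece of "background knowledge" must be hand-listed as an explicit
# entry, and the program is mute outside that list.

# Hypothetical hand-coded background knowledge for interpreting metaphors.
BACKGROUND = {
    ("time", "money"): "time is a scarce resource to be budgeted",
    ("argument", "war"): "positions are attacked and defended",
}

def interpret_metaphor(topic: str, vehicle: str) -> str:
    """Interpret 'TOPIC is VEHICLE' by explicit table lookup."""
    try:
        return BACKGROUND[(topic, vehicle)]
    except KeyError:
        # A human hearer draws on open-ended tacit background knowledge here;
        # the explicit system simply has no entry and fails.
        return "interpretation unavailable: background knowledge not encoded"

print(interpret_metaphor("time", "money"))      # covered by the table
print(interpret_metaphor("life", "a journey"))  # a perfectly ordinary metaphor, but not encoded
```

A human never consults such a finite table; that is precisely the gap between explicit "beings" and the tacit background against which they show up.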

One might say: why should machines care about human phenomenological experience? Artificial intelligence is not cloning; can it not simply ignore how humans perceive the world? This rather superficial question deserves the following answer. Why do we want to build artificial intelligence in the first place? Is it not to be a helper for humans? Suppose you need a robot that can help you move house: don't you want it to obey your commands? Consider an order like "Hey, Applejack the robot, bring that thing over here, then go over there and fetch the other thing." Such a command is full of indexical expressions whose reference can only be fixed in a concrete context. How, then, could you not expect the robot to share your perception of the context? How could you tolerate a robot that perceives the world on an entirely different spatio-temporal scale? And since such a robot must possess situational awareness similar to a human's, don't some of the basic structures of human phenomenological experience revealed by Heidegger's philosophy also apply, in some sense, to real artificial agents?
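As a minimal illustration of the point about indexicals (an invented example, not a real robotics API), a command like the one above can only be grounded if speaker and robot share a context of jointly attended objects and places:

```python
# Toy sketch: indexicals such as "that" and "over there" denote nothing in
# isolation; they must be resolved against a context shared with the speaker.

from dataclasses import dataclass
from typing import Optional

@dataclass
class SharedContext:
    salient_object: Optional[str]   # e.g., what the speaker just pointed at
    indicated_place: Optional[str]  # e.g., where the speaker gestured

def resolve(command: str, ctx: SharedContext) -> str:
    """Ground the indexicals in a command against the shared context."""
    if ctx.salient_object is None or ctx.indicated_place is None:
        # Without jointly attended objects and places, the indexicals
        # in the command simply fail to refer.
        return "cannot comply: indexicals have no shared referent"
    return f"fetch {ctx.salient_object} and carry it to {ctx.indicated_place}"

# With shared situational awareness, 'that thing over there' gets a referent:
print(resolve("bring that thing over there", SharedContext("red box", "the workbench")))
# A robot on a different perceptual footing has nothing to ground the words in:
print(resolve("bring that thing over there", SharedContext(None, None)))
```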

[Photo: Martin Heidegger]

One might ask: where is the algorithmic structure that would realize these Heideggerian insights? How, for example, is the "possibility-structure of existence" to be characterized at the algorithmic level? And if no algorithmic structure can be given, isn't all this just idle talk? But think it through: this demand should be addressed to AI researchers, not to philosophers. Put differently, Heidegger's philosophical insights can be read as a distillation of human users' "user expectations" of AI, and the burden of meeting those expectations falls on the shoulders of AI workers. It is as if the military asked an aircraft developer to build a stealth fighter: how to design the aircraft is the developer's job, not the military's. You cannot accuse users of being unqualified to state "user requirements" just because they do not grasp the technical details, any more than you can call military representatives incompetent because they cannot write the design documents for a warplane themselves. If, like Mr. Nick, we dismiss the Heideggerians simply because they offer no algorithmic support, then by the same move we could dismiss every consumer-rights organization in the world: after all, how much do consumers know about technical details? And precisely because this reasoning leads to an absurd conclusion, we may infer, by reductio, that Mr. Nick is shifting onto philosophers a burden that belongs on the shoulders of AI researchers, passing the buck and blaming the innocent.

[Photo: Hubert Dreyfus]

Another key point of Dreyfus's critique is that AI researchers, even when they have no subjective interest in philosophy, objectively and unconsciously presuppose philosophical positions; and precisely because they lack philosophical discernment, the positions they unconsciously adopt are often rather crude. For example, the basic ideas of Minsky's frame theory had been anticipated long before by Husserl, and were in turn criticized by Husserl's student Heidegger. Mr. Nick disagrees with this assessment. In his view, philosophers are narcissists who believe that everyone else's ideas originate with them; Minsky could have arrived at his frames entirely independently of Husserl, and there is no need to bring up Husserl's name in this context.
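For readers unfamiliar with frames, here is a schematic rendering in Python (my own toy encoding, not Minsky's 1974 formalism): a frame captures a stereotyped situation as a bundle of slots with default values, which observation can then fill in or override.

```python
# Schematic rendering of the "frame" idea: a stereotyped situation is a
# bundle of slots with default values, and perception overrides the slots.

class Frame:
    def __init__(self, name, **defaults):
        self.name = name
        self.slots = dict(defaults)   # stereotyped expectations

    def instantiate(self, **observed):
        """Fill the frame with observations; unobserved slots keep defaults."""
        instance = dict(self.slots)
        instance.update(observed)
        return instance

# Stereotype: what one expects of a room before looking.
ROOM = Frame("room", walls=4, has_door=True, has_window=True)

# Observation overrides only what is actually seen; the rest is assumed.
office = ROOM.instantiate(has_window=False)
print(office)  # {'walls': 4, 'has_door': True, 'has_window': False}
```

The reliance on defaults that stand in for unexamined expectations is exactly the sort of horizon-structure that, on Dreyfus's reading, Husserl had already described and Heidegger had already criticized.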

I think Mr. Nick has seriously misread the philosopher's point here. Dreyfus was of course not claiming that Minsky designed his frames because he had read Husserl. His claim is rather that a certain misconception is so widespread in Western intellectual circles that philosophers and engineers alike are unwittingly shaped by it, even though the engineers themselves may be unaware that philosophers have entertained similar ideas. And precisely because philosophers have expressed this misconception more concisely and systematically, discussing the problem at the philosophical level is what makes a thorough diagnosis possible.

[Photo: Edmund Husserl]

Of course, my support for Dreyfus has limits; in a sense I am more radical than he is. I agree with his criticism of so-called symbolic AI, but I cannot share his enthusiasm for neural network technology. More precisely, neural networks are not flexible enough to switch between problem domains (a system that plays Go cannot be applied directly to stocks), nor can they handle the systematicity and creativity of syntactic generation (pure statistics cannot predict novel combinations of meaning). The cognitive scientist Zenon Pylyshyn and the recently deceased philosopher Jerry Fodor pressed this criticism as far back as 1988 (Mr. Nick barely mentions Fodor, the famous philosopher of cognitive science, in the entire book). In other words, even if I verbally grant the viability of the phrase "Heideggerian artificial intelligence," my estimate of how high that bar sits is more pessimistic than Dreyfus's.
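Here is a minimal numpy sketch of the domain-inflexibility point (a deliberately tiny toy of my own, nothing like AlphaGo): a network whose weights are shaped for 19x19 Go boards cannot even ingest a stock-price series, let alone transfer what it has learned.

```python
# Toy sketch: a network's learned weights are tied to a fixed input format,
# so a Go-playing net cannot even accept data from a different domain.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(19 * 19, 64))    # first layer sized for a 19x19 Go board

def forward(x: np.ndarray) -> np.ndarray:
    return np.tanh(x.reshape(1, -1) @ W)    # requires exactly 361 inputs

go_board = rng.integers(-1, 2, size=(19, 19))   # a Go position: works
print(forward(go_board).shape)                   # (1, 64)

stock_prices = rng.normal(size=250)              # a year of daily prices
try:
    forward(stock_prices)
except ValueError as e:
    print("shape mismatch:", e)                  # the net cannot even ingest it
```

Retraining on reshaped stock data would of course be possible, but that is a new model, not a transfer of Go-playing competence, which is the point at issue.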

[Photo: Jerry Fodor]

Leaving Dreyfus aside, Mr. Nick's misunderstandings of other heavyweight philosophers, the later Wittgenstein for instance, are equally surprising. He claims that Terry Winograd's blocks world is close in spirit to Wittgenstein's later philosophy of language. Anyone with even a slight knowledge of the history of analytic philosophy will laugh at this. The later Wittgenstein's "circle of friends" were the ordinary-language philosophers of the Austin and Strawson schools, whose favorite pastime was dragging pristine syntactic analysis back down onto the muddy, swamp-ridden ground of everyday pragmatics, and who kept a cool distance from anything axiomatic. Given the unmistakable axiomatic undertone of the blocks-world program, it would be far more accurate to read that approach as an analogue of the early Wittgenstein's Tractatus Logico-Philosophicus. From this one can see that although Mr. Nick may be familiar with some of the gossip about Wittgenstein's life, he has certainly not read the Philosophical Investigations, let alone my own book Mind, Language and Machine: A Dialogue between Wittgenstein's Philosophy and the Science of Artificial Intelligence (People's Publishing House, October 2013).

[Book cover: Mind, Language and Machine: A Dialogue between Wittgenstein's Philosophy and the Science of Artificial Intelligence]

Having said so much about Mr. Nick's misconceptions about philosophy, let me also mention his neglect of cognitive science, lest I appear too "philosophy-centric." Cognitive science was born in the West shortly after the Dartmouth Conference; 1956 was in fact the "twin year" of artificial intelligence and cognitive science. Yet throughout the book Mr. Nick barely mentions cognitive science. The research on "bounded rationality" by Herbert Simon, one of the founders of artificial intelligence, straddles artificial intelligence, cognitive psychology, and economics all at once; otherwise he would not have won both the Turing Award and the Nobel Prize in Economics. But the author seems indifferent to this side of Simon's work (take note, economists: Mr. Nick not only despises us philosophers, he looks down on you too). Thankfully, I do not dismiss cognitive science and economics the way Nick dismisses philosophy. Readers who want the relevant intellectual background may consult my popular-science book Cognitive Biases (Fudan University Press, December 2015).

[Book cover: Cognitive Biases]

Moreover, it is precisely Mr. Nick's failure to discuss the relationship between cognitive science and artificial intelligence that leaves the book's structure so scattered. His heavy emphasis on machine theorem proving crowds out other important topics such as Bayesian networks (the work of Judea Pearl, their inventor and a Turing Award winner, is likewise overlooked). His discussion of neural networks ignores the latest developments in deep learning (when he discusses AlphaGo, for instance, he briefly mentions some related techniques but gives no proper introduction to the work of the deep learning pioneer Geoffrey Hinton). And his introduction to the Turing machine, the foundation of the theory of computation, is postponed to Chapter 10. It is like a Japanese teacher who teaches the hardest honorifics in the first lesson and waits until the tenth to teach the basic fifty kana. Of course, the book has its distinctive contributions: some of the details disclosed in Chapter 4 on Japan's Fifth Generation Computer project cannot be found in ordinary Chinese-language books. If only the other chapters were arranged as sensibly!

Finally, I would like to add two further comments. First, philosophy is of course relevant to artificial intelligence, despite the fact that few philosophers are equipped to discuss AI. But the relationship between the two is first and foremost a normative proposition, not a factual one, and the latter cannot yield the former: from "late Qing China had very few foreign-language experts" you cannot derive "late Qing China did not need foreign-language experts." In the same way, from the conclusion Nick draws via Wikipedia's citation statistics, that "the existing philosophical literature has little connection to the AI literature," one cannot conclude that "AI does not need philosophers to weigh in."

Second, readers who really want a systematic understanding of the history of the interaction between AI and cognitive science should still read books written by philosophers of cognitive science, since they are trained across both philosophy and cognitive science and are therefore more immune to narrow disciplinary bias. In this area, besides brazenly recommending my own Mind, Language and Machine, I want to recommend Mind as Machine: A History of Cognitive Science by the senior British philosopher of cognitive science Margaret Boden, a must-read (unfortunately it has no Chinese edition; its author, incidentally, has a multidisciplinary background in computer science, medicine, and philosophy, knows many of the big names in the history of AI personally, and is a heavyweight among heavyweights). If readers compare Boden's book with Nick's, the difference in quality, roughly that between a "J-20" and a "J-7," will be immediately apparent. Yet Mr. Nick's book makes no mention of Boden's 1,631-page work.

[Book cover: Mind as Machine: A History of Cognitive Science]

I admit that my view that philosophy should intervene in artificial intelligence is not the mainstream voice in China's current public discourse. On the topic of AI, that mainstream is largely driven by the power of capital, and there will always be enormous tension between capital's impatience for returns and the philosopher's slow, deliberate habit of scrutinizing every step of an argument. But perhaps for precisely that reason, I feel it is all the more necessary for philosophers to speak up. The resolve of all who walk against the wind comes from the confidence that the wind will change. I do not lack that confidence.

Xu Yingjin

Professor, School of Philosophy, Fudan University

Chinese Chair of the "Philosophy of Artificial Intelligence" section, 2018 World Congress of Philosophy

·Ending·


This article was first published in The Paper's Shanghai Book Review (shrb.thepaper.cn).
