The Edge Question 3

I am continuing my read-through of responses to this year's Edge Question: What scientific idea is ready for retirement? The question was suggested by Laurie Santos. One of the most popular answers was a rejection of the computer metaphor for the brain. There were also complaints about the idea of human rationality as used in economic theory. Interestingly, the three responses rejecting the computer metaphor came from computer experts.


Schank feels Artificial Intelligence should be shelved as a goal, and that we should simply make better computer applications rather than try to mimic the human mind.


Roger Schank (Psychologist & Computer Scientist; Engines for Education Inc.; Author, Teaching Minds: How Cognitive Science Can Save Our Schools)


“It was always a terrible name, but it was also a bad idea. Bad ideas come and go but this particular idea, that we would build machines that are just like people, has captivated popular culture for a long time. Nearly every year, a new movie with a new kind of robot that is just like a person appears in the movies or in fiction. But that robot will never appear in reality. It is not that Artificial Intelligence has failed, no one actually ever tried. (There I have said it.)…The fact is that the name AI made outsiders to AI imagine goals for AI that AI never had. The founders of AI (with the exception of Marvin Minsky) were obsessed with chess playing, and problem solving (the Tower of Hanoi problem was a big one.) A machine that plays chess well does just that, it isn’t thinking nor is it smart….I declare Artificial Intelligence dead. The field should be renamed “the attempt to get computers to do really cool stuff” but of course it won’t be….There really is no need to create artificial humans anyway. We have enough real ones already.”


Brooks discusses the shortcomings of the computational metaphor that has become so popular in cognitive science.


Rodney A. Brooks (Roboticist; Panasonic Professor of Robotics (emeritus) , MIT; Founder, Chairman & CTO, Rethink Robotics; Author, Flesh and Machines)


“But does the metaphor of the day have impact on the science of the day? I claim that it does, and that the computational metaphor leads researchers to ask questions today that will one day seem quaint, at best….The computational model of neurons of the last sixty plus years excluded the need to understand the role of glial cells in the behavior of the brain, or the diffusion of small molecules affecting nearby neurons, or hormones as ways that different parts of neural systems affect each other, or the continuous generation of new neurons, or countless other things we have not yet thought of. They did not fit within the computational metaphor, so for many they might as well not exist. The new mechanisms that we do discover outside of straight computational metaphors get pasted on to computational models but it is becoming unwieldy, and worse, that unwieldiness is hard to see for those steeped in its traditions, racing along to make new publishable increments to our understanding. I suspect that we will be freer to make new discoveries when the computational metaphor is replaced by metaphors that help us understand the role of the brain as part of a behaving system in the world. I have no clue what those metaphors will look like, but the history of science tells us that they will eventually come along.”


Gelernter tackles the question of whether the Grand Analogy of computer and brain is going to help in understanding the brain.


David Gelernter (Computer Scientist, Yale University; Chief Scientist, Mirror Worlds Technologies; Author, America-Lite: How Imperial Academia Dismantled our Culture (and ushered in the Obamacrats))


“Today computationalists and cognitive scientists—those researchers who see digital computing as a model for human thought and the mind—are nearly unanimous in believing the Grand Analogy and teaching it to their students. And whether you accept it or not, the analogy is a milestone of modern intellectual history. It partly explains why a solid majority of contemporary computationalists and cognitive scientists believe that eventually, you will be able to give your laptop a (real not simulated) mind by downloading and executing the right software app. …”


Gelernter gives his reasons for this conclusion. (One) “The software-computer system relates to the world in a fundamentally different way from the mind-brain system. Software moves easily among digital computers, but each human mind is (so far) wedded permanently to one brain. The relationship between software and the world at large is arbitrary, determined by the programmer; the relationship between mind and world is an expression of personality and human nature, and no one can re-arrange it…. (Two) The Grand Analogy presupposes that minds are machines, or virtual machines—but a mind has two equally-important functions, doing and being; a machine is only for doing. We build machines to act for us. Minds are different: yours might be wholly quiet, doing (“computing”) nothing; yet you might be feeling miserable or exalted—or you might merely be conscious. Emotions in particular are not actions, they are ways to be. … (Three) The process of growing up is innate to the idea of human being. Social interactions and body structure change over time, and the two sets of changes are intimately connected. A toddler who can walk is treated differently from an infant who can’t. No robot could acquire a human-like mind unless it could grow and change physically, interacting with society as it did…. (Four) Software is inherently recursive; recursive structure is innate to the idea of software. The mind is not and cannot be recursive. A recursive structure incorporates smaller versions of itself: an electronic circuit made of smaller circuits, an algebraic expression built of smaller expressions. Software is a digital computer realized by another digital computer. (You can find plenty of definitions of digital computer.) “Realized by” means made-real-by or embodied-by. The software you build is capable of exactly the same computations as the hardware on which it executes. Hardware is a digital computer realized by electronics (or some equivalent medium)….”


He wants to stop the pretending. “Computers are fine, but it’s time to return to the mind itself, and stop pretending we have computers for brains; we’d be unfeeling, unconscious zombies if we had.”


Another model of human behavior got some criticism, again from within the fold. Levi wants to retire Homo economicus and instead base our understanding of human action on a realistic model of humans.


Margaret Levi (Political Scientist, University Professor, University of Washington & University of Sydney)


“Homo economicus is an old idea and a wrong idea, deserving a burial of pomp and circumstance but a burial nonetheless. …The theories and models derived from the assumption of homo economicus generally depend on a second, equally problematic assumption: full rationality….Even if individuals can do no better than “satisfice,” that wonderful Simon term, they might still be narrowly self-interested, albeit—because of cognitive limitations—ineffective in achieving their ends. This perspective, which is at the heart of homo economicus, must also be laid to rest. …The power of the concept of Homo economicus was once great, but its power has now waned, to be succeeded by new and better paradigms and approaches grounded in more realistic and scientific understandings of the sources of human action.”


The notion of rationality and Homo economicus got another thumbs down from Fiske, who wants to retire what she calls Rational Actor Models: the Competence Corollary.


Susan Fiske (Eugene Higgins Professor, Department of Psychology, Princeton University)


“The idea that people operate mainly in the service of narrow self-interest is already moribund, as social psychology and behavioral economics have shown. We now know that people are not rational actors, instead often operating on automatic, based on bias, or happy with hunches. Still, it’s not enough to make us smarter robots, or to accept that we are flawed. The rational actor’s corollary—all we need is to show more competence—also needs to be laid to rest. …People are most effective in social life if we are—and show ourselves to be—both warm and competent. This is not to say that we always get it right, but the intent and the effort must be there. This is also not to say that love is enough, because we do have to prove capable to act on our worthy intentions. The warmth-competence combination supports both short-term cooperation and long-term loyalty. In the end, it’s time to recognize that people survive and thrive with both heart and mind.”


It looks like we are on the way to changes in the metaphors we use for human thought and action. No metaphor is perfect (we cannot expect to find a perfect one), but there comes a time when an old or inappropriate metaphor becomes a drag on science.
