Category Archives: computer

Doing science backwards

A recent article (Trettenbrein, P. (2016). The Demise of the Synapse As the Locus of Memory: A Looming Paradigm Shift? Frontiers in Systems Neuroscience, 10) argues that what many consider settled science – that plastic changes to synapses are the basis of learning and memory – may not be correct. Thanks to Neuroskeptic for noting this paper (here).

“Actually, as of today, large parts of the field have concluded, primarily drawing on work in neuroscience, that neither symbolism nor computationalism are tenable and, as a consequence, have turned elsewhere. In contrast, classical cognitive scientists have always been critical of connectionist or network approaches to cognitive architecture.” Trettenbrein is in the classical cognitive scientist camp.

First, Trettenbrein assumes that the brain is a Turing machine. In other words, that the coinage of thought is symbols and that they are manipulated by algorithms (programs) that write to a stable memory and read from it. The brain is assumed to deal in representations/symbols as variables, stepwise procedures as programs, and random access memory – together giving a Turing machine. “The crucial feature of a Turing machine is its memory component: the (hypothetical) machine must possess a read/write memory in order to be vastly more capable than a machine that remembers the past only by changing the state of the processor, as does, for example, a finite-state machine without read/write memory. Thus, there must be an efficient way of storing symbols in memory (i.e., writing), locating symbols in memory (i.e., addressing), and transporting symbols to the computational machinery (i.e., reading). It is exactly this problem, argue Gallistel and King (2009), that has by and large been overlooked or ignored by neuroscientists. …

“Synaptic plasticity is widely considered to be the neurobiological basis of learning and memory by neuroscientists and researchers in adjacent fields, though diverging opinions are increasingly being recognized. From the perspective of what we might call “classical cognitive science” it has always been understood that the mind/brain is to be considered a computational-representational system. Proponents of the information-processing approach to cognitive science have long been critical of connectionist or network approaches to (neuro-)cognitive architecture, pointing to the shortcomings of the associative psychology that underlies Hebbian learning as well as to the fact that synapses are practically unfit to implement symbols.” So an assumption that we have a Turing machine dictates that it needs a particular type of memory, one which is difficult to envisage with plastic synapses.
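
To make Gallistel and King’s distinction concrete, here is a minimal sketch (my own, in Python; nothing in it comes from the paper): a finite-state machine can only ‘remember’ the past by being in one state or another, while a read/write memory stores symbols at addresses and reads them back later.

```python
def fsm_parity(bits):
    """Finite-state machine: remembers the past only as one of two states."""
    state = "even"
    for b in bits:
        if b == 1:
            state = "odd" if state == "even" else "even"
    return state

def readwrite_recall(pairs, query):
    """Read/write memory: write symbols to addresses, then read them back."""
    memory = {}                    # addressable store
    for address, symbol in pairs:
        memory[address] = symbol   # writing
    return memory.get(query)       # addressing + reading

print(fsm_parity([1, 0, 1, 1]))                     # 'odd'
print(readwrite_recall([("x", 3), ("y", 7)], "y"))  # 7
```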

I, like many others, believe science starts with observations and moves on to explanations of those observations; or, to state it differently, the theories of science are based on physical evidence. It is not science to start with a theoretical assumption and argue from that assumption what has to be. Science starts with ‘what is’, not ‘what has to be’.

Trettenbrein is not thinking that the brain resembles a computer in many ways (computer metaphor); he is thinking that it IS a computer (an actual Turing machine). If the brain is an actual computer, then it is a Turing machine, working in a stepwise fashion controlled by an algorithmic program. Then he reasons that the memory must be individual neurons that are – what? Perhaps they are addressable items in the random access memory. Well, it seems that he does not know. “To sum up, it can be said that when it comes to answering the question of how information is carried forward in time in the brain we remain largely clueless… the case against synaptic plasticity is convincing, but it should be emphasized that we are currently also still lacking a coherent alternative.” We are not clueless (although there are lots of unknowns), and the case for synaptic plasticity is convincing (it has convinced many, perhaps most, scientists) because there is quite a bit of evidence for it. But if someone starts with an assumption, then looks for evidence and finds it hard to produce – they are doing their science backwards.

Trettenbrein is not doing neuroscience, not even biology, in fact not even science. There are a lot of useful metaphors that we use to help understand the brain but we should never get so attached to them that we believe they can take the place of physical evidence from actual brains.

Just because we use the same words does not mean that they describe the same thing. A neurological memory is not the same as a computer memory. Information in the neurological sense is not the same as the defined information of information theory. Brain simulations are not real brains. Metaphors give resemblances not definitions.

Beta waves


Brain waves are measured for many reasons and they have been linked to various brain activities. But very little is known about how they arise. Are they the result or the cause of the activities they are associated with? How exactly are they produced at a cellular or network level? We know little about these waves.

One type of wave, the beta wave (18–25 Hz), is associated with consciousness and alertness. In the motor cortex beta waves are found when muscle contractions are isometric (contractions that do not produce movement) but are absent just prior to and during movement. They are increased during sensory feedback to static motor control and when movement is resisted or voluntarily suppressed. In the frontal cortex beta waves are found during attention to cognitive tasks directed at the outside world. They are found in alert attentive states, problem solving, judgment, decision making, and concentration. The more involved the cognitive activity, the faster the beta waves.

ScienceDaily reports a press release from Brown University on the work of Stephanie Jones and team, who are attempting to understand how beta waves arise. (here) Three types of study are used: MEG recordings, computer models, and implanted electrodes in animals.

The MEG recordings from the somatosensory cortex (sense of touch) and the inferior frontal cortex (higher cognition) showed a very distinct form for the beta waves, “they lasted at most a mere 150 milliseconds and had a characteristic wave shape, featuring a large, steep valley in the middle of the wave.” This wave form was recreated in a computer model of the layers of the cortex. “They found that they could closely replicate the shape of the beta waves in the model by delivering two kinds of excitatory synaptic stimulation to distinct layers in the cortical columns of cells: one that was weak and broad in duration to the lower layers, contacting spiny dendrites on the pyramidal neurons close to the cell body; and another that was stronger and briefer, lasting 50 milliseconds (i.e., one beta period), to the upper layers, contacting dendrites farther away from the cell body. The strong distal drive created the valley in the waveform that determined the beta frequency. Meanwhile they tried to model other hypotheses about how beta waves emerge, but found those unsuccessful.” The model was tested in mice and rhesus monkeys with implanted electrodes and was supported.
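
As a rough illustration of the two-drive idea (a toy sketch of my own; the shapes and amplitudes are invented, not taken from the Jones model), summing a weak, broad drive to the lower layers and a strong, brief (~50 ms) drive of opposite sign to the upper layers yields a waveform with a steep central valley:

```python
import numpy as np

t = np.linspace(0, 150, 151)                 # ms, one ~150 ms beta event

def gaussian(t, center, width):
    return np.exp(-0.5 * ((t - center) / width) ** 2)

proximal = 0.4 * gaussian(t, 75, 40)         # weak, broad, lower layers
distal = -1.0 * gaussian(t, 75, 50 / 6)      # strong, ~50 ms, upper layers

# the brief distal drive carves the steep valley in the middle of the wave
waveform = proximal + distal
print("valley at ~%d ms, depth %.2f" % (t[np.argmin(waveform)], waveform.min()))
```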

Where do the signals come from that drive the pyramidal neurons? The thalamus is a reasonable guess at the source. The thalamo-cortico-thalamic feedback loop makes those very contacts of thalamic axons within the cortical layers. The thalamus is known to produce signals with 50 millisecond duration. All of the sensory and motor information that enters the cortex (except smell) comes through the thalamus. It regulates consciousness, alertness and sleep. It is involved in processing sensory input and voluntary motor control. It has a hand in language and some types of memory.

The team is continuing their study. “With a new biophysical theory of how the waves emerge, the researchers hope the field can now investigate whether beta rhythms affect or merely reflect behavior and disease. Jones’s team, in collaboration with Professor of Neuroscience Christopher Moore at Brown, is now testing predictions from the theory that beta may decrease sensory or motor information processing functions in the brain. New hypotheses are that the inputs that create beta may also stimulate inhibitory neurons in the top layers of the cortex, or that they may saturate the activity of the pyramidal neurons, thereby reducing their ability to process information; or that the thalamic bursts that give rise to beta occupy the thalamus to the point where it doesn’t pass information along to the cortex.”

It seems very clear that understanding of overall brain function will depend on understanding the events at a cellular/circuit level; and that those processes in the cortex will not be understood without including other regions like the thalamus in the models.

Powerful Induction

In an article in Scientific American (here), Shermer points to ‘consilience of inductions’, or ‘convergence of evidence’. This is a principle that I have held for many, many years. Observations, theories and explanations are only trustworthy when they stop being a string of a few ‘facts’ and become a tissue or fabric of a great many independent ‘facts’.

I find it hard to take purely deductive arguments seriously – they are like rope bridges across a gap. They depend on every link in the argument and more importantly on the mooring points at either end. A causeway across the same gap does not depend on any single rock – it is dependable.

There is one theory that is put forward often and, to many, is ‘proven’: that brains can be duplicated with a computer. The reasoning goes something like this: all computers are Turing machines; any program on a Turing machine can be duplicated on any other Turing machine; brains are computers and therefore Turing machines, and so can be duplicated on other computers. I see this as a very thin linear string of steps.

Step one is a somewhat circular argument, in that being a Turing machine seems to be the definition of a ‘proper’ computer, and so yes, all of those computers are Turing machines. What if there are other machines that do something that resembles computing but that are not Turing machines? Step two is pretty solid – unless someone disproves it, which is unlikely but possible. The unlikely does happen; for example, someone did question the obvious ‘parallel lines do not meet’ to give us non-Euclidean geometry. Step three is the problem. Is the brain a computer in the sense of a Turing machine? People have said things like, “Well, brains do compute things so they are computers.” But no one has shown that every machine that can do some particular computation, by whatever means, is a Turing machine.
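
Step two, at least, is easy to make concrete. Here is a toy sketch (my own; the ‘flipper’ machine and its rule table are invented) of a generic interpreter that runs any machine handed to it as a transition table – the sense in which one Turing machine’s program can be run on another:

```python
def run_turing_machine(rules, tape, state="start"):
    """rules: (state, symbol) -> (new_state, new_symbol, move)."""
    tape, head = dict(enumerate(tape)), 0
    while state != "halt":
        symbol = tape.get(head, "_")               # '_' is the blank symbol
        state, tape[head], move = rules[(state, symbol)]
        head += 1 if move == "R" else -1
    return [tape[i] for i in sorted(tape)]

# a trivial machine: flip every bit, halt at the first blank
flipper = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}
print(run_turing_machine(flipper, list("0110")))  # ['1', '0', '0', '1', '_']
```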

No one can say exactly how the brain does its thinking. But there are good reasons to question whether the brain does things stepwise using algorithms. In many ways the brain resembles an analog machine using massively parallel processing. The usual answer is that any processing method can be simulated on a digital algorithmic machine. But there is a difference between duplication and simulation. No one claims that a Turing machine can duplicate any other machine via a simulation. In fact, it is probable that this is not possible.

This is the sort of argument, a deductive one, that is hardly worth making. We will get somewhere with induction. It takes time: many experimental studies, methods that have to be developed, models created and tested, etc. But in the end it will be believable – we can trust that understanding because it is the product of a web or fabric of independent inductions.

 

Simplifying assumptions

There is an old joke about a group of horse bettors putting out a tender to scientists for a plan to predict the results of races. A group of biologists submitted a plan to genetically breed a horse that would always win. It would take decades and cost billions. A group of statisticians submitted a plan to devise a computer program to predict races. It would cost millions and would only predict a little over chance. But a group of physicists said they could do it for a few thousand, and have the program finished in just a few weeks. The bettors wanted to know how they could be so quick and cheap. “Well, we have equations for how the race variables interact. It’s a complex equation but we have made simplifying assumptions. First we said let each horse be a perfect rolling sphere. Then…”

For over three decades, ideas about how the brain must work have come from studies of electronic neural nets. These studies usually make a lot of assumptions. First, they assume that the only active cells in the brain are the neurons. Second, that the neurons are simple (they have inputs which can be weighted, and if the sum of the weighted inputs is over a threshold, the neuron fires its output signals) and that there is only one type (or a very, very few different types). Third, that the connections between the neurons are structured only in very simple and often statistically driven nets. There is only so much that can be learned about the real brain from this model.

But on the basis of electronic neural nets and information theory, with, I believe, only a small input from the physiology of real brains, it became accepted that the brain used a ‘sparse coding’. What does this mean? At one end of a spectrum, the information held in a network depends on the state of just one neuron. This coding is sometimes referred to as grandmother cells, because one and only one neuron would code for your grandmother. At the other end of the spectrum is completely distributed storage, where the information depends on the state of all the neurons – your grandmother would be coded by a particular pattern of activity that includes the states of all the neurons. Sparse coding uses only a few neurons, so it sits near the grandmother-cell end of the spectrum.
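
A toy picture of that spectrum (my own sketch; all the numbers are invented) is the fraction of a population of neurons that carries a single concept under each scheme:

```python
import numpy as np

N = 1000
rng = np.random.default_rng(0)

grandmother = np.zeros(N); grandmother[42] = 1.0                  # one cell
sparse = np.zeros(N); sparse[rng.choice(N, 20, replace=False)] = 1.0
distributed = rng.uniform(0.05, 1.0, N)                           # every cell

for name, code in [("grandmother", grandmother), ("sparse", sparse),
                   ("distributed", distributed)]:
    used = np.count_nonzero(code) / N
    print(f"{name:12s} fraction of neurons used: {used:.3f}")
# prints 0.001, 0.020 and 1.000 -- the two ends and the sparse middle
```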

Since the 1980s it has generally been accepted that the brain uses sparse coding. But experiments with actual brains have been showing that it may not be the case. A recent paper (Anton Spanne, Henrik Jörntell. Questioning the role of sparse coding in the brain. Trends in Neurosciences, 2015; DOI: 10.1016/j.tins.2015.05.005) argues that it may not be sparse after all.

It was assumed that the brain would use the coding system that gives the lowest total activity without losing functionality. But that is not what the brain actually does. It has higher activity than it theoretically needs. This is probably because the brain sits in a fairly active state even at rest (a sort of knife edge) from which it can quickly react to situations.

“If sparse coding were to apply, it would entail a series of negative consequences for the brain. The largest and most significant consequence is that the brain would not be able to generalize, but only learn exactly what was happening on a specific occasion. Instead, we think that a large number of connections between our nerve cells are maintained in a state of readiness to be activated, enabling the brain to learn things in a reasonable time when we search for links between various phenomena in the world around us. This capacity to generalize is the most important property for learning.”

Here is the abstract:

Highlights

  • Sparse coding is questioned on both theoretical and experimental grounds.
  • Generalization is important to current brain models but is weak under sparse coding.
  • The beneficial properties ascribed to sparse coding can be achieved by alternative means.

Coding principles are central to understanding the organization of brain circuitry. Sparse coding offers several advantages, but a near-consensus has developed that it only has beneficial properties, and these are partially unique to sparse coding. We find that these advantages come at the cost of several trade-offs, with the lower capacity for generalization being especially problematic, and the value of sparse coding as a measure and its experimental support are both questionable. Furthermore, silent synapses and inhibitory interneurons can permit learning speed and memory capacity that was previously ascribed to sparse coding only. Combining these properties without exaggerated sparse coding improves the capacity for generalization and facilitates learning of models of a complex and high-dimensional reality.

New method - BWAS

There is a report of a new method of analyzing fMRI scans – using enormous sets of data and giving very clear results. Brain-wide association analysis (BWAS for short) was used in a comparison of autistic and normal brains in a recent paper (citation below).

The scan data is divided into 47,636 small areas of the brain, voxels, and these are then analyzed in pairs, each voxel with every other voxel. This gives 1,134,570,430 data points for each brain. This sort of analysis has been done in the past, but only for restricted areas of the brain, not the whole brain. The method was devised by J. Feng of the Department of Computer Science, University of Warwick.
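
The pair count is easy to check – correlating every voxel with every other voxel gives n*(n-1)/2 unordered pairs (a quick sketch):

```python
# checking the arithmetic in the text: pairwise connectivity of n voxels
n = 47636
print(n * (n - 1) // 2)   # 1134570430 -- the figure reported above
```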

This first paper featuring the method shows its strengths. Cheng and others used data from over 900 existing scans from various sources that had matched autistic and normal pairs. The results are in the abstract below. (This blog does not usually deal with information on autism and similar conditions but tries to keep to normal function; I am not a physician. So the results are not being discussed, just the new method.)

“A flow chart of the brain-wide association study [termed BWAS, in line with genome-wide association studies (GWAS)] is shown in Fig. 1. This ‘discovery’ approach tests for differences between patients and controls in the connectivity of every pair of brain voxels at a whole-brain level. Unlike previous seed-based or independent components-based approaches, this method has the advantage of being fully unbiased, in that the connectivity of all brain voxels can be compared, not just selected brain regions. Additionally, we investigated clinical associations between the identified abnormal circuitry and symptom severity; and we also investigated the extent to which the analysis can reliably discriminate between patients and controls using a pattern classification approach. Further, we confirmed that our findings were robust by split data cross-validations.” FC = functional connectivity; ROI = region of interest.

The results are very clear and have very good statistical significance.

Abstract: “Whole-brain voxel-based unbiased resting state functional connectivity was analysed in 418 subjects with autism and 509 matched typically developing individuals. We identified a key system in the middle temporal gyrus/superior temporal sulcus region that has reduced cortical functional connectivity (and increased with the medial thalamus), which is implicated in face expression processing involved in social behaviour. This system has reduced functional connectivity with the ventromedial prefrontal cortex, which is implicated in emotion and social communication. The middle temporal gyrus system is also implicated in theory of mind processing. We also identified in autism a second key system in the precuneus/superior parietal lobule region with reduced functional connectivity, which is implicated in spatial functions including of oneself, and of the spatial environment. It is proposed that these two types of functionality, face expression-related, and of one’s self and the environment, are important components of the computations involved in theory of mind, whether of oneself or of others, and that reduced connectivity within and between these regions may make a major contribution to the symptoms of autism.”

Cheng, W., Rolls, E., Gu, H., Zhang, J., & Feng, J. (2015). Autism: reduced connectivity between cortical areas involved in face expression, theory of mind, and the sense of self. Brain. DOI: 10.1093/brain/awv051

Mind to mind transfer

 

I read the abstract of a new paper (see citation below) about brain-to-brain communication. While I read the title, I was thinking that we already do brain-to-brain communication – it’s called language. And sure enough, the first sentence of the abstract said, “Human sensory and motor systems provide the natural means for the exchange of information between individuals, and, hence, the basis for human civilization.” What Grau and others were aiming for, and succeeded in doing, was to bypass language, motor output and peripheral sensory input without invading the skulls – from conscious thought to conscious thought via computer-based hardware. “The main differences of this work relative to previous brain-to-brain research are a) the use of human emitter and receiver subjects, b) the use of fully non-invasive technology and c) the conscious nature of the communicated content. Indeed, we may use the term mind-to-mind transmission here as opposed to brain-to-brain, because both the origin and the destination of the communication involved the conscious activity of the subjects.” Their abstract is below.

But let’s look at how we do mind-to-mind now. We have to share a language, and to a large extent that means we also have to share a good deal of a culture. For normal human communication, it takes a fairly rich language and culture. In the case of the paper’s experiment, the language was patterns of 1s and 0s. The sender and his equipment output the pattern, and the receiver and his equipment took it in. And for the patterns to be meaningful required a cultural agreement on their meaning.
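
A toy sketch of the point (the codebook below is invented; the actual experiment used its own encoding of the transmitted words): the bit patterns carry meaning only because both ends share the convention.

```python
# a shared "language": an agreed mapping between words and bit strings
CODEBOOK = {"hola": "01110", "ciao": "01001"}
DECODE = {bits: word for word, bits in CODEBOOK.items()}

def send(word):
    return CODEBOOK[word]     # emitter's side: word -> bit stream (via BCI)

def receive(bits):
    return DECODE[bits]       # receiver's side: bit stream -> word (via CBI)

print(receive(send("hola")))  # 'hola' -- only because the codebook is shared
```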

It is the language/culture part that is important to the communication. It is as if I utter a phrase which has meaning to me, you hear the phrase, and with it I seem to reach into your brain to pick out that meaning and put it into your stream of consciousness. Without the shared language and culture this trick would not be possible. If anyone thinks that his thoughts can be loaded into a computer and delivered to someone else’s brain by some means that avoids a shared language/culture of some type – he will be disappointed.

Abstract:

Human sensory and motor systems provide the natural means for the exchange of information between individuals, and, hence, the basis for human civilization. The recent development of brain-computer interfaces (BCI) has provided an important element for the creation of brain-to-brain communication systems, and precise brain stimulation techniques are now available for the realization of non-invasive computer-brain interfaces (CBI). These technologies, BCI and CBI, can be combined to realize the vision of non-invasive, computer-mediated brain-to-brain (B2B) communication between subjects (hyperinteraction). Here we demonstrate the conscious transmission of information between human brains through the intact scalp and without intervention of motor or peripheral sensory systems. Pseudo-random binary streams encoding words were transmitted between the minds of emitter and receiver subjects separated by great distances, representing the realization of the first human brain-to-brain interface. In a series of experiments, we established internet-mediated B2B communication by combining a BCI based on voluntary motor imagery-controlled electroencephalographic (EEG) changes with a CBI inducing the conscious perception of phosphenes (light flashes) through neuronavigated, robotized transcranial magnetic stimulation (TMS), with special care taken to block sensory (tactile, visual or auditory) cues. Our results provide a critical proof-of-principle demonstration for the development of conscious B2B communication technologies. More fully developed, related implementations will open new research venues in cognitive, social and clinical neuroscience and the scientific study of consciousness. We envision that hyperinteraction technologies will eventually have a profound impact on the social structure of our civilization and raise important ethical issues.

Note: Some in the press have been calling this transfer telepathy. It is not telepathy!!


Grau C, Ginhoux R, Riera A, Nguyen TL, Chauvat H, Berg M, Amengual JL, Pascual-Leone A, & Ruffini G (2014). Conscious Brain-to-Brain Communication in Humans Using Non-Invasive Technologies. PLoS ONE, 9(8). PMID: 25137064

The John paper 3

 

This is the third post about this paper: E. Roy John, The neurophysics of consciousness, Brain Research Reviews, 39 (2002), pp. 1-28. One of the things that stands out in the paper is the idea of a ‘field’ theory of consciousness. John takes time to look at a quantum theory and the Tononi-Edelman theory to illustrate other ways of looking at non-local brain activity.

“Other contemporary theorists have recognized the need to focus upon the system rather than its individual elements. An electrical field must be generated by synchronized oscillations and the resulting inhomogeneity of ionic charge distribution within the space of the brain. Llinas and his colleagues suggest that consciousness is inherent in a synchronized state of the brain, modulated by sensory inputs. Libet proposed that subjective experience may arise from a field emerging from neural synchrony and coherence, not reducible to any known physical process. Squires suggested that consciousness may be a primitive ingredient of the world, i.e. not reducible to other properties in physics, and includes the qualia of experience. Others have proposed that consciousness arises within a dynamic core, a persisting reverberation of interactions in an ensemble of neurons which maintains unity even if its composition is constantly changing.”

He leans towards the Tononi-Edelman picture and the emergence of consciousness from global brain activity. “This paper illustrates the increasingly recognized need to consider global as well as local processes in the search for better explanations of how the brain accomplishes the transformation from synchronous and distributed neuronal discharges to seamless global subjective awareness.”

John says that consciousness is analog in nature (or a combination of digital local activity and analog non-local activity). What exactly is meant by analog mechanisms? An analog is a mimic of the system you want to solve or understand. The elements, and the relations between elements, are all represented in the analog. Analogs are physical copies. One of the most famous is the analogy between an electrical circuit and a hydraulic circuit. There are pairs of elements and the same forms of equation describing their behavior: voltage is like the head of pressure, and so on. In teaching it is used both ways, as some things are easier to comprehend in water and some in electrical current. The same elements and equations can be used in a mechanical analog or a pneumatic one. Analogs can be used to make calculations. The analog is a real physical system with real behavior, and its values are continuous rather than digital. One of the great advantages of analog computers (electrical analogs of other systems, built anew on a patch board for each problem or calculation) was that they did iterative problems in a flash. Digital computers soon were able to do iteration very quickly, and patch boards became a thing of the past. The brain, however, is not a lightning-fast thing. If it were doing iteration, it would take significant time.
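
Here is a minimal sketch of the analog idea (the values are arbitrary, my own): a discharging capacitor and a draining water tank obey the same equation, dx/dt = -x/tau, so each is a physical ‘calculation’ of the other, settling continuously with no iteration.

```python
import numpy as np

def decay(x0, tau, t):
    """Solution of dx/dt = -x/tau for both systems."""
    return x0 * np.exp(-t / tau)

t = np.linspace(0, 10, 6)
# voltage plays the role of the head of pressure: same equation, same curve
print("capacitor voltage:", np.round(decay(12.0, 2.0, t), 2))
print("tank water level :", np.round(decay(12.0, 2.0, t), 2))
```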

The brain is faced with a large number of semi-independent pieces of information from the senses, the memory, previous predictions, motor programs, knowledge of the world, on-going tasks/goals etc. These pieces of information are held by a huge number of cells. These cells have contact with many others, and that contact is specific to each pair of cells. Step-by-step algorithms are not going to make a moment of perception out of that mass in less than several minutes, maybe much longer, because much of the work is iterative. But that mass, using massively parallel and overlapping feedback loops, can make an analog of the world in that moment in ‘a flash’. Signals may fly in all directions, but the whole thing will only be stable in a few best-fit scenarios, and once at a stable point it will stay there. Presto: a global perception, including in its scope all the constraints and also not losing or degrading the original pieces of information (the qualia and feelings).
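
A toy Hopfield-style network (my own illustration; John does not present this model) shows the flavour: parallel feedback settles a corrupted state back into a stored best-fit pattern in a few sweeps, with no step-by-step algorithm iterating over the parts.

```python
import numpy as np

rng = np.random.default_rng(1)
patterns = rng.choice([-1, 1], size=(3, 50))   # three stored attractors
W = patterns.T @ patterns / 50.0               # Hebbian weights
np.fill_diagonal(W, 0.0)

state = patterns[0].copy()
state[:15] *= -1                               # corrupt 30% of the units
for _ in range(5):                             # a few parallel feedback sweeps
    state = np.where(W @ state >= 0, 1, -1)

# overlap of 50 means the state has fallen back into the stored pattern
print("overlap with stored pattern:", int(state @ patterns[0]))
```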

But then there is the ‘almighty leap’: how is this perception shared, and how are we made consciously aware of it? The ‘hard problem’ is not the qualia but the awareness. John skips over this. His explanation, in its shortest form, is:

“CONSCIOUSNESS EMERGES FROM RESONATING ORGANIZED ENERGY: Simultaneously, the global perception is projected to the consciousness system… subjective awareness of the percept emerges as a property of an electrical field resonating throughout the consciousness system.”

Now, many neuroscientists are convinced that the ‘unitary self’ is an illusion created from many selves, such as an internal-sense self, a motor self, an external-sense self and so on. And why do we have an illusory unitary self? Well, to be aware of consciousness, to have subjective awareness. But if the subjective self is an illusion, why can’t the awareness be an illusion too?

After all, every cell in the created analog is now, in effect, in possession of the only part of the analog that it can possess. You could say it is ‘aware’ of the analog from its point of view. There is nothing about the new perceptual moment that it needs to be told.

But if self and awareness are illusions, what is this illusion in aid of? I would guess that it is needed for a useful memory. One perceptual moment has to be tied to another to construct a narrative, a biographical narrative, so that there can be longer-term continuity in our thought and action.

That is the end of my posting on the John paper.

 

Can we upload our brains to computers?

 

Some years ago Chris Chatham posted a look at the differences between a brain and a computer (Chatham post), and recently Steven Donne revisited the idea in a post (Donne post). These are both interesting reading.

I part company with Donne on several points. The first has to do with the definition of ‘computer’. Some people define ‘computer’ so widely that it includes anything that computes anything. In that case the brain is a computer and there is no metaphor to examine. On the other hand, it is reasonable to include more than the stock home or business computer. Super-computers, robotic computers and those that are just around the corner are metaphor material. Donne brings up computers that are built precisely to mimic and explore the brain – simulations of the brain. As a metaphor this is lame. If I build a replica of something, there is nothing to be gained in understanding by a metaphor between the original and the replica. So we are left with brain simulations in fairly conventional but advanced computers, or some more faithful replica of the brain.

Second, Donne feels that there will not be a problem with size, and appeals to the idea that computing power increases exponentially, so it cannot be all that long before a computer could be built that would handle a brain simulation in real time. He points to 1 second of brain activity having been simulated. Well, that should be ‘sort-of-simulated’. The 1 second took 40 minutes to compute (a factor of 2400). The brain activity in the simulation was a simple network exercise – not really brain activity, missing the complications of real brain physiology (a factor of ?). The amount of brain simulated was small – 1.73 billion neurons simulated with about 83,000 processors (a factor of 50). 10.4 trillion synapses were modeled (a factor of 100+). I assume that the glial calcium-ion communication, magnetic and chemical fields and so on were not part of the simulation (a factor of ?). So I am assuming that something like 5 million times the size of this simulation would be needed for a realistic one, and that would be 40-50 years of Moore’s Law-type exponential growth at a bare minimum. But this would not give a brain-receiving computer that could accept the upload of a real human brain. That is a much bigger problem than a standard simulation. There would have to be an understanding of how and where all information was held in that human brain, a way to ‘read it out’ and place it in the simulation so that it has the same usefulness. Are we going to understand the brain at that level within 50 years? Maybe, but I doubt it.
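
The Moore’s Law arithmetic is easy to reproduce (using the post’s own rough factor of 5 million and one doubling every two years):

```python
import math

factor = 5_000_000                     # the post's overall scale estimate
doublings = math.log2(factor)          # ~22.3 doublings needed
print(f"~{doublings:.1f} doublings = ~{2 * doublings:.0f} years")  # ~45 years
```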

Thirdly, Donne says that if it is possible, it will happen. I think that is possible – once. But the idea that anyone who wants to be immortal could just have their brain uploaded on death is plainly silly. It would be too expensive to do more than a few times, even if it were possible. I can imagine what would happen the first time there was not enough ‘power’ for both the living people and the simulated brains: the power to some simulations would be switched off. It seems the height of arrogance for someone to assume that they have the right to be immortal and to have future generations honour that right. The people at a time more than 50 years into the future will have more pressing problems, given current predictions of climate change, population growth, resource depletion, pollution, more destructive wars and whatever else is in store. Immortal brains in simulations seem to me part of the optimistic, myopic vision of science fiction lovers – futures of space travel, infinite resources, even time travel. Humans will be lucky to live through the century without being reduced to a rough and hard dark age.

 

Knowing your grandmother

There is a spectrum of ways in which the brain may hold concepts, ranging from very localized to very distributed, and there is little agreement on where along that spectrum various concepts are held. At one end is the ultimate local storage: a single ‘grandmother’ neuron that recognizes your grandmother no matter how she is presented (words, images, sounds, actions etc.). This single cell, if it exists, would literally be the concept of grandmother. At the other end of the spectrum is a completely distributed storage, where a concept is a unique pattern of activity across the whole cortex, with every cell being involved in the patterns of many concepts. Both of these extremes have problems. Our concept of grandmother does not disappear if part of the cortex is destroyed – no extremely small area has ever been found that obliterates grandma. On the other hand, groups of cells have been found that are relatively tuned to one concept. When we look at the extreme of distributed storage, there is the problem of localized specialties such as the fusiform face area. And more telling is the problem of a global pattern being destroyed if multiple concepts are activated at the same time. Each neuron would be involved in a significant fraction of all the concepts, and so there would be confusion if a dozen or more concepts were part of a thought/memory/process. As we leave the extremes, the local storage becomes a larger group of neurons with more distribution, and the distributed storage becomes patterns in smaller groups of neurons.
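
A toy lesion test (my own construction, with invented numbers) makes the fragility point: destroy a 10% patch of units and ask how much of the ‘grandmother’ representation survives. A single grandmother cell is all-or-nothing; a distributed pattern degrades gracefully.

```python
import numpy as np

N = 1000
rng = np.random.default_rng(2)
one_hot = np.zeros(N); one_hot[7] = 1.0          # the grandmother cell
distributed = rng.choice([-1.0, 1.0], size=N)    # whole-population pattern

lesion = np.arange(100)        # a 10% patch that happens to include cell 7
for name, code in [("grandmother", one_hot), ("distributed", distributed)]:
    damaged = code.copy(); damaged[lesion] = 0.0
    preserved = damaged @ code / (code @ code)   # fraction of signal left
    print(f"{name:12s} preserved after lesion: {preserved:.2f}")
# grandmother: 0.00 (concept gone); distributed: 0.90 (graceful degradation)
```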


The idea of localized concepts was thought to be improbable in the ’70s, and the grandmother cell became something of a joke. The type of network that computer scientists were creating became the assumed architecture of the brain.


Computer simulations have long used a ‘neural network’ called PDP, or parallel distributed processing. This is not a network made of neurons, in spite of the name, but a mathematical network. Put extremely simply: there are layers of units; each unit has a value for its level of activity; the units have inputs from other units and outputs to other units; and the connections between units can be weighted in their strength. The bottom layer of units takes input from the experimenter, and this travels through ‘hidden’ layers to an output layer, which reveals the output to the experimenter. Such a setup can learn and compute in various ways that depend on the programs that control the weightings and other parameters. This PDP model has favoured the distributed network idea when modeling actual biological networks. Some researchers have made a PDP network do more than one thing at once (but ironically this entails having more localization in the hidden layer). This might seem a small problem for PDP, but PDP does suffer from a limitation that makes rapid one-trial learning difficult; that type of learning is the basis of episodic memory. Because each unit in PDP is involved in many representations, any change in weighting affects most of those representations, and so it takes many iterations to get the new representation worked into the system. Rapid one-trial learning in PDP destroys previous learning; this is termed catastrophic interference, or the stability-plasticity dilemma. The answer has been that the hippocampus may have a largely local arrangement for its fast one-trial learning, while the rest of the cortex can have a dense distribution. But there is a problem: when a fully distributed network tries to represent more than one thing, it has problems of ambiguity. This is a real problem because the cortex does not handle one concept at a time – in fact, it handles many concepts at once, and often some are novel. There is no way that thought processes could work with this kind of chaos. This can be overcome in PDP networks, but again the fix is to move towards local representations.
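
A toy demonstration of the ambiguity problem (my own construction, in the spirit of the Bowers et al. paper below): superimpose a few concepts and ask which stored concepts the blend is consistent with. Local codes keep the active set identifiable; dense distributed codes typically also match spurious concepts.

```python
import numpy as np

rng = np.random.default_rng(3)
n_concepts, N = 20, 20
local = np.eye(n_concepts)                           # one unit per concept
dense = (rng.random((n_concepts, N)) < 0.5).astype(float)

for name, codes in [("local", local), ("dense", dense)]:
    blend = np.clip(codes[0] + codes[1] + codes[2], 0, 1)  # three at once
    # a concept "matches" if all of its active units appear in the blend
    matches = [i for i, c in enumerate(codes) if np.all(blend >= c)]
    print(f"{name:6s} blend matches concepts: {matches}")
# local recovers exactly [0, 1, 2]; the dense blend typically also
# "matches" extra concepts -- the superposition is ambiguous
```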


This is the abstract from a paper to be published soon (citation below).

A key insight from 50 years of neurophysiology is that some neurons in cortex respond to information in a highly selective manner. Why is this? We argue that selective representations support the co-activation of multiple “things” (e.g., words, objects, faces) in short-term memory, whereas nonselective codes are often unsuitable for this purpose. That is, the co-activation of nonselective codes often results in a blend pattern that is ambiguous; the so-called superposition catastrophe. We show that a recurrent parallel distributed processing network trained to code for multiple words at the same time over the same set of units learns localist letter and word codes, and the number of localist codes scales with the level of the superposition. Given that many cortical systems are required to co-activate multiple things in short-term memory, we suggest that the superposition constraint plays a role in explaining the existence of selective codes in cortex.


The result is that our model of the brain moves a good way along the spectrum toward the grandmother-cell end. And lately there has been a new method to study the brain. Epilepsy patients have electrodes placed in their brains to monitor seizures prior to surgery. These patients can volunteer for experiments while waiting for their operations. So it is now possible to record the activity of small groups of neurons in awake, functioning human beings. And something very similar to grandmother cells has been found. Some electrodes respond to a particular person – Halle Berry and Jennifer Aniston were two of the first concepts found to each have their own local patch of a hundred or so neurons. These cells responded not just to various images, but to written names and voices too. It happened with objects as well as people. This home of concepts held as small local groups of neurons has been observed in the area of the hippocampus.


The idea that the brain is one great non-localized network has also suffered from the results of brain scans. Areas of the brain (far from the hippocampus) appear to be specialized. Very specific functions can be lost completely by the destruction of smallish areas of the brain as a result of stroke. The old reasons for rejecting a localized brain organization are disappearing, while the arguments against a globally distributed organization are growing. This does not mean that there are no distributed operations, or that there are unique single cells for each concept – it just means that we are well toward the local end of the spectrum.


Rodrigo Quian Quiroga, Itzhak Fried and Christof Koch wrote a recent piece in the Scientific American (here) in which they look at this question and explain what it means for memory. The whole article is very interesting and worth looking at.

“Concept cells link perception to memory; they give an abstract and sparse representation of semantic knowledge—the people, places, objects, all the meaningful concepts that make up our individual worlds. They constitute the building blocks for the memories of facts and events of our lives. Their elegant coding scheme allows our minds to leave aside countless unimportant details and extract meaning that can be used to make new associations and memories. They encode what is critical to retain from our experiences. Concept cells are not quite like the grandmother cells that Lettvin envisioned, but they may be an important physical basis of human cognitive abilities, the hardware components of thought and memory.”


Bowers JS, Vankov II, Damian MF, & Davis CJ (2014). Neural Networks Learn Highly Selective Representations in Order to Overcome the Superposition Catastrophe. Psychological Review. PMID: 24564411

The Edge Question 3

I am continuing my read-through of some responses to the Edge Question: What Scientific Idea is ready for retirement? The question was asked by Laurie Santos this year. (here) One of the most popular answers was a rejection of the computer metaphor for the brain. There were also complaints about the idea of human rationality as used in economic theory. Interestingly, the three responses about the computer metaphor were made by computer experts.

 

Schank feels Artificial Intelligence should be shelved as a goal and we should just make better computer applications rather than mimic the human mind.

 

Roger Schank (Psychologist & Computer Scientist; Engines for Education Inc.; Author, Teaching Minds: How Cognitive Science Can Save Our Schools)

 

“It was always a terrible name, but it was also a bad idea. Bad ideas come and go but this particular idea, that we would build machines that are just like people, has captivated popular culture for a long time. Nearly every year, a new movie with a new kind of robot that is just like a person appears in the movies or in fiction. But that robot will never appear in reality. It is not that Artificial Intelligence has failed, no one actually ever tried. (There I have said it.)…The fact is that the name AI made outsiders to AI imagine goals for AI that AI never had. The founders of AI (with the exception of Marvin Minsky) were obsessed with chess playing, and problem solving (the Tower of Hanoi problem was a big one.) A machine that plays chess well does just that, it isn’t thinking nor is it smart….I declare Artificial Intelligence dead. The field should be renamed ”the attempt to get computers to do really cool stuff” but of course it won’t be….There really is no need to create artificial humans anyway. We have enough real ones already.”

 

Brooks discusses the shortcomings of the Computational Metaphor that has become so popular in cognitive science.

 

Rodney A. Brooks (Roboticist; Panasonic Professor of Robotics (emeritus), MIT; Founder, Chairman & CTO, Rethink Robotics; Author, Flesh and Machines)

 

“But does the metaphor of the day have impact on the science of the day? I claim that it does, and that the computational metaphor leads researchers to ask questions today that will one day seem quaint, at best….The computational model of neurons of the last sixty plus years excluded the need to understand the role of glial cells in the behavior of the brain, or the diffusion of small molecules effecting nearby neurons, or hormones as ways that different parts of neural systems effect each other, or the continuous generation of new neurons, or countless other things we have not yet thought of. They did not fit within the computational metaphor, so for many they might as well not exist. The new mechanisms that we do discover outside of straight computational metaphors get pasted on to computational models but it is becoming unwieldy, and worse, that unwieldiness is hard to see for those steeped in its traditions, racing along to make new publishable increments to our understanding. I suspect that we will be freer to make new discoveries when the computational metaphor is replaced by metaphors that help us understand the role of the brain as part of a behaving system in the world. I have no clue what those metaphors will look like, but the history of science tells us that they will eventually come along.”

 

Gelernter tackles the question of whether the Grand Analogy of computer and brain is going to help in understanding the brain.

 

David Gelernter (Computer Scientist, Yale University; Chief Scientist, Mirror Worlds Technologies; Author, America-Lite: How Imperial Academia Dismantled our Culture (and ushered in the Obamacrats))

 

“Today computationalists and cognitive scientists—those researchers who see digital computing as a model for human thought and the mind—are nearly unanimous in believing the Grand Analogy and teaching it to their students. And whether you accept it or not, the analogy is milestone of modern intellectual history. It partly explains why a solid majority of contemporary computationalists and cognitive scientists believe that eventually, you will be able to give your laptop a (real not simulated) mind by downloading and executing the right software app. …”

 

Gelernter gives his reasons for this conclusion. (One) “The software-computer system relates to the world in a fundamentally different way from the mind-brain system. Software moves easily among digital computers, but each human mind is (so far) wedded permanently to one brain. The relationship between software and the world at large is arbitrary, determined by the programmer; the relationship between mind and world is an expression of personality and human nature, and no one can re-arrange it…. (Two) The Grand Analogy presupposes that minds are machines, or virtual machines—but a mind has two equally-important functions, doing and being; a machine is only for doing. We build machines to act for us. Minds are different: yours might be wholly quiet, doing (“computing”) nothing; yet you might be feeling miserable or exalted—or you might merely be conscious. Emotions in particular are not actions, they are ways to be. … (Three) The process of growing up is innate to the idea of human being. Social interactions and body structure change over time, and the two sets of changes are intimately connected. A toddler who can walk is treated differently from an infant who can’t. No robot could acquire a human-like mind unless it could grow and change physically, interacting with society as it did…. (Four) Software is inherently recursive; recursive structure is innate to the idea of software. The mind is not and cannot be recursive. A recursive structure incorporates smaller versions of itself: an electronic circuit made of smaller circuits, an algebraic expression built of smaller expressions. Software is a digital computer realized by another digital computer. (You can find plenty of definitions of digital computer.) “Realized by” means made-real-by or embodied-by. The software you build is capable of exactly the same computations as the hardware on which it executes. Hardware is a digital computer realized by electronics (or some equivalent medium)…”

 

He wants to stop the pretending. “Computers are fine, but it’s time to return to the mind itself, and stop pretending we have computers for brains; we’d be unfeeling, unconscious zombies if we had.”

 

Another model of human behavior got some criticism. Again it is from within the fold. Levi wants to retire Homo Economicus and then base understanding of our actions on a realistic model of humans.

 

Margaret Levi (Political Scientist, University Professor, University of Washington & University of Sydney)

 

“Homo economicus is an old idea and a wrong idea, deserving a burial of pomp and circumstance but a burial nonetheless. …The theories and models derived from the assumption of homo economicus generally depend on a second, equally problematic assumption: full rationality….Even if individuals can do no better than “satisfice,” that wonderful Simon term, they might still be narrowly self-interested, albeit—because of cognitive limitations—ineffective in achieving their ends. This perspective, which is at the heart of homo economicus, must also be laid to rest. …The power of the concept of Homo economicus was once great, but its power has now waned, to be succeeded by new and better paradigms and approaches grounded in more realistic and scientific understandings of the sources of human action.”

 

The notion of rationality and H. econ got another thumbs down from Fiske, who wants to retire Rational Actor Models: the Competence Corollary.

 

Susan Fiske (Eugene Higgins Professor, Department of Psychology, Princeton University)

 

“The idea that people operate mainly in the service of narrow self-interest is already moribund, as social psychology and behavioral economics have shown. We now know that people are not rational actors, instead often operating on automatic, based on bias, or happy with hunches. Still, it’s not enough to make us smarter robots, or to accept that we are flawed. The rational actor’s corollary—all we need is to show more competence—also needs to be laid to rest. …People are most effective in social life if we are—and show ourselves to be—both warm and competent. This is not to say that we always get it right, but the intent and the effort must be there. This is also not to say that love is enough, because we do have to prove capable to act on our worthy intentions. The warmth-competence combination supports both short-term cooperation and long-term loyalty. In the end, it’s time to recognize that people survive and thrive with both heart and mind.”

 

It looks like we are on the way to changes in the metaphors for human thought and action. No metaphor is perfect (we cannot expect to find perfect ones), but there comes a time when an old or inappropriate metaphor is a drag on science.