Simplifying assumptions

There is an old joke about a group of horse bettors putting out a tender to scientists for a plan to predict the results of races. A group of biologists submitted a plan to genetically breed a horse that would always win; it would take decades and cost billions. A group of statisticians submitted a plan to devise a computer program to predict races; it would cost millions and would predict only a little better than chance. But a group of physicists said they could do it for a few thousand and have the program finished in just a few weeks. The bettors wanted to know how they could be so quick and cheap. “Well, we have equations for how the race variables interact. It’s a complex equation, but we have made simplifying assumptions. First we said let each horse be a perfect rolling sphere. Then…”

For over three decades, ideas about how the brain must work have come from studies of electronic neural nets. These studies usually make a lot of assumptions. First, they assume that the only active cells in the brain are the neurons. Second, they assume the neurons are simple (each has inputs that can be weighted, and if the sum of the weighted inputs is over a threshold, the neuron fires its output signals) and that there is only one type, or at most a very few types. Third, they assume the connections between the neurons are structured only in very simple, often statistically driven nets. There is only so much that can be learned about the real brain from this model.
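To make the second assumption concrete, here is a minimal sketch of that kind of neuron in Python; the weights and threshold are arbitrary illustrative values, not drawn from any particular study:

```python
# A minimal sketch of the simple neuron assumed in classic electronic
# neural nets: weighted inputs are summed and compared to a threshold.
# All values here are arbitrary illustrative choices.

def simple_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs exceeds the threshold."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

# Three inputs, one with a negative (inhibitory) weight.
print(simple_neuron([1, 0, 1], [0.5, 0.9, -0.3], threshold=0.1))  # prints 1
```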

But on the basis of electronic neural nets and information theory, with, I believe, only a small input from the physiology of real brains, it became accepted that the brain used a ‘sparse coding’. What does this mean? At one end of a spectrum, a piece of information held in a network depends on the state of just one neuron. This coding is sometimes referred to as grandmother cells, because one and only one neuron would code for your grandmother. At the other end of the spectrum, the information depends on the state of all the neurons; in other words, your grandmother would be coded by a particular pattern of activity that includes the states of all the neurons. Sparse coding uses only a few neurons, so it sits near the grandmother-cell end of the spectrum.
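The spectrum is easy to picture with toy population codes; the vectors below are invented examples for illustration, not recordings:

```python
# Toy codes for one stimulus across a population of 10 neurons
# (1 = active, 0 = silent). These vectors are invented examples.

grandmother = [0, 0, 0, 0, 1, 0, 0, 0, 0, 0]  # one dedicated cell
sparse      = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0]  # a few active cells
dense       = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]  # most of the population

def fraction_active(code):
    """A simple sparseness measure: the fraction of neurons active."""
    return sum(code) / len(code)

for name, code in [("grandmother", grandmother),
                   ("sparse", sparse),
                   ("dense", dense)]:
    print(f"{name}: {fraction_active(code):.1f} of the population active")
```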

Since the 1980s it has generally been accepted that the brain uses sparse coding, but experiments with actual brains have been suggesting that this may not be the case. A recent paper argues that the coding may not be sparse after all (Anton Spanne and Henrik Jörntell, “Questioning the role of sparse coding in the brain”, Trends in Neurosciences, 2015, DOI: 10.1016/j.tins.2015.05.005).

It was assumed that the brain would use the coding system that gives the lowest total activity without losing functionality. But that is not what the brain actually does: it has higher activity than it theoretically needs. This is probably because the brain sits in a fairly active state even at rest (a sort of knife edge) from which it can quickly react to situations.

If sparse coding were to apply, it would entail a series of negative consequences for the brain. The largest and most significant is that the brain would not be able to generalize, but could only learn exactly what happened on a specific occasion. Instead, the authors argue, a large number of connections between our nerve cells are maintained in a state of readiness to be activated, enabling the brain to learn things in a reasonable time when we search for links between various phenomena in the world around us. This capacity to generalize is the most important property for learning.
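One way to see the generalization problem is with a toy sketch (mine, not the paper’s analysis): under a very sparse code, two similar stimuli tend to activate almost disjoint sets of neurons, so whatever is learned about one barely transfers to the other, while denser codes share active neurons between similar stimuli.

```python
# Toy sketch of why very sparse codes generalize poorly: how many
# neurons two similar stimuli share. The codes are invented examples.

def overlap(a, b):
    """Number of neurons active in both codes."""
    return sum(x and y for x, y in zip(a, b))

# Two similar stimuli under a sparse code: disjoint active sets.
sparse_a = [1, 0, 0, 1, 0, 0, 0, 0, 0, 0]
sparse_b = [0, 1, 0, 0, 0, 0, 1, 0, 0, 0]

# The same two stimuli under a denser code: shared active neurons.
dense_a = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0]
dense_b = [1, 1, 1, 1, 0, 0, 1, 0, 1, 1]

print(overlap(sparse_a, sparse_b))  # 0 shared neurons: nothing transfers
print(overlap(dense_a, dense_b))    # 5 shared neurons: learning can transfer
```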

Here are the highlights and abstract:

Highlights

  • Sparse coding is questioned on both theoretical and experimental grounds.
  • Generalization is important to current brain models but is weak under sparse coding.
  • The beneficial properties ascribed to sparse coding can be achieved by alternative means.

Coding principles are central to understanding the organization of brain circuitry. Sparse coding offers several advantages, but a near-consensus has developed that it only has beneficial properties, and these are partially unique to sparse coding. We find that these advantages come at the cost of several trade-offs, with the lower capacity for generalization being especially problematic, and the value of sparse coding as a measure and its experimental support are both questionable. Furthermore, silent synapses and inhibitory interneurons can permit learning speed and memory capacity that was previously ascribed to sparse coding only. Combining these properties without exaggerated sparse coding improves the capacity for generalization and facilitates learning of models of a complex and high-dimensional reality.
