
I have a point of view – here is a look at it

I was recently in a conversation with a computer expert and noticed again how different my ideas are from those of the computer community. There are differences in starting points, methods of proof, definitions, and goals, to name a few.

 

Starting point: I start dealing with the notion of thought at the point in evolution when animals started to actively move. In order to move, an animal must know (at some level) where it is, where it wants to go, and how to get there. This seems to be the first time in evolution that a nervous system was needed (or occurred), although many of the building blocks already existed, serving other functions. The animal needed motor neurons to do the moving and sensory neurons to monitor the movement. Once the ball was rolling, evolution would produce ever more sophistication: habituation, learning, memory, planning and so on. These complexities demand interneurons between the motor and sensory neurons. The interneuron web would then grow larger and more complex, ending up as a brain capable of thought. The nervous systems of different types of organisms may seem very different, but in essence they are very similar: a motor side and a sensory side with a thinking web between them.

 

Now the common approach is to put the concept of thinking in a basket with logic, problem solving, symbol manipulation, and the like. I put thinking in a basket with movement processes, modeling the world, intention, decision making, and the like.

 

One common example of this difference is the 'catching a fly ball' problem. One way to do this is to develop an algorithm that calculates the path of the ball and therefore where it can be intercepted; the fielder runs to that spot and catches the ball. What is wrong with this is that the fielder has neither enough information to do the calculation nor enough time. However, the fielder can simply monitor the bearing and height of the ball and run at directions and speeds that keep the ball at the same spot in his visual field. If he can successfully do this, he will end up intercepting the ball. This sort of brain process is more like a servo system than a calculation. The oddly curved paths that fielders actually run show that this is indeed how they catch the fly ball.
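To make the servo idea concrete, here is a minimal sketch in Python. Everything in it (the gain, the time step, the update rule) is my own invention for illustration; the point is only that the fielder nulls the drift of the ball's bearing rather than ever computing a landing point.

    import math

    def fielder_step(fielder_pos, fielder_vel, ball_pos, prev_bearing,
                     dt=0.1, gain=2.0):
        # Bearing of the ball as seen from the fielder (radians).
        bearing = math.atan2(ball_pos[1] - fielder_pos[1],
                             ball_pos[0] - fielder_pos[0])
        # Error signal: how fast the bearing drifts across the visual field.
        drift = (bearing - prev_bearing) / dt
        # Servo correction: accelerate perpendicular to the line of sight
        # so as to cancel the drift. No trajectory is ever calculated.
        ax = -gain * drift * math.sin(bearing)
        ay = gain * drift * math.cos(bearing)
        fielder_vel = (fielder_vel[0] + ax * dt, fielder_vel[1] + ay * dt)
        fielder_pos = (fielder_pos[0] + fielder_vel[0] * dt,
                       fielder_pos[1] + fielder_vel[1] * dt)
        return fielder_pos, fielder_vel, bearing

Run in a loop against a moving ball, a controller like this traces out a curved interception path of the kind real fielders show.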

 

Thinking is probably done in many ways, and those ways probably tend to be specific to the type of task, because the brain was not built to be a general computing device; it evolved as a survival kit.

 

Methods of proof: History is littered with logical proofs of ideas that turned out to be false. There are two difficulties with logic. First, it cannot really disprove anything: we take agreed axioms and do impeccable logic to reach a conclusion. Does this prove that an alternative conclusion is wrong? Well, we have a choice: either our alternative conclusion is wrong or one of our assumed axioms is wrong. Take your pick.
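The structure of the point, in standard notation (a sketch only, not a formal treatment): suppose the agreed axioms A_1, ..., A_n jointly entail the conclusion C,

    \[(A_1 \land A_2 \land \dots \land A_n) \vdash C.\]

Then finding C false only gives us, by modus tollens,

    \[\neg C \;\implies\; \neg A_1 \lor \neg A_2 \lor \dots \lor \neg A_n,\]

that is, at least one axiom must go, but logic alone cannot tell us which one.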

 

The second problem is that logic is not impeccable if it is done in natural language about actual real entities. Words are too slippery; they can shift their meaning with the context. So, for instance, the word 'death' can be a noun for a state of being (or rather, not being) or a noun for the event of entering that state, and it can change back and forth within a single verbal argument.

 

I prefer the observation of reality to logic. That is, I am an empiricist rather than a rationalist. Now I know that any experiment or observation can be mistaken, and is perhaps faulty more often than a logical argument. However, experiments and observations do not purport to be individually correct; it is the weight of evidence that is convincing. I follow science as opposed to philosophy, and I follow it in a critical frame of mind.

 

So people can argue, using logic, that what they know about conventional computers (digital, algorithmic) also applies to the brain, but I want to see each element of the mapping between computers and brains observed and tested experimentally. Brains are only metaphorically like computers; if they are more than that, it has to be demonstrated. The experimental evidence for brains being general computing devices, digital and algorithmic, is extremely scant at best.

 

Definitions: Different fields of study have different definitions for the same word. A couple of years ago I was reading a paper without any knowledge of the authors or their field, and I was finding it very unusual and surprising. I kept thinking that they should say what their evidence was for what they were saying; they were implying models and theories that were not (or not yet) accepted. Finally I gave up and looked at some of the papers they referenced. They turned out to be all about computer science. This was not a paper about the brain at all but about computer simulations of the brain. It used the word neuron without any explanation that this 'neuron' was not a neuron at all but a little electronic circuit. I accept that words are used metaphorically and that this is useful. But in this case a little hint would have helped. I re-read the paper and got the idea that the authors had been taken in by their own re-definition of neuron, and actually believed that they were saying something applicable to cellular networks in the brain because they were seeing it in electronic networks of e-neurons.

 

Understanding is not about semantics. Each word or standard phrase can have many meanings and a load of implied baggage from old theories. Words are useful for communication but cannot be taken too seriously. What is important is the understanding, not its verbal expression. This is why I have more faith in science than in many other fields – scientists are in contact with their reality and they can see, hear, smell, feel, prod and manipulate the reality they are studying.

 

Jim Andrews, in a blog post (http://netartery.vispo.com/?p=1174), says, “Oh, by the way, here is a mathematization of all conceivable machines – here is the universal machine, the machine that can compute anything that any conceivable machine can compute.” This uses an almost tautological definition of machine. The concept of machine here covers only those machines that can be mathematized and can compute. So it follows that any (such) machine is able to compute things and that the universal machine (which is a mathematical creation) can match them. This says nothing about anything that is not a mathematizable, computing device. Machine has many other definitions too. For example, the classic simple machines are six in number: lever, wheel and axle, pulley, inclined plane, wedge and screw. None of these is a mathematized computing machine. And computing has lost much of its general meaning and become married to the computer; many people therefore do not use it for the processes of the brain and call those processes cognition instead.
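For readers who have not met the formalism, here is a minimal sketch of the kind of mathematized 'machine' the quote is about: nothing but a finite transition table stepping over a tape. The table below (bit-flipping until a blank) is my own toy example, purely illustrative.

    # A toy Turing-style machine: the whole 'machine' is a transition
    # table mapping (state, symbol) -> (new state, symbol to write, move).
    def run(tape, table, state="start", pos=0, blank="_"):
        tape = dict(enumerate(tape))              # sparse tape
        while state != "halt":
            symbol = tape.get(pos, blank)
            state, write, move = table[(state, symbol)]
            tape[pos] = write
            pos += 1 if move == "R" else -1
        return "".join(tape[i] for i in sorted(tape))

    # Invented example table: flip every bit, halt at the first blank.
    flip_bits = {
        ("start", "0"): ("start", "1", "R"),
        ("start", "1"): ("start", "0", "R"),
        ("start", "_"): ("halt",  "_", "R"),
    }

    print(run("0110", flip_bits))   # -> 1001_

A lever plainly is not this sort of thing, which is the author's point: the quote's 'all conceivable machines' has quietly restricted 'machine' to devices of exactly this mathematical kind.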

 

Playing with semantics is not what interests me. I do not confuse words with the real things they stand in for.

 

Goals: People who study brains have many different goals. Some want to prove or disprove some theory. Some want to identify what (exactly) is the unique difference between man and other animals that makes man uniquely unique. Some want to cure mental disabilities and illnesses. Some just want to understand how the brain works in the same way as any other organ works (the heart or the stomach, say). Some want to understand themselves. I find the complexity of the brain intriguing and just want to understand it. I am also interested in cultural/scientific shocks. The Copernican revolution, for example, or the Darwinian revolution were awesome events, and we are entering another which does not have a name yet but will be just as big an event. It has only just started, but every month there are surprising new discoveries. I want to follow the progress.

 

 

So that is my viewpoint: biological, evidential/empirical, avoiding semantic traps, open-minded but critical, watching the science unfold.

 

An electronic synapse

It is a truism that simulation of the brain with 'electronic neurons' has been very approximate at best. One problem has been simulating the behavior of synapses in software, and synapses are key to communication between biological neurons. But soon there may be a simulation of the synapse in hardware. This does not solve all the problems with simulating the brain, but it would be a large step in that direction. It is also somewhat like the brain's own architecture, which is more physically based than algorithmically based. Of course, miniaturization is necessary, as the number of synapses in even a smallish part of the brain is astronomical.

 

The idea is to use a very thin sheet of samarium nickelate between two platinum terminals. The sheet can be changed from insulating to conducting by the concentration of oxygen ions in it. The oxygen ions can be made to leak in or out of a small reservoir of ionic liquid by applied voltages. The voltage is controlled by the strength and timing of spikes on the 'dendrite' and 'axon' terminals. The changes in conductivity are stable until forced to change by another voltage signal. The devices can therefore 'learn' and 'remember'.
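A rough software analogue of the spike-timing rule such a device implements physically (this is my own toy model with invented parameters, not the paper's): the conductance change depends on whether the 'dendrite' spike precedes or follows the 'axon' spike, and the new conductance persists until the next spike pair.

    import math

    # Toy spike-timing-dependent plasticity (STDP) rule, a software
    # stand-in for what the nickelate device does in hardware.
    def stdp_update(conductance, t_pre, t_post,
                    a_plus=0.05, a_minus=0.05, tau=20.0):
        dt = t_post - t_pre            # post-spike minus pre-spike time (ms)
        if dt > 0:                     # pre before post: strengthen
            conductance += a_plus * math.exp(-dt / tau)
        elif dt < 0:                   # post before pre: weaken
            conductance -= a_minus * math.exp(dt / tau)
        return max(0.0, conductance)   # non-negative, and it persists
                                       # until the next spike pair arrives

    g = 0.5
    g = stdp_update(g, t_pre=10.0, t_post=15.0)   # potentiation
    g = stdp_update(g, t_pre=30.0, t_post=22.0)   # depression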

 

They have several advantages: they can be integrated into silicon-based circuits, are fast, work at room temperature, are energy efficient, and do not require continuous power to maintain their 'learning'.

 

Here is the citation and abstract:

 

Jian Shi, Sieu D. Ha, You Zhou, Frank Schoofs, Shriram Ramanathan. A correlated nickelate synaptic transistor. Nature Communications, 2013

 

Inspired by biological neural systems, neuromorphic devices may open up new computing paradigms to explore cognition, learning and limits of parallel computation. Here we report the demonstration of a synaptic transistor with SmNiO3, a correlated electron system with insulator–metal transition temperature at 130°C in bulk form. Non-volatile resistance and synaptic multilevel analogue states are demonstrated by control over composition in ionic liquid-gated devices on silicon platforms. The extent of the resistance modulation can be dramatically controlled by the film microstructure. By simulating the time difference between postneuron and preneuron spikes as the input parameter of a gate bias voltage pulse, synaptic spike-timing-dependent plasticity learning behaviour is realized. The extreme sensitivity of electrical properties to defects in correlated oxides may make them a particularly suitable class of materials to realize artificial biological circuits that can be operated at and above room temperature and seamlessly integrated into conventional electronic circuits.

 

 

Questioning the brain-computer metaphor

I have noted before that the brain does not do algorithms in the sense that computers do. We do not compute in a step-wise fashion unless we are consciously doing something in a step-wise fashion. Take an example: to find whether a number is even, first find the last digit, then find whether that digit is zero or divisible by 2 without remainder; if so, the number is even, and if not, it is odd. If we consciously do this task stepwise, then those are the steps we will note consciously. 798 has 8 as its last digit and 8 is divisible by 2, therefore 798 is even. But of course we rarely go through the steps consciously; we look at the number and say whether it is odd or even, and we assume that we have unconsciously followed the appropriate steps. We have no proof that we have followed the steps. In fact, we have no idea how we got the answer. There is no reason to assume that unconsciously we are following an algorithm.
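Written out as code, the conscious step-wise procedure is trivial; the point is precisely that nothing suggests the brain's instant answer is produced this way.

    # The explicit step-wise algorithm described above. This is what we
    # do *consciously*; there is no evidence the instant, unconscious
    # judgement works like this.
    def is_even(number: int) -> bool:
        last_digit = abs(number) % 10      # step 1: find the last digit
        return last_digit % 2 == 0         # step 2: is it divisible by 2?

    print(is_even(798))   # True: last digit is 8, and 8 is divisible by 2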

 

 

Here is the abstract from a paper: Gary Lupyan. The difficulties of executing simple algorithms: Why brains make mistakes computers don't. Cognition, 2013; 129(3): 615.

 

It is shown that educated adults routinely make errors in placing stimuli into familiar, well-defined categories such as triangle and odd number. Scalene triangles are often rejected as instances of triangles and 798 is categorized by some as an odd number. These patterns are observed both in timed and untimed tasks, hold for people who can fully express the necessary and sufficient conditions for category membership, and for individuals with varying levels of education. A sizeable minority of people believe that 400 is more even than 798 and that an equilateral triangle is the most “trianglest” of triangles. Such beliefs predict how people instantiate other categories with necessary and sufficient conditions, e.g., grandmother. I argue that the distributed and graded nature of mental representations means that human algorithms, unlike conventional computer algorithms, only approximate rule-based classification and never fully abstract from the specifics of the input. This input-sensitivity is critical to obtaining the kind of cognitive flexibility at which humans excel, but comes at the cost of generally poor abilities to perform context-free computations. If human algorithms cannot be trusted to produce unfuzzy representations of odd numbers, triangles, and grandmothers, the idea that they can be trusted to do the heavy lifting of moment-to-moment cognition that is inherent in the metaphor of mind as digital computer still common in cognitive science, needs to be seriously reconsidered.
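A toy illustration (entirely my own, not from the paper) of the contrast the abstract draws: a rule-based classifier treats all even numbers identically, while a graded, similarity-based score reproduces the '400 is more even than 798' pattern.

    # Rule-based parity: every even number is equally, crisply even.
    def rule_even(n: int) -> bool:
        return n % 2 == 0

    # A toy graded 'evenness' score: numbers whose digits are all even
    # feel 'more even' than those containing odd digits. Invented purely
    # to mimic the typicality effect the paper reports.
    def graded_evenness(n: int) -> float:
        digits = [int(d) for d in str(abs(n))]
        return sum(1 for d in digits if d % 2 == 0) / len(digits)

    print(rule_even(400), rule_even(798))              # True True: identical
    print(graded_evenness(400), graded_evenness(798))  # 1.0 vs 0.33: graded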

 

 

The last sentence is particularly important. “… the metaphor of mind as digital computer still common in cognitive science, needs to be seriously reconsidered.”