I have noted before that the brain does not do algorithms in the sense that computers do. We do not compute in a step-wise fashion unless we are deliberately doing something consciously, step by step. Take an example: to find whether a number is even, first find its last digit, then check whether that digit is divisible by 2 without remainder; if it is, the number is even, and if not, it is odd. If we consciously do this task stepwise, those are the steps we will consciously note: 798 has 8 as its last digit, 8 is divisible by 2, therefore 798 is even. But of course we rarely go through the steps consciously; we simply look at the number and say whether it is odd or even. And we assume that we have unconsciously followed the appropriate steps. Yet we have no proof that we followed them – in fact, we have no idea how we got the answer. There is no reason to assume that, unconsciously, we are following an algorithm.
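The conscious, step-wise procedure described above is exactly the kind of thing a computer does well. A minimal sketch of it (an illustration of the rule, not a claim about how the brain works):

```python
def is_even(n: int) -> bool:
    """Step-wise parity check, mirroring the conscious procedure."""
    last_digit = abs(n) % 10      # step 1: find the last digit
    return last_digit % 2 == 0    # step 2: is it divisible by 2 without remainder?

print(is_even(798))  # True: the last digit is 8, and 8 is divisible by 2
```

The computer executes these two steps identically every time, for any input, which is precisely the context-free reliability the paper below argues human cognition lacks.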

Here is the abstract from a paper: Gary Lupyan, **The difficulties of executing simple algorithms: Why brains make mistakes computers don’t**, *Cognition*, 2013; 129(3): 615.

“*It is shown that educated adults routinely make errors in placing stimuli into familiar, well-defined categories such as triangle and odd number. Scalene triangles are often rejected as instances of triangles and 798 is categorized by some as an odd number. These patterns are observed both in timed and untimed tasks, hold for people who can fully express the necessary and sufficient conditions for category membership, and for individuals with varying levels of education. A sizeable minority of people believe that 400 is more even than 798 and that an equilateral triangle is the most “trianglest” of triangles. Such beliefs predict how people instantiate other categories with necessary and sufficient conditions, e.g., grandmother. I argue that the distributed and graded nature of mental representations means that human algorithms, unlike conventional computer algorithms, only approximate rule-based classification and never fully abstract from the specifics of the input. This input-sensitivity is critical to obtaining the kind of cognitive flexibility at which humans excel, but comes at the cost of generally poor abilities to perform context-free computations. If human algorithms cannot be trusted to produce unfuzzy representations of odd numbers, triangles, and grandmothers, the idea that they can be trusted to do the heavy lifting of moment-to-moment cognition that is inherent in the metaphor of mind as digital computer still common in cognitive science, needs to be seriously reconsidered.”*

The last sentence is particularly important. “… the metaphor of mind as digital computer still common in cognitive science, needs to be seriously reconsidered.”