Ancient Origins – a great book


I have just read a book by Feinberg and Mallatt, The Ancient Origins of Consciousness – How the Brain Created Experience. It may turn out to be one of those classic books that cause a big change in accepted science. They tackle the ‘mystery of consciousness’ in a new way, a very biological way. The book ends with, “a satisfying and complete explanation of primary consciousness requires a confluence of points of view, necessarily including neurobiological, evolutionary, and philosophical arguments, each contributing important answers to the ‘hard question’. Perhaps one reason no one has solved it before is that it requires all three perspectives, including what happened over half a billion years ago.” I assume there will be many who find the book’s theory wanting because they view neurobiological naturalism as impossible and believe normal science cannot explain consciousness. The authors’ brand of neurobiological naturalism has three postulates, which the book documents:

1. “sensory consciousness can be explained by known neurobiological principles”

2. “sensory consciousness is ancient and widespread in the animal kingdom, and diverse neural architectures can create it”

3. “the philosophical issues of ontological subjectivity, neuroontological irreducibility, and the ‘hard problem’ can be explained by the nondissociable confluence of neurobiological and adaptive neuroevolutionary events.”

The book has changed my ideas in a number of ways. First to fall was my attitude to the idea of ’emergent properties’. I had viewed it as a hedge, a cop-out, and even a way to bring dualism back in disguise. This book describes emergence in a way that makes sense. In a layered hierarchy, each layer is created from the layer below but is more complex, with novel elements which are labeled emergent. But the external constraints act primarily on the top layer, which in turn constrains the layers beneath it. Thus there is both control and innovation, through bottom-up and top-down effects. Yes, this arrangement does need its own name, and it is a typical situation in living organisms. “In living systems such as the human body, cells constrain their subunits (organelles) to work together, the tissues and organs constrain their cells to cooperate, and the entire body constrains its organs to team up, all to perform the many physiological functions needed for the body to survive. If the constraints were to fail at any level, the body would disassemble and die.” A particular type of layered hierarchy, nested maps of the sensory organs such as the retina, is the basis of consciousness.

My second change of thinking was about the nature of the Cambrian explosion. It had seemed to me that the changes between geological periods were caused by changes to the environment, like a meteor strike, which kill off the dominant animals and plants and allow others to flourish. But the book makes the case that a change in some animals was the cause, and not the result, of the abrupt explosion 560–520 million years ago. The result was new lines of animals which have populated the earth ever since. Predators appeared for the first time, and this resulted in an arms race between predators and prey. There were many adaptations, and among them, improved distance sensing: vision, hearing and smell. Arthropods and vertebrates in particular evolved high mobility and brains that improved sensory processing. A key change was image-forming eyes. These allowed topographic maps of the retina in the brain. The other senses in vertebrates (except smell) re-evolved from a new cell line, on the pattern of the eye and its mapping in the brain. In the resulting hierarchy of topographic maps for the senses, consciousness evolved.

I had assumed that the source of consciousness was lower in the brain than the cerebrum but was surprised by the location. The book documents it arising first in the optic tectum (the superior colliculus in humans) and later extending to the thalamus and cerebrum, about 220 million years ago in mammals. This move not only added more layers to the existing hierarchies but also put the top layer in close proximity to the sense of smell and its related memory in the cerebrum. This was a major advance in consciousness for mammals and later for birds. Again I had to change my view, as I had thought that memory and consciousness were always tightly bound.

The book also traces the evolution of affective consciousness (feelings and emotions), which is just as old as sensory consciousness. What was news to me was the intermingling of interoceptive bodily senses and affective limbic feelings, giving three strains of consciousness.

The authors point out that the experience the brain creates is embodied, personal, and does not include information about its creation – and therefore is wholly subjective and unique to each being. How this is done, the mechanism, is available to objective investigation. The subjective cannot see the objective and the objective cannot see the subjective. There is a gap and it cannot be removed, but neurobiological naturalism can ‘bridge’ it. This conclusion was not new to me, as I have always been suspicious of whether the ‘hard question’ was really a question at all.

It’s a great book.

 

 

The indirect route

There has been a bit of mystery around how different areas of the cortex initiate shared synchrony or seem to pass information between them.

A paper from a few years back (Poulet, Fernandez, Crochet, Petersen; Thalamic control of cortical states; Nature Neuroscience 2012) showed that the activity of the whiskers of mice affected the state of activity in the cortex. How did the whiskers affect the whole cortex rather than just the whisker sensory area? It was via the thalamus, as shown by producing the effect without whisker activity through stimulation of the thalamus alone.

“We investigated the impact of thalamus on ongoing cortical activity in the awake, behaving mouse. We demonstrate that the desynchronized cortical state during active behavior is driven by a centrally generated increase in thalamic action potential firing, which can also be mimicked by optogenetic stimulation of the thalamus. The thalamus therefore is key in controlling cortical states.”

But that was a very general demonstration of thalamic control of large areas of the cortex. What about more specific action? A more recent paper, which uses a wide array of methods, shows a specific case (Wimmer, Schmitt, Davidson, Nakajima, Deisseroth, Halassa; Thalamic control of sensory selection in divided attention; Nature 2015).

From their introduction: “Thirty years ago, Francis Crick proposed that the TRN (thalamic reticular nucleus) functions as a searchlight, directing the internal spotlight of attention to thalamo-cortical circuits that process ongoing behavioral demands. Due to technical limitations, this transformative model has been difficult to test, particularly under conditions where the attentional spotlight shifts. Our study combined novel and established technology to provide mechanistic details for Crick’s ‘searchlight hypothesis’. As such, we have taken an important step in understanding the circuit mechanisms of sensory selection.”

The objects of attention can come from bottom-up or top-down processes. In other words, they can be triggered by perception or by cognitive and motor demands; by external events or internal tasks. Top-down demands for attention to specific targets appear to originate in the frontal cortex and travel to specific areas of the sensory cortex, making them more active. This paper shows that the information travels from the prefrontal cortex to the appropriate sensory cortex area by way of the thalamus, via the appropriate part of the thalamic reticular nucleus.

Here is their abstract: “How the brain selects appropriate sensory inputs and suppresses distractors is a central unsolved mystery in neuroscience. Given the well-established role of prefrontal cortex (PFC) in executive function, its interactions with sensory cortical areas during attention have been hypothesized to control sensory selection. To test this idea and more generally dissect the circuits underlying sensory selection, we developed a cross-modal divided attention task in mice enabling genetic access to this cognitive process. By optogenetically perturbing PFC function in a temporally precise window, the ability of mice to appropriately select between conflicting visual and auditory stimuli was diminished. Surprisingly, equivalent sensory thalamo-cortical manipulations showed that behavior was causally dependent on PFC interactions with sensory thalamus, not cortex. Consistent with this notion, we found neurons of the visual thalamic reticular nucleus (visTRN) to exhibit PFC-dependent changes in firing rate predictive of the modality selected. visTRN activity was causal to performance as confirmed via subnetwork-specific bi-directional optogenetic manipulations. Through a combination of electrophysiology and intracellular chloride photometry, we demonstrated that visTRN dynamically controls visual thalamic gain through feedforward inhibition. Combined, our experiments introduce a new subcortical model of sensory selection, where prefrontal cortex biases thalamic reticular subnetworks to control thalamic sensory gain, selecting appropriate inputs for further processing.”

It is worth considering the idea that most of the information flow from one part of the cortex to another, where there is no clear, direct nerve tract, is actually traveling by way of the thalamus.

Local or not

A recent press release describes a paper ( T. A. Engel, N. A. Steinmetz, M. A. Gieselmann, A. Thiele, T. Moore, K. Boahen. Selective modulation of cortical state during spatial attention. Science, 2016; 354 (6316): 1140 DOI: 10.1126/science.aag1420 ) on the neural activity during awake attention. Here is the abstract:

Neocortical activity is permeated with endogenously generated fluctuations, but how these dynamics affect goal-directed behavior remains a mystery. We found that ensemble neural activity in primate visual cortex spontaneously fluctuated between phases of vigorous (On) and faint (Off) spiking synchronously across cortical layers. These On-Off dynamics, reflecting global changes in cortical state, were also modulated at a local scale during selective attention. Moreover, the momentary phase of local ensemble activity predicted behavioral performance. Our results show that cortical state is controlled locally within a cortical map according to cognitive demands and reveal the impact of these local changes in cortical state on goal-directed behavior.

I find the techniques and the results very interesting. However, I have trouble with the idea that attention has a purely cortical mechanism. Why are the fluctuations in activity said to be endogenously generated? Why is the cortical state said to be controlled locally within a cortical map according to cognitive demands? The cortex is not isolated from the rest of the brain. To say some effect is locally generated in the cortex would require showing that the activity level was not affected by the thalamus and associated parts of the brain. The back and forth between cortical columns and the thalamus is the key to cortical function and a requirement for attention, consciousness and wakefulness. This is not a new idea but has been around for a long time. Why does this study not just ignore it, but deny it?

The conclusion to a paper (Saalmann and Kastner; Cognitive and Perceptual Functions of the Visual Thalamus; Neuron. 2011 Jul 28; 71(2): 209–223) outlines some signaling between various parts of the thalamus and the cortex.

The overall evidence that has emerged during recent years suggests that the visual thalamus serves a fundamental function in regulating information transmission to the cortex and between cortical areas according to behavioral context. Selective attention and visual awareness have been shown to modulate LGN (thalamus lateral geniculate nucleus) activity, thus indicating that the LGN filters visual information before it reaches the cortex. Behavioral context appears to even more strongly modulate pulvinar activity and, due to its connectivity, the pulvinar (a part of the thalamus) is well-positioned to influence feedforward and feedback information transmission between cortical areas. Because the TRN provides strong inhibitory input to both the LGN and pulvinar, the TRN (thalamic reticular nucleus) may control and coordinate the information transmitted along both retino-cortical and cortico-cortical pathways.

Parasuraman and Davies, in Varieties of Attention (page 236), described the networks involved in attention as long ago as 1984.

Three interacting networks mediate different aspects of attention: (1) a posterior attention system, comprising the parietal cortex, superior colliculus (a midbrain area), and pulvinar (a thalamic area), that is concerned with spatial attention; (2) an anterior system, centered on the anterior cingulate in the medial frontal lobe, that mediates target detection and executive control; (3) a vigilance system consisting of the right frontal lobe and brainstem nuclei, principally the noradrenergic locus coeruleus (LC).

The brain is a functioning whole, not a group of completely independent parts. As the Engel group do not seem to even address the question of involvement of regions of the brain other than the cortex, how can they state that the activity level of a column is locally produced?

 

Beta waves

Judith Copithorne image


Brain waves are measured for many reasons and they have been linked to various brain activities. But very little is known about how they arise. Are they the result or the cause of the activities they are associated with? How exactly are they produced at a cellular or network level? We know little about these waves.

One type of wave, the beta wave (18–25 Hz), is associated with consciousness and alertness. In the motor cortex beta waves are found when muscle contractions are isometric (contractions that do not produce movement) but are absent just prior to and during movement. They are increased during sensory feedback to static motor control and when movement is resisted or voluntarily suppressed. In the frontal cortex beta waves are found during attention to cognitive tasks directed at the outside world. They are found in alert attentive states, problem solving, judgment, decision making, and concentration. The more involved the cognitive activity, the faster the beta waves.

ScienceDaily reports a press release from Brown University on the work of Stephanie Jones and team, who are attempting to understand how beta waves arise. (here) Three types of study are used: MEG recordings, computer models, and implanted electrodes in animals.

The MEG recordings from the somatosensory cortex (sense of touch) and the inferior frontal cortex (higher cognition) showed a very distinct form for the beta waves, “they lasted at most a mere 150 milliseconds and had a characteristic wave shape, featuring a large, steep valley in the middle of the wave.” This wave form was recreated in a computer model of the layers of the cortex. “They found that they could closely replicate the shape of the beta waves in the model by delivering two kinds of excitatory synaptic stimulation to distinct layers in the cortical columns of cells: one that was weak and broad in duration to the lower layers, contacting spiny dendrites on the pyramidal neurons close to the cell body; and another that was stronger and briefer, lasting 50 milliseconds (i.e., one beta period), to the upper layers, contacting dendrites farther away from the cell body. The strong distal drive created the valley in the waveform that determined the beta frequency. Meanwhile they tried to model other hypotheses about how beta waves emerge, but found those unsuccessful.” The model was tested in mice and rhesus monkeys with implanted electrodes and was supported.
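The two-drive idea can be illustrated with a toy calculation. This is my own sketch, not the team's actual cortical-column model (which simulates real pyramidal-neuron morphology); the amplitudes and widths below are made-up numbers chosen only to show how a weak broad drive plus a strong ~50 ms drive of opposite sign produces a waveform with a steep mid-event valley.

```python
import math

def gauss(x, mu, sigma):
    """Unnormalized Gaussian bump."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def beta_event(dur_ms=150):
    """Toy two-drive sketch of a beta event (~150 ms, sampled at 1 ms):
    a weak, broad drive to the lower layers plus a strong, brief
    (~50 ms wide) drive to the upper layers."""
    t = list(range(dur_ms))
    proximal = [0.5 * gauss(x, 75, 50) for x in t]        # weak, broad
    distal = [1.5 * gauss(x, 75, 50 / 2.355) for x in t]  # strong, ~50 ms FWHM
    # The distal input contacts dendrites far from the cell body and drives
    # current the opposite way, so it subtracts from the net waveform,
    # carving the steep valley whose ~50 ms width sets the beta timescale.
    wave = [p - d for p, d in zip(proximal, distal)]
    return t, wave

t, wave = beta_event()
# The deepest point of the valley falls at mid-event (75 ms).
```

In this cartoon, one beta period (~50 ms, i.e. ~20 Hz) comes directly from the width of the brief distal drive, which is the shape the MEG recordings showed.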

Where do the signals come from that drive the pyramidal neurons? The thalamus is a reasonable guess at the source. The thalamo-cortico-thalamic feedback loop makes those very contacts, of thalamic axons within the cortical layers. The thalamus is known to have signals of 50 millisecond duration. All of the sensory and motor information that enters the cortex (except smell) comes through the thalamus. It regulates consciousness, alertness and sleep. It is involved in processing sensory input and voluntary motor control. It has a hand in language and some types of memory.

The team is continuing their study. “With a new biophysical theory of how the waves emerge, the researchers hope the field can now investigate whether beta rhythms affect or merely reflect behavior and disease. Jones’s team, in collaboration with Professor of neuroscience Christopher Moore at Brown, is now testing predictions from the theory that beta may decrease sensory or motor information processing functions in the brain. New hypotheses are that the inputs that create beta may also stimulate inhibitory neurons in the top layers of the cortex, or that they may saturate the activity of the pyramidal neurons, thereby reducing their ability to process information; or that the thalamic bursts that give rise to beta occupy the thalamus to the point where it doesn’t pass information along to the cortex.”

It seems very clear that understanding of overall brain function will depend on understanding the events at a cellular/circuit level; and that those processes in the cortex will not be understood without including other regions like the thalamus in the models.

A prediction engine

Judith Copithorne image


I have just discovered a wonderful source of ideas about the mind, Open MIND (here), a collection of essays and papers edited by Metzinger and Windt. I ran across mention of it in Deric Bownds’ blog (here). The particular paper that Bownds points to is “Embodied Prediction” by Andy Clark.

Clark argues that we look at the mind backwards. The everyday way we view the working of the brain is: the sensory input is used to create a model of the world, which prompts a plan of action, which is used to create an action. He argues for the opposite – action forces the nature of the sensory input we seek, that sensory input is used to correct an existing model, and it is all done by predicting. The mind is a predicting machine; the process is referred to as PP (predictive processing). “Predictive processing plausibly represents the last and most radical step in this retreat from the passive, input-dominated view of the flow of neural processing. According to this emerging class of models, naturally intelligent systems (humans and other animals) do not passively await sensory stimulation. Instead, they are constantly active, trying to predict the streams of sensory stimulation before they arrive.” Rather than the bottom-up flow of sensory information, the theory has a top-down flow of the current model of the world (in effect, what the incoming sensory data should look like). All that is fed back upwards is the error corrections, where the incoming sensory data differs from what is expected. This seems a faster, more reliable, more efficient system than the more conventional theory. The only effort needed is to deal with the surprises in the incoming data. Prediction errors are the only sensory information that is yet to be explained, the only place where the work of perception is required most of the time.
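As a cartoon of the idea (my own illustration, not Clark's formalism): the model sends a prediction down, only the mismatch flows back up, and that error signal alone is enough to keep the model in register with the world.

```python
def predictive_loop(sensory_stream, lr=0.2):
    """Toy predictive-processing loop: the model predicts each incoming
    sample, and only the prediction error (the surprise) drives updates."""
    estimate = 0.0                       # the current model of the world
    errors = []
    for sample in sensory_stream:
        prediction = estimate            # top-down: what the input should be
        error = sample - prediction      # bottom-up: only the mismatch
        estimate += lr * error           # correct the model by the error
        errors.append(abs(error))
    return estimate, errors

# A stable, predictable world: the surprises shrink toward zero as the
# model comes to match the input, and almost nothing flows upward.
final, errors = predictive_loop([5.0] * 30)
```

Note how the upward traffic (the `errors` list) dwindles once the model is accurate, which is the efficiency argument in the text.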

Clark doesn’t make much of it, but he has a neat way of understanding attention. Many of our eye movements and posture movements can be seen as ways of selecting the nature of the next sensory input. “Action is not so much a response to an input as a neat and efficient way of selecting the next “input”, and thereby driving a rolling cycle.” As the brain seeks certain information (because of uncertainty, the task at hand, or other reasons), it will work harder to solve the error corrections pertaining to that particular information. Action will be driven towards examining the source of that information. Small error corrections may be ignored if they are not relevant to current tasks. This looks like an excellent description of the focus of attention to me.

Conceptually, this implies a striking reversal, in that the driving sensory signal is really just providing corrective feedback on the emerging top-down predictions. As ever-active prediction engines, these kinds of minds are not, fundamentally, in the business of solving puzzles given to them as inputs. Rather, they are in the business of keeping us one step ahead of the game, poised to act and actively eliciting the sensory flows that keep us viable and fulfilled. If this is on track, then just about every aspect of the passive forward-flowing model is false. We are not passive cognitive couch potatoes so much as proactive predictavores, forever trying to stay one step ahead of the incoming waves of sensory stimulation.

The prediction process is also postulated for motor control. We predict the sensory input which will happen during an action; that prediction flows from the top down, and error correction controls the accuracy of the movement. The predicted sensory consequences of our actions cause the actions. “The perceptual and motor systems should not be regarded as separate but instead as a single active inference machine that tries to predict its sensory input in all domains: visual, auditory, somatosensory, interoceptive and, in the case of the motor system, proprioceptive. …This erases any fundamental computational line between perception and the control of action. There remains, to be sure, an obvious (and important) difference in direction of fit. Perception here matches neural hypotheses to sensory inputs, and involves “predicting the present”; while action brings unfolding proprioceptive inputs into line with neural predictions. …Perception and action here follow the same basic logic and are implemented using the same computational strategy. In each case, the systemic imperative remains the same: the reduction of ongoing prediction error.”

This theory is comfortable when I think of conversational language. Unlike much of perception and control of movement, language is conducted more in the light of conscious awareness. It is (almost) possible to have a feel of a prediction of what is going to be said when listening and to only have work to do in understanding when there is a surprise mismatch between the expected and the heard word. And when talking, it is without much effort until your tongue makes a slip and has to be corrected.

I am looking forward to browsing through Open MIND now that I know it exists.

 

Shared attention

Social interaction or communication requires the sharing of attention. If two people are not paying attention to one another then there is no interaction and no communication. Shared attention is essential for a child’s development of social cognition and communication skills. Two types of shared attention have been identified: mutual gaze, when two people face one another and attend to each other’s eyes; and joint attention, when two people look at a third person or object. Joint attention is not the same for both individuals because one initiates it and the other responds.

In a recent paper, researchers studied shared attention (Takahiko Koike et al.; Neural substrates of shared attention as social memory: A hyperscanning functional magnetic resonance imaging study; NeuroImage 125 (2016) 401–412). This cannot be done at an individual level as it involves social exchange, and so the researchers used fMRI hyperscanning. Real-time video recording and projection allowed two individuals in separate scanners to communicate through facial expression and eye movements while they were both being scanned. Previous studies had shown neural synchronization during shared attention and synchronization of eye blinks. They found that it was the task of establishing joint attention, which requires sharing an attentional temporal window, that creates the blink synchrony. This synchrony is remembered in a pair-specific way in social memory.
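A rough illustration of what a blink-synchrony index measures. This toy statistic (a simple coincidence fraction) is mine, not the surrogate-corrected measure the paper actually uses; `window` is a made-up tolerance parameter.

```python
def blink_sync(blinks_a, blinks_b, window=1):
    """Toy synchronization index: the fraction of person A's blink times
    that fall within `window` time steps of one of person B's blinks.
    Returns a value between 0.0 (no coincident blinks) and 1.0."""
    if not blinks_a:
        return 0.0
    hits = sum(1 for a in blinks_a
               if any(abs(a - b) <= window for b in blinks_b))
    return hits / len(blinks_a)

# Two of A's three blinks land within one step of a blink from B.
score = blink_sync([10, 20, 30], [11, 25, 29])
```

A pair whose index stays elevated during later mutual gaze, as in the study, would be showing the pair-specific memory effect described above.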

Mutual gaze is needed to give mutual attention – and that is needed to initiate joint attention which requires a certain synchrony – and finally that synchronizing results in a specific memory of the pair’s joint attention which allows further synchrony during subsequent mutual gaze without joint attention first.

Here is their abstract: “During a dyadic social interaction, two individuals can share visual attention through gaze, directed to each other (mutual gaze) or to a third person or an object (joint attention). Shared attention is fundamental to dyadic face-to-face interaction, but how attention is shared, retained, and neurally represented in a pair-specific manner has not been well studied. Here, we conducted a two-day hyperscanning functional magnetic resonance imaging study in which pairs of participants performed a real-time mutual gaze task followed by a joint attention task on the first day, and mutual gaze tasks several days later. The joint attention task enhanced eye-blink synchronization, which is believed to be a behavioral index of shared attention. When the same participant pairs underwent mutual gaze without joint attention on the second day, enhanced eye-blink synchronization persisted, and this was positively correlated with inter-individual neural synchronization within the right inferior frontal gyrus. Neural synchronization was also positively correlated with enhanced eye-blink synchronization during the previous joint attention task session. Consistent with the Hebbian association hypothesis, the right inferior frontal gyrus had been activated both by initiating and responding to joint attention. These results indicate that shared attention is represented and retained by pair-specific neural synchronization that cannot be reduced to the individual level.”

The right inferior frontal gyrus (right IFG) region of the brain has been linked in other research with: interfacing between self and other; unconscious incorporation of facial expression in self and others; the release from mutual attention; and neural synchronization during social encounters. The right IFG is active both in initiating and in responding to joint attention, and in the synchrony during mutual gaze (when it is present). However, it is unlikely to cause blinking directly. “Neural synchronization of the right IFG represents learned shared attention. Considering that shared attention is to be understood as a complementary action due to its social salience, relevance in initiating communication, and joint action, the present finding is consistent with a previous study by Newman-Norlund et al. who showed that the right IFG is more active during complementary as compared to imitative actions.” Communication, communication, communication!

This fits with the theory that words steer joint attention to things present or absent, concrete or abstract in a way that is similar to the eyes steering joint attention on concrete and present things. Language has harnessed the brain’s mechanisms for joint attention if this theory is correct (I think it is).

 

The brain’s gateway

There have been a few papers lately on the function of the thalamic reticular nucleus (TRN) that characterize it as a filter, a sieve, and a switchboard. The citations and abstracts of four of these papers are below. Francis Crick suggested this function for the TRN many years ago, but until recently it was not possible to demonstrate it because of the anatomy of the TRN.

The thalamus sits at the center of the brain and is connected to the brain stem and spinal cord below, the cerebral hemispheres above and the basal ganglia to the sides. The thalamus is part of almost all the functional processing loops in the brain. In particular, almost all sensory information enters the cortex from the thalamus, and every corner of the cortex sends signals back to the thalamus. When this traffic, the thalamo-cerebral loops, shut down, so does consciousness.

The TRN is a thin layer of neurons that almost entirely covers the thalamus. Because it is so thin and so deep in the brain, it has been difficult to study. New methods have overcome some of these problems.

In effect, all the traffic between the cortex and the thalamus is carried by axons that pass through the TRN, and these axons have little branches that make contact with TRN neurons. In other words, the TRN gets a smell of all the passing signals – it does not interfere with the axons but just spies on them. The TRN neurons are inhibitory, so when a passing signal activates one of them, it will suppress the neuron in the thalamus that is sending or receiving the signal. This action keeps most activity at a low level. During sleep the thalamo-cerebral loops are effectively turned off and sensory information does not reach the cortex. During attention (and multitasking) the TRN reduces distracting signals but not the attended ones. It also seems to control the type of sleep by controlling the types of brain waves in the cortex during sleep. The executive functions of the prefrontal cortex seem to act through the TRN, rather than directly on areas of the cortex, to control attention (steer the spotlight of attention).
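That gain-control arrangement can be sketched in a few lines. This is purely illustrative (my numbers, not measured ones): each relay channel is "spied on" by inhibitory TRN neurons, and channels outside the attentional focus have their gain turned down, while the attended channel passes at full strength.

```python
def trn_gate(inputs, attended, inhibition=0.8):
    """Toy sketch of TRN gating: feedforward inhibition lowers the gain
    of unattended thalamic relay channels; the attended channel is spared.
    `inhibition` is a made-up fraction of signal suppressed (0 to 1)."""
    out = {}
    for channel, signal in inputs.items():
        gain = 1.0 if channel == attended else (1.0 - inhibition)
        out[channel] = signal * gain
    return out

# Attending to vision: the auditory signal is suppressed, not abolished.
relayed = trn_gate({"visual": 1.0, "auditory": 1.0}, attended="visual")
```

Shifting `attended` between channels is the searchlight move; the TRN never blocks the axons themselves, it only changes how strongly each thalamic relay responds.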

Here are the abstracts and citations:

Sandra Ahrens, Santiago Jaramillo, Kai Yu, Sanchari Ghosh, Ga-Ram Hwang, Raehum Paik, Cary Lai, Miao He, Z Josh Huang, Bo Li. ErbB4 regulation of a thalamic reticular nucleus circuit for sensory selection. Nature Neuroscience, 2014; DOI: 10.1038/nn.3897

Selective processing of behaviorally relevant sensory inputs against irrelevant ones is a fundamental cognitive function whose impairment has been implicated in major psychiatric disorders. It is known that the thalamic reticular nucleus (TRN) gates sensory information en route to the cortex, but the underlying mechanisms remain unclear. Here we show in mice that deficiency of the Erbb4 gene in somatostatin-expressing TRN neurons markedly alters behaviors that are dependent on sensory selection. Whereas the performance of the Erbb4-deficient mice in identifying targets from distractors was improved, their ability to switch attention between conflicting sensory cues was impaired. These behavioral changes were mediated by an enhanced cortical drive onto the TRN that promotes the TRN-mediated cortical feedback inhibition of thalamic neurons. Our results uncover a previously unknown role of ErbB4 in regulating cortico-TRN-thalamic circuit function. We propose that ErbB4 sets the sensitivity of the TRN to cortical inputs at levels that can support sensory selection while allowing behavioral flexibility.

Ralf D. Wimmer, L. Ian Schmitt, Thomas J. Davidson, Miho Nakajima, Karl Deisseroth, Michael M. Halassa. Thalamic control of sensory selection in divided attention. Nature, 2015; DOI: 10.1038/nature15398

How the brain selects appropriate sensory inputs and suppresses distractors is unknown. Given the well-established role of the prefrontal cortex (PFC) in executive function, its interactions with sensory cortical areas during attention have been hypothesized to control sensory selection. To test this idea and, more generally, dissect the circuits underlying sensory selection, we developed a cross-modal divided-attention task in mice that allowed genetic access to this cognitive process. By optogenetically perturbing PFC function in a temporally precise window, the ability of mice to select appropriately between conflicting visual and auditory stimuli was diminished. Equivalent sensory thalamocortical manipulations showed that behaviour was causally dependent on PFC interactions with the sensory thalamus, not sensory cortex. Consistent with this notion, we found neurons of the visual thalamic reticular nucleus (visTRN) to exhibit PFC-dependent changes in firing rate predictive of the modality selected. visTRN activity was causal to performance as confirmed by bidirectional optogenetic manipulations of this subnetwork. Using a combination of electrophysiology and intracellular chloride photometry, we demonstrated that visTRN dynamically controls visual thalamic gain through feedforward inhibition. Our experiments introduce a new subcortical model of sensory selection, in which the PFC biases thalamic reticular subnetworks to control thalamic sensory gain, selecting appropriate inputs for further processing.

Laura D Lewis, Jakob Voigts, Francisco J Flores, Lukas I Schmitt, Matthew A Wilson, Michael M Halassa, Emery N Brown. Thalamic reticular nucleus induces fast and local modulation of arousal state. eLife, October 2015 DOI: 10.7554/eLife.08760

During low arousal states such as drowsiness and sleep, cortical neurons exhibit rhythmic slow wave activity associated with periods of neuronal silence. Slow waves are locally regulated, and local slow wave dynamics are important for memory, cognition, and behaviour. While several brainstem structures for controlling global sleep states have now been well characterized, a mechanism underlying fast and local modulation of cortical slow waves has not been identified. Here, using optogenetics and whole cortex electrophysiology, we show that local tonic activation of thalamic reticular nucleus (TRN) rapidly induces slow wave activity in a spatially restricted region of cortex. These slow waves resemble those seen in sleep, as cortical units undergo periods of silence phase-locked to the slow wave. Furthermore, animals exhibit behavioural changes consistent with a decrease in arousal state during TRN stimulation. We conclude that TRN can induce rapid modulation of local cortical state.

Michael M. Halassa, Zhe Chen, Ralf D. Wimmer, Philip M. Brunetti, Shengli Zhao, Basilis Zikopoulos, Fan Wang, Emery N. Brown, Matthew A. Wilson. State-Dependent Architecture of Thalamic Reticular Subnetworks. Cell, 2014; 158 (4): 808 DOI: 10.1016/j.cell.2014.06.025

Behavioral state is known to influence interactions between thalamus and cortex, which are important for sensation, action, and cognition. The thalamic reticular nucleus (TRN) is hypothesized to regulate thalamo-cortical interactions, but the underlying functional architecture of this process and its state dependence are unknown. By combining the first TRN ensemble recording with psychophysics and connectivity-based optogenetic tagging, we found reticular circuits to be composed of distinct subnetworks. While activity of limbic-projecting TRN neurons positively correlates with arousal, sensory-projecting neurons participate in spindles and show elevated synchrony by slow waves during sleep. Sensory-projecting neurons are suppressed by attentional states, demonstrating that their gating of thalamo-cortical interactions is matched to behavioral state. Bidirectional manipulation of attentional performance was achieved through subnetwork-specific optogenetic stimulation. Together, our findings provide evidence for differential inhibition of thalamic nuclei across brain states, where the TRN separately controls external sensory and internal limbic processing facilitating normal cognitive function.

Attention on attention

A recent paper does a magnificent job of marshaling many sources of information on attention and developing a theory to fit those pieces of research. (Timothy J. Buschman, Sabine Kastner. From Behavior to Neural Dynamics: An Integrated Theory of Attention. Neuron, 2015; 88 (1): 127 DOI: 10.1016/j.neuron.2015.09.017). “The brain has a limited capacity and therefore needs mechanisms to selectively enhance the information most relevant to one’s current behavior. We refer to these mechanisms as ‘‘attention.’’ Attention acts by increasing the strength of selected neural representations and preferentially routing them through the brain’s large-scale network. This is a critical component of cognition and therefore has been a central topic in cognitive neuroscience. Here we review a diverse literature that has studied attention at the level of behavior, networks, circuits, and neurons. We then integrate these disparate results into a unified theory of attention.”

They concentrate on visual attention because that is where most of the research has been done. Recent work has pointed to the visual cortex creating a ‘dictionary’ of objects and object features through learning. The learning process captures the regularities of the world, and visual representations are coded in this ‘dictionary’. “Importantly, embedding object-based representations will ensure that the system is tolerant to noise as any input will be transformed by the learned object dictionary: signals that match an expected pattern will be boosted, while signals that are orthogonal to representations in the dictionary will be ignored. As the dictionary has been trained to optimally represent the world, this means the system will, in effect, perform pattern completion, settling on nearby ‘‘known’’ representations, even when provided with a noisy input.” These representations are what top-down and bottom-up attention controls act on.

Their theory proposes a processing cascade with a regular reset.

(1) Attention can either be (a) automatically grabbed by salient stimuli or (b) guided by task representations in frontal and parietal regions to specific spatial locations or features.

(2) The pattern-completion nature of sensory cortex sharpens the broad top-down attentional bias, restricting it to perceptually relevant representations. Interactions with bottom-up sensory drive will emphasize specific objects.

(3) Interneuron-mediated lateral inhibition normalizes activity and, thus, suppresses competing stimuli. This results in increased sensitivity and decreased noise correlations.

(4) Lateral inhibition also leads to the generation of high-frequency synchronous oscillations within a cortical region. Inter-areal synchronization follows as these local oscillations synchronize along with the propagation of a bottom-up sensory drive. Both forms of synchrony act to further boost selected representations.

(5) Further buildup of inhibition acts to ‘‘reset’’ the network, thereby restarting the process. This reset allows the network to avoid being captured by a single stimulus and allows a positive-only selection mechanism to move over time.
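The five steps can be caricatured in a toy simulation. This is only an illustrative sketch, not the authors’ model: the two-pattern ‘dictionary’, the weights, and the reset threshold are all invented for the demo, and the oscillatory step (4) is not modeled.

```python
import numpy as np

# Toy sketch of the attention cascade (illustrative only; the stored
# patterns, weights, and reset threshold are invented for this demo).

dictionary = np.array([[1.0, 0.0, 0.0],    # learned "object" patterns
                       [0.0, 1.0, 0.0]])

def cascade_step(sensory, top_down, inhibition):
    # (1) bottom-up salience combined with top-down task bias
    drive = sensory + top_down
    # (2) pattern completion: project onto the learned dictionary, so
    #     signals matching a stored pattern are kept while orthogonal
    #     "noise" components are discarded
    completed = dictionary.T @ (dictionary @ drive)
    # (3) lateral inhibition as divisive normalization, suppressing
    #     competing representations
    activity = completed / (1.0 + completed.sum())
    # (5) inhibition builds up; past a threshold the network resets,
    #     freeing it to select a new stimulus
    inhibition += activity.max()
    if inhibition > 1.0:
        activity = np.zeros_like(activity)
        inhibition = 0.0
    return activity, inhibition

sensory = np.array([0.6, 0.5, 0.4])    # two overlapping stimuli plus noise
top_down = np.array([0.0, 0.4, 0.0])   # task bias toward the second pattern
inhibition = 0.0
for t in range(5):
    activity, inhibition = cascade_step(sensory, top_down, inhibition)
    print(t, activity.round(3))
```

In this sketch the task-biased representation wins even though the raw salience of the first stimulus is slightly higher, and the periodic reset keeps any one winner from capturing the network permanently.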


The center of the universe

When we are conscious we look out at the world through a large hole in our heads between our noses and our foreheads, or so it seems. It is possible to pinpoint the exact place inside our heads which is the ‘here’ to which everything is referenced. That spot is about 4-5 centimeters behind the bridge of the nose. Not only sight but also hearing, touch, and the feelings from inside our bodies are experienced as being some distance, in some direction, from that spot. As far as we are concerned, we carry the center of the universe around in our heads.

Both our sensory system and our motor system use this particular three-dimensional arrangement centered on that particular spot, and so locations are the same for both processes. How, why and where in the brain is this first-person, ego-centric space produced? Bjorn Merker has a paper in a special topic issue of Frontiers in Psychology, Consciousness and Action Control (here). The paper is entitled “The efference cascade, consciousness and its self: naturalizing the first person pivot of action control”. He believes the evidence points to the roof of the mid-brain, the superior colliculus.

If we consider the center of our space, then attention is like a light or arrow pointing from the center to a particular location in that space and what is in it. That means that we are oriented in that direction. “The canonical form of this re-orienting is the swift and seamlessly integrated joint action of eyes, ears (in many animals), head, and postural adjustments that make up what its pioneering students called the orienting reflex.”

This orientation has to occur before any action directed at the target, or any examination of the point of interest by our senses: first the orientation, then the focus of attention. But how does the brain decide which possible focus of attention to orient towards? “The superior colliculus provides a comprehensive mutual interface for brain systems carrying information relevant to defining the location of high priority targets for immediate re-orienting of receptor surfaces, there to settle their several bids for such a priority location by mutual competition and synergy, resulting in a single momentarily prevailing priority location subject to immediate implementation by deflecting behavioral or attentional orientation to that location. The key collicular function, according to this conception, is the selection, on a background of current state and motive variables, of a single target location for orienting in the face of concurrent alternative bids. Selection of the spatial target for the next orienting movement is not a matter of sensory locations alone, but requires access to situational, motivational, state, and context information determining behavioral priorities. It combines, in other words, bottom-up ‘salience’ with top-down ‘relevance.’”
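Merker’s picture of competing bids settling into a single prevailing priority location can be sketched as a tiny winner-take-all computation. This is only an illustration: the locations, the numeric values, and the weighting parameter are all invented for the demo, not taken from the paper.

```python
# Toy winner-take-all over a collicular-style priority map (illustrative;
# locations, values, and the weighting are invented for this demo).
salience  = {"left": 0.9, "ahead": 0.2, "right": 0.4}   # bottom-up: sudden motion on the left
relevance = {"left": 0.1, "ahead": 0.8, "right": 0.3}   # top-down: the task goal lies ahead

def next_orienting_target(salience, relevance, w_top_down=0.6):
    # Bids from every source converge on one map; mutual competition
    # leaves a single prevailing location for the next orienting movement.
    priority = {loc: (1 - w_top_down) * salience[loc] + w_top_down * relevance[loc]
                for loc in salience}
    return max(priority, key=priority.get)

print(next_orienting_target(salience, relevance))        # goal-driven weighting
print(next_orienting_target(salience, relevance, 0.1))   # salience dominates
```

Shifting the single weight between the two maps moves the outcome from goal-driven orienting to capture by the salient event, which is the “bottom-up salience with top-down relevance” combination in miniature.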

We are provided with the illusion that we sit behind our eyes and experience the world from there and from there we plan and direct our actions. A lot of work and geometry that we are unaware of goes into this illusion. It allows us to integrate what we sense with what we do, quickly and accurately.

 

A new way to parse language

For many years I have followed EB Bolles’ blog Babel’s Dawn (here), where he discussed the origin of human language. He has convinced me of many things about the history and nature of language, and they fit with how I already thought of it. Now he has written a chapter in a book, “Attention and Meaning: The Attentional Basis of Meaning”. In his chapter, “Attentional-Based Syntax” (here), Bolles re-writes the mechanics of parsing phrases and sentences. He uses new entities, not nouns and verbs and the like, and very different rules.

The reasons I like this approach so much are the same reasons I cannot accept Chomsky’s view of language. I see language from a biological point of view: a product of genetic and cultural evolution, continuous with the communication of other animals. It is a type of biological communication. I imagine (rightly or wrongly) that Chomsky finds biology, and especially animals, distasteful and that he also has no feel for the way evolution works. I, on the other hand, find a study of language that seems to deal only with complete written sentences on a whiteboard not of much interest. Instead of a form of biological communication, Chomsky gives us a form of logical thought.

Bolles summarizes his chapter like this. “The commonsense understanding of meaning as reference has dominated grammatical thought for thousands of years, producing many paradoxes while leaving many mysteries about language’s nature. The paradoxes wane if we assume that meaning comes by directing attention from one phenomenon to another. This transfer of meaning from objective reality to subjective experience breaks with the objective grammatical accounts produced by many philosophers, lexicographers, and teachers through the ages. The bulk of this paper introduces a formal system for parsing sentences according to an attention-based syntax. The effort proves surprisingly fruitful and is capable of parsing many sentences without reference to predicates, nouns or verbs. It might seem a futile endeavor, producing an alternative to a system used by every educated person in the world, but the approach explains many observations left unexplained by classical syntax. It also suggests a promising approach to teaching language usage.”

The key change of concept is that words do not have meanings, nor do they carry meaning from a speaker to a listener; instead, they pilot attention within the brain. Or, in other words, they work by forcing items into working memory and therefore attention (or into attention and therefore working memory). This makes very good sense. Take a simple word like ‘tree’: the speaker says ‘tree’, the listener hears ‘tree’, and memory automatically brings to the surface memories associated with ‘tree’. The word ‘tree’ is held in working memory, and as long as it is there the brain has recall or near recall of tree-ish concepts, images and ideas. The meaning of tree is found within the listener’s brain. No one thing, word or single element of memory has meaning; meaning is formed when multiple things form a connection. It is the connections that give meaning. I like this because I have thought for years that single words are without meaning. But words form a network of connections in any culture, and a word’s connections in the network are what define it. Because we share cultural networks, including a language, we can communicate. I also like this starting point because it explains why language is associated with consciousness (an oddity, because very little else to do with thinking is so closely tied to consciousness). Consciousness is associated with working memory and attention, and the content of consciousness seems to be (or come from) the focus of attention in working memory.
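The ‘tree’ example can be caricatured in code: each word loads its associated memories into a limited working-memory buffer, and the ‘meaning’ of the utterance is the web of connections among whatever is co-active. The lexicon, the capacity limit, and the function names are all invented for illustration; this is not Bolles’ formal system.

```python
# Highly simplified sketch of the idea that words pilot attention:
# each word loads associated memories into a small working-memory buffer,
# and "meaning" emerges from the connections among what is co-active.
# The lexicon and capacity limit are invented for this demo.
from collections import deque

lexicon = {  # invented associations standing in for a hearer's memory
    "tree":  {"leaves", "trunk", "shade"},
    "tall":  {"height", "above"},
    "falls": {"motion", "down", "event"},
}

def attend(utterance, capacity=4):
    working_memory = deque(maxlen=capacity)  # attention holds only a few items
    for word in utterance.split():
        working_memory.append((word, lexicon.get(word, set())))
    # a "bound perception": the union of everything currently co-active
    bound = set()
    for _, assoc in working_memory:
        bound |= assoc
    return bound

print(sorted(attend("tall tree falls")))
```

Notice that no single entry carries the meaning: it is the momentary overlap of associations in the buffer, which is why the same word can contribute differently in different utterances.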

Bolles uses a particular vocabulary in his parsing method: a phenomenon is any conscious experience; a sensation is a minimal awareness, like a hue or tone; a percept is a group of sensations, like a loud noise; and a bound perception is a group of percepts that form a unified experience. We could also say phenomenon is another word for subjective consciousness. Then we have the process of perception. Perception starts with primary sensory input, memory and predictions. It proceeds to bind elements together to form a moment of perception; then serial momentary perceptions are bound into events. It matters little what words are used; the process is fairly well accepted. What is more, it is not confined to how language is processed: it is how everything that passes through working memory and into the content of consciousness is processed. No magic here! No mutation required! Language uses what the brain does more or less naturally.

This also makes the evolution of language easier to visualize. The basic mechanism already existed in the way that attention, working memory and consciousness work. It was harnessed by a communication function, and that function drove the evolution of language: both biological evolution and a great deal of cultural evolution. This evolution could be slow and steady over a long period of time and does not have to be the result of a recent (only 50-150 thousand years ago) all-powerful single mutation.

So the new method of parsing is essentially to formulate the rules that English uses to bind focuses of attention together to make a meaningful event (or bound perception). Each language would have its own syntax rules. The old syntax rules and the new ones are similar because both describe English. But the rules are no longer arbitrary; they are understandable in the context of working memory and attention. Gone is the feeling of memorizing rules to parse sentences on a whiteboard; instead there is an understanding of English as it is used.

I have to stick in a little rant here about peeves. If someone can understand, without effort or mistake, what someone else has said, then what is the problem? Why are arbitrary rules important if breaking them does not interfere at all with communication? With the new parsing method it is easy to see what is good communication and what isn’t; it is clear what will hinder communication. The method can be used to improve perfectly good English into even better English. Another advantage is that the method can be used for narratives longer than a sentence.

I hope that this approach to syntax will be taken up by others.