Where social interaction meets spatial information

Positive social interactions are often rewarding to humans. It is hypothesized that successful social interactions engage the same reward-processing circuitry that underlies the pleasure we derive from food or money. Studies in rodents show that social interactions trigger the release of dopamine, the neurotransmitter often associated with pleasure, in the nucleus accumbens (NAc), a small region buried deep in the brain (Fig. 1). An interesting observation from rodent studies is that mice often return to the same area where they previously met another mouse. Before the advent of cell phones and Facebook (and perhaps even now), we probably did, and do, the same. If you met interesting people at a bar, you are probably more likely to go back to that same bar. But what are the neural circuits that form the association between social interaction and spatial location?

Read More

A map of whatever

The hippocampus is perhaps the most well-known brain region among neuroscientists, not only for its beautiful name (Latin for seahorse), but also for its critical role in learning and memory. Decades ago, a landmark discovery showed that hippocampal neurons seem to encode space1. That is, individual hippocampal neurons fire only when an animal moves into a specific spot in its environment.

Read More

With every breath

Most of us do not spend much time thinking about breathing (now you are :D). This is because our autonomic nervous system keeps its control hidden beneath our consciousness. But breathing is not as effortless as it seems. For one, air pressure and oxygen levels change from time to time. Also, illnesses such as the common cold and the flu often disturb the flow of air through our airways. For reasons like these, we often have to modulate the rate and depth of our breaths. A pre-programmed breathing rhythm therefore does not suffice: breathing also requires constant monitoring and feedback.

Read More

Sleeping on the Wing

There is an old Monty Python skit where John Cleese and Graham Chapman play airplane pilots. Presumably on a long, tedious flight, they are clearly bored and keen on amusing themselves at the expense of their passengers.

They find entertainment through relaying worrisome, nonsensical messages. Cleese begins their prank with the truism, "Hello, this is your captain speaking. There is absolutely no cause for alarm." And after some internal discussion about what there should be no cause for alarm about, they add: "The wings are not on fire." The messages get more ridiculous, and hilarity (at least for the pilots) ensues.

Read More

Thinking Fast and Slow about Thirst

Of all motivational states, thirst ought to be a simple one to understand. One feels thirsty when one is dehydrated, which can be detected from blood volume and osmolarity. Drinking water rehydrates the body and quenches thirst. This is a homeostatic model. Intuitive, right? Well, the strange thing about thirst is that it is quenched within seconds to minutes of drinking water, which is too fast for any changes in the blood to occur. It is as if the brain gets hydrated before the body, which makes little sense, since there is no specialized canal that passes water from mouth to brain (thank goodness). On the other hand, the buildup of the thirst drive is usually rather slow, meaning that the thirst state can change on both fast and slow timescales. How does this work?

Read More

The touch of a fly

Our sense of touch has an innate connection with our emotions. Gentle touch is soothing not only for us but also for other animals. For example, in classic experiments from the 1950s, the psychologist Harry Harlow found that an infant monkey raised with two inanimate surrogate mothers, a wire one that provided food and another covered in soft cloth, spends more time cuddling with the cloth mother1. When scared, the infant monkey also goes to the cloth mother for protection. Clearly, there is a special pathway that links touch sensation to the depths of animal instinct. Working out this pathway requires knowledge of the neural circuitry that processes touch.

Read More

How falling off a horse led to discovering the opiate receptor

“Any way you can make love, somebody’s already thought of. Any crazy caper you can get up to, any great meal you can think of, any combination of children or idea of how to raise them – somebody’s already thought of. But nobody’s ever discovered an opiate receptor before.”

- Candace Pert1

 

Shortly before starting graduate school in pharmacology at Johns Hopkins University in 1970, Candace Pert broke her back in a riding accident2. She took morphine to treat the pain, and subsequently became curious as to how this miraculous drug acted to produce such profound analgesic effects.

Read More

What does cocaine do in the brain?

Not all drugs can completely change who we are; cocaine is one of the few with this power. Like many other psychoactive drugs, cocaine was first used as an anesthetic, but its effects on the mind and will were soon discovered and came to overshadow its original use. Cocaine’s power does not lie in the molecule itself, but rather in its interaction with the brain’s reward system (see a previous TBT post on the discovery of this system).

Read More

The winter blues: Is it all in your head?

“February is my favorite month,” said no one living in Boston ever. The short days, cold temperatures, and repetitive snow really throw a dagger (presumably made of ice) into good times. I tend to think of Dec-Feb as my hibernating months; I am more lethargic, less motivated, and my fiancé and labmates can vouch for the fact that I am slightly more irritable than the good-natured, loving person I always am in better weather. I’ve come to attribute my noticeable seasonal downswing to Seasonal Affective Disorder, or SAD (an acronym that ironically makes me quite happy), a self-diagnosis I probably made after seeing a commercial. Being the curious graduate student that I am, I decided to do a little research on the subject and see what I could learn—really trying to go above and beyond what pharmaceutical advertising taught me.

Read More

Tagging a snapshot of life with prions

“As you know, in most areas of science, there are long periods of beginning before we really make progress.” – Eric Kandel

In a typical maze experiment, a hungry rat enters a moderately complicated maze, in which it does its best to find a “reward room” containing food. After some wrong guesses, the rat finds its way, consumes the food, and is returned to the entrance of the maze. From then on, the rat makes fewer bad guesses and finds the food faster on each round. Eventually, it completely masters the maze layout and takes the perfect route every time. To explain this improvement, scientists coined the term reward reinforcement, which essentially suggests that the reward the rat collects at the end reinforces its correct choices, until it eventually learns a perfect route. This model may sound simple, but is it?

The rat collects its food in the reward room, not during navigation or decision-making. Therefore, the brain’s reward system does not turn on until the end of each run. How, then, does the reward system backtrack in time and selectively reinforce choices made in the past? If we could look into the rat’s brain and watch the activity of its synapses (the connections between neurons), we would see, as the rat navigates the maze, millions of synapses flashing on and off while millions more remain silent. How does the brain keep track of this enormous, evolving constellation of synaptic activity, so that only the appropriate synapses are tuned up or down by the reward at the end?

One way to solve this problem is to have a logging system in which individual synapses log their activity over time, like hourglasses that are set running (flipped to let sand through) only when their corresponding synapses are active. In this way, when the reward arrives, the recently active synapses can be singled out by reading the hourglasses. This idea is called synaptic tagging.
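
To make the hourglass picture concrete, here is a toy sketch of such a logging system in Python: each synapse carries a decaying activity tag, an “eligibility trace” in the language of reference 7, and a reward delivered at the very end strengthens only the synapses whose tags are still running. Every name and number below is invented for illustration; this is a cartoon of the idea, not any lab’s actual model.

    # Toy "hourglass" logging: each synapse keeps a decaying activity
    # tag (an eligibility trace), and a reward delivered later boosts
    # only the synapses whose tags have not yet run out.
    import numpy as np

    rng = np.random.default_rng(0)
    n_synapses = 10
    weights = np.ones(n_synapses)
    tags = np.zeros(n_synapses)          # the hourglasses
    decay = 0.9                          # sand runs out each time step
    learning_rate = 0.1

    for t in range(20):                  # the "maze run"
        active = rng.random(n_synapses) < 0.2   # a few synapses fire
        tags[active] = 1.0                      # flip their hourglasses
        tags *= decay                           # sand keeps draining

    reward = 1.0                         # food, delivered only at the end
    weights += learning_rate * reward * tags    # recently active synapses win
    print(weights.round(3))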

What is the nature of this activity tag? One strong candidate arises from work by Kausik Si (pronounced “See”) and Nobel laureate Eric Kandel: an RNA-binding protein called CPEB (not to be confused with CREB, a different protein that is also important for memory). CPEB is turned on in active synapses and is necessary for maintaining long-term memory in several animal species1,2.


How does CPEB tag synaptic activity? Si and Kandel’s answer initially flabbergasted the entire neuroscience community: it is a prion. Prions are most famous for causing the spongy brain degeneration of mad cow disease. Many proteins can behave as prions, as long as they spontaneously form strong aggregates (i.e. clumps) and expand those aggregates by converting single proteins into the self-aggregating form. Proteins involved in several neurodegenerative diseases, such as Alzheimer’s and Parkinson’s, have recently been proposed to resemble prions. The CPEB aggregates, however, are not harbingers of disease but rather the functional essence of the protein. The aggregation of CPEB happens only in active synapses, and blocking the aggregation, either by antibody neutralization or by mutation of the gene, reduces synaptic plasticity and the animal’s ability to form long-term memories3–5.
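
The self-templating logic that makes a prion switch-like is easy to simulate. Below is a deliberately minimal sketch, with arbitrary numbers, of the conversion process described above: free protein is converted into the aggregated form at a rate proportional to how much aggregate already exists, so a tiny seed eventually flips the whole pool.

    # Minimal self-templating sketch: aggregate converts free protein
    # at a rate proportional to the amount of aggregate present.
    free, aggregated = 100.0, 0.01       # a large pool and a tiny seed
    conversion_rate = 0.05

    for step in range(400):
        converted = conversion_rate * aggregated * (free / 100.0)
        free -= converted
        aggregated += converted

    print(f"free: {free:.1f}, aggregated: {aggregated:.1f}")
    # Nearly the whole pool has flipped into the aggregated,
    # self-propagating state: a biochemical switch.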

Initial insights into the function of CPEB came from the observation that its aggregate binds RNA4. A recent paper from the Si group proposes that aggregation promotes memory formation by switching CPEB’s function from suppressing protein translation to promoting it6. This switch depends on CPEB’s ability to recruit other protein complexes that modify the stability of RNAs.

Is CPEB the only mechanism for synaptic tagging? Probably not. Studies of CPEB indicate that it is essential for long-term memory on timescales of hours to days. However, tagging synapses on shorter timescales is also needed to explain problems such as the delayed reinforcement described above. There is strong evidence that synapses can be tagged and tuned in concert within a much shorter timeframe, although scientists do not yet know how this happens7. When that mystery is solved, another gasp of surprise will reverberate across the neuroscience community.

 

References:

  1. Si, K. et al. A neuronal isoform of CPEB regulates local protein synthesis and stabilizes synapse-specific long-term facilitation in Aplysia. Cell 115, 893–904 (2003).
  2. Keleman, K., Krüttner, S., Alenius, M. & Dickson, B. J. Function of the Drosophila CPEB protein Orb2 in long-term courtship memory. Nat. Neurosci. 10, 1587–93 (2007).
  3. Majumdar, A. et al. Critical role of amyloid-like oligomers of Drosophila Orb2 in the persistence of memory. Cell 148, 515–29 (2012).
  4. Si, K., Lindquist, S. & Kandel, E. R. A Neuronal Isoform of the Aplysia CPEB Has Prion-Like Properties. Cell 115, 879–891 (2003).
  5. Si, K., Choi, Y.-B., White-Grindley, E., Majumdar, A. & Kandel, E. R. Aplysia CPEB Can Form Prion-like Multimers in Sensory Neurons that Contribute to Long-Term Facilitation. Cell 140, 421–435 (2010).
  6. Khan, M. R. et al. Amyloidogenic Oligomerization Transforms Drosophila Orb2 from a Translation Repressor to an Activator. Cell 163, 1468–1483 (2015).
  7. He, K. et al. Distinct Eligibility Traces for LTP and LTD in Cortical Synapses. Neuron 88, 528–538 (2015).

 

 

Zero degrees of separation

How connectomics is revealing the intricacies of neural networks, an interview with Josh Morgan

 

[Image: 3D electron microscopy]

On October 1st, 2015, the Human Genome Project (HGP) celebrated its 25th birthday. Six long years of planning and debating preceded its birth in 1990, and at the young age of 10 the HGP fulfilled its promise by providing us with a ‘rough draft’ of the genome. In 2012, 692 collaborators published in Nature the sequences of 1092 human genomes1. All of this happened a mere half-century after Watson and Crick first described the double-stranded helix of DNA. In retrospect, genomics has had a surprisingly quick history, but by the numbers it was an effort of epic proportions, and a hotly debated one. The promise of a complete sequence of the human genome was thrilling, but many were concerned. Some argued that the methods were unreliable or even unfeasible, others worried that a single genome couldn’t possibly represent the spectrum of human diversity, and yet others thought the task was overly ambitious and too costly in time and money.

Nevertheless, in the early 2000s genomics was taking over the scientific world, and in its trail support was growing for the other -omics: proteomics, metabolomics, and, last but not least, connectomics. The connectome is a precise, high-definition map of the brain: its cells and the connections between them. While human connectomics uses fMRI (functional magnetic resonance imaging) and EEG (electroencephalography) to define neural connections, electron microscopy (EM) is leading the way in generating detailed 3D images of the brains of model organisms (C. elegans, Drosophila, and mouse) at nanometer resolution. Connectomics divided the scientific community into supporters and skeptics, and many of the arguments echoed the debate over the HGP in the late 1980s.

In 2013, Josh Morgan and Jeff Lichtman addressed head-on the main criticisms against mouse connectomics2, arguing that obtaining a complete map of the brain would provide information about the structure of circuits that would be otherwise unattainable. Several labs embarked on an odyssey to fulfill the potential of 3D EM in the mouse brain, and the last few years have seen a rapid succession of improvements to the complex, multistep method. Put simply, the procedure consists of fixing a piece of tissue, slicing it (at 29 nm thickness), imaging the sequence of sections, and combining all the high-resolution images into tiled stacks. This process has been sped up to take approximately 100 days for 2000 slices of a square millimeter. At that point the digital representation of the cube of tissue still needs to be segmented, and the cells within it traced, before any analysis can be done on the dataset.
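
A back-of-envelope calculation shows why even a single cubic millimeter is daunting. The 29 nm section thickness comes from the text above; the ~4 nm pixel size is my assumption, in the range such studies report, so treat the result as order-of-magnitude only.

    # Order-of-magnitude data volume for 1 mm^3 of brain at EM resolution.
    # 29 nm sections from the text; 4 nm pixels are an assumed value.
    voxel_nm3 = 4 * 4 * 29               # one voxel, in nm^3
    mm3_in_nm3 = (1e6) ** 3              # 1 mm = 1e6 nm
    n_voxels = mm3_in_nm3 / voxel_nm3
    petabytes = n_voxels * 1 / 1e15      # 1 byte per grayscale voxel

    print(f"{n_voxels:.1e} voxels, ~{petabytes:.1f} PB of raw images")
    # -> about 2e+15 voxels, i.e. roughly 2 petabytes per mm^3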

A monumental amount of work, yet a flurry of studies presenting reconstructed cubes of mouse brain tissue has already been published. Most notable this year are the work on retinal tissue from the Max Planck Institute for Neurobiology4 and that of Jeff Lichtman’s lab at Harvard University3, which published the complete reconstruction of a small (1500 μm3) volume of neocortex. Once obtaining such a large high-resolution dataset is no longer a limiting factor, what can it tell us about brain connectivity?

I had the pleasure of seeing the potential of 3D EM during a talk that Josh Morgan (a postdoctoral fellow in the Lichtman lab) gave at the Center for Brain Science at Harvard University. He showed his work on scanning, segmenting, and analyzing a piece of tissue from the mouse dLGN (dorsal lateral geniculate nucleus). Afterwards, he answered some questions about his work in the growing field of connectomics.

Can you briefly list your findings in the dLGN that you think are most representative of the kind of unique discoveries 3D EM allows us to make?

The big advantage of large scale EM is that you can look at many interconnected neurons in the same piece of tissue. At the local level, we could use that ability to find out which properties of retinal ganglion cell synapses were determined by the presynaptic neurons vs. which were determined by the postsynaptic cell. At the network level, we found that the dLGN was not a simple relay of parallel channels of visual information. Rather, channels could mix together and split apart. My favorite result from the LGN project so far was seeing a cohort of axons innervate the dendrite of one thalamocortical cell and then jump together onto a dendrite of a second thalamocortical cell to form synapses. It is that sort of coordination between neurons that I think is critical to understanding the nervous system and that is extremely difficult to discover without imaging many cells in the same tissue.

You showed some really nice analyses that are starting to chip away at the vast dataset you created. Is it becoming more challenging to identify overarching patterns, or to synthesize findings? When datasets expand to include tissue from multiple animals, will it become more challenging to do statistical analyses on them?

There was a critical point in my analysis, after I had traced a network of hundreds of neurons, where it was no longer possible for me to clearly see the organization of the network I had mapped. In that case, it was using a spring force model to organize all the cells into a 2D space that made the network interpretable again. I think visualization tools that make complex data transparent to the biologist studying them are essential to the process. As cellular network data becomes more common, I hope that some [visualization tools] for standard quantitative measures of synaptic network organization will emerge. For now, I think the main goal is giving neuroscientists as complete a view as possible of the circuits that they are studying.
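
For readers curious what such a spring-force (force-directed) layout looks like in practice, here is a minimal sketch using the networkx library on a toy synaptic network; Morgan’s actual tooling is not specified, so the library choice and cell names are mine.

    # Force-directed ("spring") layout of a toy synaptic network using
    # networkx; the library choice and cell names are illustrative.
    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([
        ("RGC_1", "TC_1"), ("RGC_2", "TC_1"),   # a cohort of retinal axons...
        ("RGC_1", "TC_2"), ("RGC_2", "TC_2"),   # ...shared by two thalamic cells
        ("RGC_3", "TC_3"),                      # a separate channel
    ])

    # Edges act as springs and nodes repel, so densely interconnected
    # cells end up near each other in the 2D embedding.
    pos = nx.spring_layout(G, seed=42)
    for cell, (x, y) in pos.items():
        print(f"{cell}: ({x:+.2f}, {y:+.2f})")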

For comparison between individuals, it would be convenient if each neural circuit could be divided into finer and finer gradations of cell types until each grouping was completely homogeneous. In that case, comparing individuals just means identifying the same homogeneous groups of neurons in each animal. However, the dLGN data suggests that the brain has not been that considerate and instead can mix and match connectivity and cellular properties in complicated ways. To some extent, it might be possible to replace the list of stereotyped cell subtypes with a list of behaviors that cells of broad classes can perform under various conditions. However, I don’t think you can get around the fact that studying a less predictable system is going to be more difficult and doesn’t lend itself to statistical shortcuts. In particular, if everything is connected to everything, at least by weak connections, then relying on P values will tend to generate lots of false positives. That is, if your test is sensitive enough and you check enough times, wiggling any part of the network will give you a statistically significant effect in any other part of the network.
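
Morgan’s caution about P values is easy to demonstrate in simulation: give every “connection” a weak but real effect, make the experiment sensitive enough, and nearly every test comes out significant. The numbers below are made up purely to illustrate the statistical point.

    # If everything is weakly connected and the test is sensitive
    # enough, nearly every comparison comes out "significant".
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    tiny_effect = 0.05        # weak but real coupling (arbitrary units)
    n_trials = 10_000         # a very sensitive experiment

    significant = 0
    for _ in range(100):      # 100 different "parts of the network"
        baseline = rng.normal(0.0, 1.0, n_trials)
        wiggled = rng.normal(tiny_effect, 1.0, n_trials)
        _, p = stats.ttest_ind(baseline, wiggled)
        significant += p < 0.05

    print(f"{significant}/100 tests significant at p < 0.05")
    # Most tests pass, even though the effects are too small to matter.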

Technically speaking, you and your colleagues have been working for years on optimizing the method published this summer in Cell3. Can you foresee ways to improve it or speed it up? What remain the main challenges and drawbacks?

Basically, we would like to acquire larger volumes (intact circuits) and trace more of the volumes that we acquire (more cells). My current dataset was acquired about an order of magnitude faster than the previous dataset published in Cell, and we now have a microscope that can acquire images more than an order of magnitude faster than the scope I used. That leaves us with the growing problem of large-scale data management and segmentation. It isn’t necessary to analyze every voxel of a dataset in order to get interesting biology (I have only used 1% of my voxels for my first LGN project). However, we all have the goal of eventually automatically segmenting every cell, synapse, and bit of ultrastructure in our 3D volumes. The people in Hanspeter Pfister’s lab have made significant progress in improving automated segmentation algorithms, but generating fast, error-free automatic segmentations is going to be a long-term computer vision challenge.

A paper came out this summer5 discussing the artifacts seen with EM in chemically fixed tissue, as opposed to cryo-fixed tissue. Will these findings impact the methods of the Lichtman lab, and do you think they are relevant to the value of your dataset?

The critical piece of data for us is connectivity, so, to the extent that cryo fixation makes connectivity easier to trace, it is a better technique. However, it is difficult to perform cryo fixation on large tissue volumes (>200 μm). The alternative is to use cryo fixation as a guide to improving our chemical fixation techniques. For instance, preservation of extracellular space, one of the major benefits of cryo fixation, can also be achieved with some chemical fixation protocols.

Bibliography

  1. The 1000 Genomes Project Consortium (2012) An integrated map of genetic variation from 1092 human genomes, Nature, 491, p 56-65.
  2. Morgan JL & Lichtman JW (2013) Why not connectomics?, Nature Methods, 10(6), p 494-500.
  3. Kasthuri N et al. (2015) Saturated reconstruction of a volume of neocortex, Cell, 162(3), p 648-661.
  4. Berning et al. (2015) SegEM: Efficient image analysis for high-resolution connectomics, Neuron, 87(6), p 1193-1206.
  5. Korogod et al. (2015) Ultrastructural analysis of adult mouse neocortex comparing aldehyde perfusion with cryo fixation, eLife, 4:e05793.

[Throwback Thursdays] The rat that became addicted to shocking its brain

All addictive substances exert their effects by harnessing the powerful reward system that normally guides an animal’s behavior. One simplistic way to build a reward system is to have neurons that carry positive and negative values. These reward centers would then be the primary targets of addictive substances, and indeed they have been shown to be. To find these reward centers, researchers must devise a way to stimulate a brain region and ask the animal how it likes it. How can one do this?

[Image: a rat pressing a lever to stimulate its own brain]

Olds and Milner did exactly that, in 19542. They made a pivotal assumption: if stimulating a brain region is rewarding, the animal will want more of it. One can therefore hand the switch that stimulates the animal’s brain region over to the animal itself and see how much it presses. At first, an animal may deliver the stimulation only by accident here and there, but if the stimulation is rewarding, the animal will soon learn to press the switch more and more, forgetting even to eat or sleep. Using this ingenious method, Olds and Milner found that stimulating a region in the middle of the brain called the septum (now known to be important for stress and mood regulation) is highly addictive. Over the course of 12-hour experiments, the rats stimulated their septa incessantly, at rates of 6-30 times a minute. The self-stimulation stopped when the researchers set the stimulation voltage to 0, suggesting that this unnatural obsession was nevertheless goal-directed.

[Image: record of 1000 lever presses]

The legacy of self-stimulation did not stop there. By surveying brain regions more systematically and changing the stimulation method from electrical to chemical (e.g. cocaine), successive generations of researchers have built a comprehensive map of addiction in the brain3.

 

References:

  1. Olds, J. Self-Stimulation of the Brain. Science. 127, 315–324 (1958).
  2. Olds, J. & Milner, P. Positive reinforcement produced by electrical stimulation of septal area and other regions of rat brain. J. Comp. Physiol. Psychol. 47, 419–27 (1954).
  3. Lüscher, C. & Malenka, R. C. Drug-evoked synaptic plasticity in addiction: from molecular changes to circuit remodeling. Neuron 69, 650–63 (2011).

Toward a Molecular Lego Kit for Engineering Specialized Channels

“What I cannot create, I do not understand.” — Richard Feynman (1988)

An organism’s ability to sense the world ultimately relies on specialized proteins in its sensory neurons, which probe the external world on behalf of the entire organism. Channels, a group of proteins that act as gatekeepers of ions, are often delegated to the front end of this job. As a result, highly specialized channels, such as those that sense odors, temperature, and even touch, have evolved in all corners of the living world. Over the years, the genetic identities of many such channels have been demystified. Our current challenge lies in pinpointing the nanoscopic means by which they sense the world. To achieve this goal, an inevitable step is to locate the intramolecular modules (often referred to as domains) that grant channels their special ability to sense the environment. Several remarkable studies in recent years have made significant progress on this problem.

[Image: the KcsA potassium channel1]

Only a few years ago, in 2011, the Sternson group exploited the modularity of specialized domains to engineer new ligand-gated channels, which they called PSAMs2. First, they made the critical observation that ligand-gated (i.e. molecule-sensing) ion channels can be divided into two largely independent domains: the ligand-binding domain and the ion channel domain. By screening candidate mutations in the ligand-binding domain of a starter channel, they were able to make the channel lose its innate affinity for its natural ligand and acquire a preference for a synthetic molecule. By transplanting this new ligand-binding domain onto other excitatory and inhibitory ion channel domains, the Sternson group successfully created novel excitatory and inhibitory channels. These channels specialize in binding synthetic ligands that never occur in biological systems, and they are now used as tools to manipulate neuronal activity.

Recent work by researchers in the Jan labs identified another elegant example of specialized domains in a touch-sensitive channel, NompC3. NompC has a long tail of short, repeated sequences known as Ankyrin repeats. These Ankyrin repeats connect the NompC channel, which resides on the surface of the cell, to the cell’s cytoskeleton, much as a ship’s anchor secures its vessel to the bottom of the ocean. When the cell surface is deformed by touch, it changes the distance between NompCs and the cytoskeleton, causing these Ankyrin chains to pull on the channels, just as a ship’s anchor will pull on the ship when ocean waves begin pushing the ship away. In the case of NompC, however, the Ankyrin chain can actually pull open the channel and, quite unexpectedly, plays an important role in defining the distance between the cell surface and cytoskeleton (i.e. the depth of the ocean in the ship analogy). Finally, transplanting the Ankyrin chain to a touch-insensitive channel renders the new channel touch-sensitive, just like the original NompC channel. The Ankyrin chain can therefore serve as a Lego piece for making touch-sensing channels.

Although specialized domains are common, they are not a requirement for specialized channels. For example, the search for a heat-sensitive domain in the thermosensitive TRPV1 channel has yielded largely non-overlapping regions of the protein across studies. Facing this paradox, the Chanda group4 built on theoretical work proposing that a channel’s temperature-gating properties might result from its general interaction with the cell membrane5. By changing the membrane-interaction properties of a small number of amino acids in a potassium channel, which is normally temperature-insensitive, the researchers created new channels that were just as temperature-sensitive as the natural ones. These manipulations also removed the potassium channel’s intrinsic voltage sensitivity, implying a shared mechanism between these two intuitively different senses.
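
For the mathematically inclined, the flavor of that theoretical framework5 can be captured in a simplified two-state (closed/open) gating scheme; this is an idealization of the published models, not a substitute for them:

    % Simplified two-state gating scheme: closed <-> open.
    P_{\mathrm{open}}(T) = \frac{1}{1 + e^{\Delta G(T)/RT}},
    \qquad \Delta G(T) = \Delta H(T) - T\,\Delta S(T)
    % If the open and closed states differ in heat capacity (\Delta C_p),
    % then \Delta H and \Delta S become temperature-dependent, and
    \Delta G(T) = \Delta C_p \left[ (T - T_H) - T \ln(T/T_S) \right]
    % where T_H and T_S are the temperatures at which \Delta H and
    % \Delta S vanish. A large \Delta C_p makes \Delta G swing steeply
    % with temperature, yielding heat- or cold-activated gating.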

Perhaps to Mr. Feynman’s disappointment, creating channels on a blackboard using only a handful of principles is still a dream of the future. In all three of these examples, rationality dictates the general directions of the research paths, but the details are left to hard work and serendipity. Nonetheless, the ability to grant new sensing properties to old channels by means of rational design should confer a great sense of achievement, as deserved by those who steal secrets from nature.

References:

  1. Doyle, D. A. et al. The structure of the potassium channel: molecular basis of K+ conduction and selectivity. Science 280, 69–77 (1998).
  2. Magnus, C. J. et al. Chemical and genetic engineering of selective ion channel-ligand interactions. Science 333, 1292–6 (2011).
  3. Zhang, W. et al. Ankyrin Repeats Convey Force to Gate the NOMPC Mechanotransduction Channel. Cell 162, 1391–1403 (2015).
  4. Chowdhury, S., Jarecki, B. W. & Chanda, B. A molecular framework for temperature-dependent gating of ion channels. Cell 158, 1148–58 (2014).
  5. Clapham, D. E. & Miller, C. A thermodynamic framework for understanding temperature sensing by transient receptor potential (TRP) channels. Proc. Natl. Acad. Sci. U. S. A. 108, 19492–7 (2011).

 

 

Cortical Activity is a Mess: The Trouble with Averaging

Neuroscientists recording the activity of single neurons in cortex have long known that this activity can be extremely variable. Even when the timing of stimulus presentations or behavioral measurements is tightly controlled, a cortical neuron is likely to fire action potentials at slightly different times and rates on any given trial.

Take a look at this example raster, where each tick represents an action potential, each line is a separate trial, and the green vertical bar is the time when a stimulus was presented (or a behavior was measured):

[Image: example spike raster]

Looking closely at the timing of cortical action potentials, it is striking how noisy the activity appears, despite being aligned to the same event. The distribution of spikes is usually well described as a Poisson process, in which the timing of any one spike is independent of the times of all other spikes (this is only an approximation, since in reality spike timing is constrained by a neuron’s refractory period: a neuron can’t produce action potentials closer than about 1 ms apart).
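
Here is a minimal sketch of that spiking model in Python: exponentially distributed inter-spike intervals (the signature of a Poisson process), thinned by a 1 ms absolute refractory period. The rate and duration are arbitrary choices.

    # Poisson-like spiking: exponential inter-spike intervals, thinned
    # by an absolute refractory period. Rate and duration are arbitrary.
    import numpy as np

    rng = np.random.default_rng(7)
    rate_hz, duration_s, refractory_s = 20.0, 1.0, 0.001

    # Draw more ISIs than we need, then keep spikes inside the trial.
    isis = rng.exponential(1.0 / rate_hz, size=int(rate_hz * duration_s * 3))
    isis = np.maximum(isis, refractory_s)      # no two spikes within 1 ms
    spike_times = np.cumsum(isis)
    spike_times = spike_times[spike_times < duration_s]

    print(len(spike_times), "spikes at", spike_times.round(3))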

A natural reaction to that variability is to average it out. This has produced sensible results in many cases and has supported models of various cognitive phenomena. One prominent example that has taken advantage of averaged cortical activity is the Ramping model of decision making. The model attempts to explain how one could make a decision about a sensory stimulus when that stimulus is imperfect, or noisy.

The classic example, used in many neurophysiology experiments, is a video of moving dots, where each dot moves in a random direction at any given time point but the overall motion is biased to the left or right. An experimental subject looking at a display of randomly moving dots has to decide whether they are on the whole moving left or right, and the difficulty of that decision depends on the coherence of the motion (at 100% coherence, all dots move in one direction; at 0%, there is no net movement). The Ramping model posits some cognitive process that observes the noisy dots and accumulates the evidence for leftward vs. rightward motion over time, until the running count hits some internal threshold, at which point the decision is made.

 

[youtube https://www.youtube.com/watch?v=Cx5Ax68Slvk]
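
The accumulate-to-threshold idea fits in a few lines of code. The sketch below is a toy version with invented parameters, not the fitted model from the papers:

    # Toy accumulate-to-bound ("ramping") decision process with
    # invented parameters; each step is one millisecond of evidence.
    import numpy as np

    rng = np.random.default_rng(3)
    drift = 0.1               # net rightward evidence per step (coherence)
    noise, bound = 1.0, 30.0

    evidence, steps = 0.0, 0
    while abs(evidence) < bound:
        evidence += drift + noise * rng.normal()
        steps += 1

    choice = "right" if evidence > 0 else "left"
    print(f"decided '{choice}' after {steps} ms")
    # Lower coherence (smaller drift) gives slower, noisier decisions.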

As with many neural recording experiments, the phrase “ask and ye shall receive” proved true here (not to underplay how difficult it probably was to gather the data): neurons in the macaque lateral intraparietal area (LIP), just downstream of motion-sensitive neurons in area MT, responded to the moving dots with ramping firing rates whose slopes were correlated with the coherence of the dots’ motion1,2, just as the model predicted.

Those ramping firing rates came from averages of single neurons’ responses over many noisy trials. In a paper published in Science in July 2015, Jonathan Pillow and labmates asked whether the ramping might actually be just an average of many instantaneous jumps in firing rate3. If a neuron’s firing rate jumps abruptly but at slightly different time points on each trial, the average firing rate will look like a ramp. To answer this question, Pillow’s team modeled the neurons’ spiking as Poisson processes whose underlying rates followed either a ramping model or a stepping one.
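
The core intuition, that averaging jittered steps produces a ramp, can be verified in a few lines. The numbers below are arbitrary:

    # Averaging step functions with jittered step times yields a ramp.
    import numpy as np

    rng = np.random.default_rng(5)
    t = np.arange(0.0, 1.0, 0.01)          # 1 s of trial time
    n_trials = 500

    rates = np.zeros((n_trials, t.size))
    step_times = rng.uniform(0.2, 0.8, n_trials)   # jittered jump times
    for i, st in enumerate(step_times):
        rates[i, t >= st] = 1.0            # low rate, then a sudden jump

    trial_average = rates.mean(axis=0)
    print(trial_average[::20].round(2))
    # The average climbs smoothly from 0 to 1 between 0.2 s and 0.8 s:
    # a "ramp", even though no single trial ever ramps.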

[Image: ramping vs. stepping firing-rate models]

Because the step time wasn’t a measured variable (but rather a latent one posited by the researchers), the team inferred the right parameters for their models by sampling with Markov chain Monte Carlo (MCMC).

Not surprisingly, when each trial’s activity was aligned to the inferred step time rather than to stimulus onset, the average firing rate turned from a ramp into a step. The stepping model seems to capture the neural activity qualitatively, but is it actually better than the ramping model? This is a crucial question that plagues many models of neural activity. Because the models in this paper were Bayesian, with real data guiding the parameters, the authors could compare the two models directly using information criteria, which weigh how well each model fits the data against how many parameters it needs. (The more parameters a model needs to fit the data, the more likely it is to be overfit; at the extreme end of this spectrum, a model could have as many parameters as there are data points, in which case it would be useless.) By this measure, the Stepping model accounted for the data more parsimoniously than the Ramping model.
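
To give a feel for how such a comparison works, here is a sketch using AIC as a generic stand-in; the paper’s Bayesian criterion differs in detail, and the log-likelihoods below are hypothetical:

    # Model comparison with AIC (a generic stand-in): reward fit
    # quality, penalize parameter count. Numbers are hypothetical.
    def aic(log_likelihood: float, n_params: int) -> float:
        """Akaike information criterion: lower is better."""
        return 2 * n_params - 2 * log_likelihood

    ramping = aic(log_likelihood=-1520.0, n_params=4)
    stepping = aic(log_likelihood=-1498.0, n_params=5)
    print(f"ramping AIC: {ramping:.0f}, stepping AIC: {stepping:.0f}")
    # Stepping wins despite its extra parameter, because the improved
    # fit outweighs the complexity penalty.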

Sources:

1. Newsome, W. T., Britten, K. H., & Movshon, J. A. (1989). Neuronal correlates of a perceptual decision. Nature.

2. Roitman, J. D., & Shadlen, M. N. (2002). Response of neurons in the lateral intraparietal area during a combined visual discrimination reaction time task. Journal of Neuroscience, 22(21), 9475–9489.

3. Latimer, K. W., Yates, J. L., Meister, M. L. R., Huk, A. C., & Pillow, J. W. (2015). Single-trial spike trains in parietal cortex reveal discrete steps during decision-making. Science, 349(6244), 184–187. http://doi.org/10.1126/science.aaa4056

Oxytocin: sculpting the maternal brain

Humans have a lot in common with prairie voles—at least when it comes to mating. Unlike the vast majority of mammalian species, we often enter into monogamous pair bonds. A crucial molecule in determining this mating strategy is oxytocin. Popularly known as the “cuddle hormone,” oxytocin is a neuropeptide that plays an ancient role in orchestrating social and reproductive behaviors [1], and it frequently makes headlines because of its ability to influence a variety of interesting behaviors [2]. Until recently, however, it has remained unclear how and where oxytocin exerts its effects in the brain. Using modern experimental tools, neuroscientists are beginning to develop a more mechanistic understanding of how oxytocin affects specific circuits in the brain.

[Image: chemical structure of oxytocin]

Female mice display a variety of interesting behaviors that depend on their sexual experience. To an experienced mother, pup vocalizations are a highly salient sensory stimulus and drive robust maternal behavior—if a mother hears a distress call from a pup that has been separated from the nest, she will quickly locate the lost pup and bring it back. This is not true of virgin females, who rarely display this behavior. However, virgin females can be made to act in a maternal fashion by systemic oxytocin administration, suggesting that oxytocin may be important for the development of this maternal behavior. In a recent study, researchers uncovered a fascinating circuit mechanism by which oxytocin sculpts the auditory system of new mothers so that this maternal behavior can arise [3].

As a first clue to where oxytocin may be working to promote pup retrieval behavior, Marlin and colleagues used transgenic mice and immunohistochemistry to visualize where oxytocin receptors are found in the female mouse brain. One interesting location where they were detected, in both experienced mothers and naïve virgin females, was the primary auditory cortex. The receptors were found on both inhibitory interneurons within the auditory cortex and the axon terminals of hypothalamic neurons that secrete oxytocin directly into cortex. Even more intriguing was their finding that receptor expression is lateralized: oxytocin receptors are more densely expressed in the left auditory cortex than in the right, reminiscent of the lateralization of language functions in the human brain.

Next, pharmacology and optogenetics were used to manipulate neural activity in the left vs. right auditory cortex. In virgin females, stimulating oxytocin signaling in the left auditory cortex promoted pup retrieval. In experienced mothers, broad inactivation of the left auditory cortex disrupted pup retrieval behavior, but specifically blocking oxytocin signaling had no effect. Why would completely shutting down the primary auditory cortex disrupt the behavior when blocking oxytocin signaling alone does not? One explanation could be that oxytocin is important for plasticity: in its presence, the circuits of the auditory cortex are able to change with experience. These changes may then consolidate into a long-term memory—after that, oxytocin signaling no longer matters.

Female mice that have given birth are quick to respond to distress calls from pups. Virgin females do not exhibit this maternal care behavior, but can be made to do so either by oxytocin administration or by housing them in the same cage as a mother and her pups. Figure from Marlin et al. (2015).

To study the effects of oxytocin on activity and plasticity, the researchers next recorded electrical activity from auditory cortical neurons. Compared to those of virgin females, auditory cortical neurons in maternal mice displayed larger and more reliable responses to pup distress calls. They further showed that, in the presence of oxytocin, pup calls rapidly decreased the amount of inhibition in the auditory cortex. Temporarily decreasing inhibition can boost sensory signals in a way that promotes synaptic plasticity, potentially resulting in the formation of new memories. Cortical disinhibition is emerging as a common circuit mechanism for associative learning, and it has been causally linked to the acquisition of multiple behavioral functions, including conditioned fear and spatial navigation [4]. Could oxytocin-induced disinhibition lead to a persistent increase in the salience of pup distress calls in the female brain?

By pairing pup distress calls with stimulation of oxytocin signaling, the researchers were able to transform how the auditory cortex of virgin females represented pup calls, making it look more like that of maternal mice. The basic model works like this: oxytocin decreases the level of inhibition in the auditory cortex. In this disinhibited state, auditory cortical neurons are more responsive to pup calls, and the boosted responses induce plasticity. As oxytocin signaling fades, cortical inhibition returns to normal, stabilizing the oxytocin-enhanced responses to pup calls. In this way, the salience of pup calls can be stably enhanced in the maternal brain. This helps make sense of why disrupting oxytocin signaling fails to disrupt pup retrieval in experienced, maternal females: their auditory cortex has already consolidated the plastic changes needed for responding to pup distress calls.
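
The basic model above can be caricatured in a few lines of code. This is my cartoon of the logic, with invented numbers, not the authors’ quantitative model:

    # Cartoon of disinhibition-gated plasticity with invented numbers.
    def response(weight, inhibition, stimulus=1.0):
        return max(0.0, weight * stimulus - inhibition)

    weight, inhibition = 1.0, 0.8
    print("virgin, before oxytocin:", response(weight, inhibition))  # weak

    inhibition = 0.2          # oxytocin transiently lowers inhibition
    for _ in range(10):       # boosted responses drive plasticity
        weight += 0.05 * response(weight, inhibition)

    inhibition = 0.8          # oxytocin fades; inhibition returns
    print("maternal, after pairing:", response(weight, inhibition))  # strong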

So what’s happening in natural settings? One possible model is that oxytocin levels in the maternal brain increase in response to hormonal changes during pregnancy and to sensory cues from pups, such as pheromones. This increase in oxytocin signaling would render circuits in the maternal brain more plastic and better able to learn to respond to signals from pups. Without oxytocin, the brain’s ability to learn the relevance of these specific social cues would be impoverished. If it turns out to be generally true that oxytocin renders animals more sensitive to social cues, it could have implications for our understanding of neurodevelopmental disorders such as autism, in which individuals seem to lack the ability to assign importance to social cues.

Sources:

[1] Garrison JL, Macosko EZ, Bernstein S, Pokala N, Albrecht DR, Bargmann CI. Oxytocin/vasopressin-related peptides have an ancient role in reproductive behavior. Science 338, 540-3 (2012).

[2] Shen H. Neuroscience: the hard science of oxytocin. Nature | News Feature 522, 410-2 (2015).

[3] Marlin BJ, Mitre M, D'amour JA, Chao MV, Froemke RC. Oxytocin enables maternal behaviour by balancing cortical inhibition. Nature 520, 499-504 (2015).

[4] Letzkus JJ, Wolff SB, Luthi A. Disinhibition, a Circuit Mechanism for Associative Learning and Memory. Neuron 88, 264-76 (2015).

Brain Celebrities - Remember H.M.?

This series of articles will talk about brains that have had major influences on the advancement of neuroscience, not because they belonged to very smart people, but because they told us something new about the functioning of our brain.

Remember H.M.?

What other brain could this series start with than that of H.M.? H.M. was probably on the first slide of the first class I took in my neuroscience career. His brain taught us many things at once, but most importantly where in the brain our memories are formed.


Henry Gustav Molaison, shortly before his surgery. Credit: http://www.pbs.org/wgbh/nova/body/corkin-hm-memory.html, courtesy of Suzanne Corkin.

H.M., whose real name was Henry Gustav Molaison, was born in Hartford, Connecticut, in 1926. When he was about 7 years old, he began developing severe epilepsy, possibly as the result of a bicycle accident. His seizures and convulsions became so severe and frequent that he was unable to live a normal life or hold down his job as a mechanic.

At the age of 27, he was referred to Dr. William Beecher Scoville, a neurosurgeon at the local hospital, who tried all available treatments and decided that brain surgery was the only option left. In the 1950s, little was known about the structure and morphology of the brain; people did not yet associate specific brain areas with specific functions the way we do now. Dr. Scoville therefore knew nothing more than that the seizures seemed to come from the “left and right medial temporal lobes.” In the summer of 1953, he thus decided to cut these away, removing two finger-shaped pieces of tissue from H.M.’s brain.

The surgery turned out to be successful: H.M.’s seizures almost completely disappeared. But a new problem had taken their place, one that would make H.M. the biggest celebrity in neuroscience. By removing the medial temporal lobes on both sides, Dr. Scoville had completely abolished H.M.’s ability to form new memories. He could tell you about the 1929 stock market crash or World War II, but not that he had already told you the same thing 10 minutes earlier, or what he had eaten for lunch that day. Everyone he met after the surgery would forever remain a stranger to him, even people he saw every single day.

For the neuroscience community this was an incredible finding. Who would have thought that so much of memory formation depended on this one little area that we now call the hippocampus (Latin for seahorse)? Many scientists studied H.M. over the years, among them Dr. Brenda Milner and Dr. Suzanne Corkin. They found that H.M. was still able to learn new motor skills, such as drawing a picture while looking at it through a mirror. Another example that is especially relevant to this series is that H.M. was able to remember the names of celebrities just as well as anyone else. This suggested that only “explicit” memories (conscious memories, such as stories), and not “implicit” memories (unconscious memories, such as motor skills), were formed and stored through the hippocampus. It suddenly dawned on scientists that the brain has multiple memory systems working in parallel. Thanks to H.M., scientists began to search for the locations of other memory systems and found very different parts of the brain supporting different types of learning.


Top: location of the hippocampus in the brain. Bottom: the hippocampus dissected out of the brain, revealing its seahorse-like shape. Credit: http://www.neuroscientificallychallenged.com/blog/2014/5/23/know-your-brain-hippocampus

One last important finding was that, apart from the memory loss, H.M. remained completely himself. He was just as kind, soft-spoken, and funny a man as he was before the surgery, and his IQ remained above average despite his severe amnesia. This again shows that the brain is, at least to a certain extent, “modular”: specific brain areas are responsible for specific functions, and damaging one area does not necessarily affect the others. This was a very important concept for the neuroscience world at the time, and it is still the basis for much current neuroscience research.

H.M. lived with his mother, and later with a relative, until the 1980s, when he was moved to a nursing home. In 2008 he died of respiratory failure at the age of 82. His brain was moved to the University of California, San Diego, where it was sliced one year later. The corresponding data were made publicly available to scientists by The Brain Observatory at http://www.thebrainobservatory.org/patient-hm-a-case-study/.

With his amnesia, H.M. advanced the field of neuroscience dramatically. He will forever be stored in young neuroscientists’ hippocampi, and later, if they study hard enough, in their long-term memory systems. He will be remembered, H.M.


H.M.'s frozen brain during the cutting process in 2009. Credit: UC San Diego School of Medicine

Sources:

  1. http://www.telegraph.co.uk/culture/10047050/Henry-Molaison-The-incredible-story-of-the-man-with-no-memory.html
  2. Corkin, S. Permanent Present Tense: The Unforgettable Life of the Amnesic Patient, H.M. (book)

Love to tan? Blame Coco Chanel & your skin cells.

[Image: Coco Chanel, 1920]

Sometime in the summer of 1923, a forty-year-old French woman named Gabrielle returned from a holiday cruise on the French Riviera with a sunburn. I imagine this must have been a common occurrence, and it surely would have gone unnoticed except for one thing: Gabrielle was better known as Coco. As in Coco Chanel. The fashion icon’s newly bronzed skin became an instant trend among her fans and subsequently catalyzed a widespread obsession with sun-tanning.

No disrespect to the immense influence of Coco Chanel (apparently she is also credited with freeing women from the suffocating corset), but her sunburn was particularly well timed. Not only was the Victorian-era aesthetic reverence for pale skin already fading, but sun exposure had just recently been heralded as the new ‘cure-all’ for a wide variety of diseases and illnesses. So it’s easy to understand the enthusiasm with which people embraced sun-tanning. Good for your health and fashionable - when does that ever happen?

Fast-forward almost a hundred years, however, and our understanding of sun exposure has changed drastically. UV light is a major risk factor for all types of skin cancer. Shockingly, one study estimated that a higher percentage of people develop skin cancer from repeated indoor tanning than develop lung cancer from smoking1. That is insane. Yet despite increased awareness of the link between UV exposure and skin cancer, sun-tanning persists as an immensely popular pastime for millions of Americans. Continuing a behavior despite knowing its negative consequences is a hallmark of addiction. Could the obsession with sun-tanning be more serious than an aesthetic adoration of tanned skin? Could it be that UV light somehow directly impinges on the brain’s reward system to induce a biological addiction akin to drugs of abuse?

In the last decade, more and more evidence has suggested this is indeed the case. Many UV seekers - those who repeatedly and persistently suntan outdoors and in tanning salons - meet the standard diagnostic criteria for a substance-related disorder. And just as you can’t trick a cocaine addict with baking soda, you can’t trick a tanning addict with UV-free light - they can tell the difference2. What’s even more striking is that sun-tanning isn’t just analogous to substance abuse; it appears to be substance abuse: specifically, opioid addiction. Opioids, as in heroin and prescription painkillers. If given an opioid blocker during UV exposure, some tanning addicts go into withdrawal, suggesting that addiction to UV light is really an addiction to opioids3,4.

Evidently, there are no opioid molecules hiding in UV light. So the most straightforward hypothesis is that UV light somehow triggers the body to produce its own endogenous opioids, which ultimately increase dopamine release in the brain to engender reward and addiction. Figuring out if and how that actually happens is much harder to do in humans, but a recent study showed that exposing mice to UV light (the researchers actually shaved off the fur to maximize skin exposure) increased blood levels of beta-endorphin5. Beta-endorphin is an endogenous opioid most famous for its incredible capacity to act as an analgesic when released following physical trauma (‘shock’) and for producing euphoric feelings during exercise (‘runner’s high’). It binds the same protein that morphine, heroin, and all opioid prescription painkillers bind: the mu-opioid receptor.

It turns out that when cells in the outermost layer of your skin (keratinocytes) are exposed to UV light, a signaling cascade is initiated that results in the production and secretion of beta-endorphin5. The beta-endorphin then diffuses into the blood where it circulates throughout the body, including the brain. Once it gets in the brain - well, then the rest of the story is familiar. Like other opioids that the brain produces, beta-endorphin is known to increase dopamine release to trigger feelings of reward and reinforce behaviors; in excess, this is exactly what causes addiction.

While all the evidence appears to support UV-induced beta-endorphin release as the cause of sun-tanning addiction, the case isn’t fully closed. No study has directly shown that mice repeatedly exposed to UV light will continue to seek it out in the face of negative consequences - a key feature of addiction in humans. Thus, it remains unknown whether people really become addicted to UV-induced beta-endorphin release, or whether it is instead just one component of UV-light addiction.

As with all studies done in model organisms, we must be cautious when applying results found in mice to humans. In particular, one difference between mice and humans seems a little mystifying with respect to studying sun exposure: mice are nocturnal. So it’s perhaps quite reasonable to postulate that mice evolved specific mechanisms to ensure enough sun exposure to stay healthy (i.e. to get enough vitamin D), and that humans would not need such a mechanism since we are awake during the day. It would be interesting to compare UV-induced beta-endorphin release in nocturnal and diurnal animals to see whether the phenomenon is conserved.

Regardless of this wrinkle, I think it’s safe to strongly suspect that UV light is an addictive substance, and that as such it should be treated with the same caution and respect as other substances of abuse. So while it might have been the celebrity sway of Coco Chanel that started the sun-tanning industry, its continued popularity despite the discovery of UV light’s carcinogenic properties likely owes more to our evolutionarily ancient fondness for opioids than to cultural aesthetics alone.

References

  1. Wehner, M. R. et al. International prevalence of indoor tanning: a systematic review and meta-analysis. JAMA Dermatology 150, 390–400 (2014).
  2. Feldman, S. R. et al. Ultraviolet exposure is a reinforcing stimulus in frequent indoor tanners. J. Am. Acad. Dermatol. 51, 45–51 (2004).
  3. Kaur, M., Liguori, A., Fleischer, A. B. & Feldman, S. R. Side effects of naltrexone observed in frequent tanners: could frequent tanners have ultraviolet-induced high opioid levels? J. Am. Acad. Dermatol. 52, 916 (2005).
  4. Kaur, M. et al. Induction of withdrawal-like symptoms in a small randomized, controlled trial of opioid blockade in frequent tanners. J. Am. Acad. Dermatol. 54, 709–11 (2006).
  5. Fell, G. L., Robinson, K. C., Mao, J., Woolf, C. J. & Fisher, D. E. Skin β-endorphin mediates addiction to UV light. Cell 157, 1527–34 (2014).