If you aren’t asleep when the clock strikes three in the morning, your eyelids get heavy and your brain feels like mush. You still have that paper to finish writing and you want to stay awake, but staying awake is a struggle, a fight against your own brain. We have all been there (especially during finals week). With today’s post, let’s look at how our brain regulates sleep and why we spend our days alternating between sleep and wakefulness.
“Any way you can make love, somebody’s already thought of. Any crazy caper you can get up to, any great meal you can think of, any combination of children or idea of how to raise them – somebody’s already thought of. But nobody’s ever discovered an opiate receptor before.”
- Candace Pert1
Shortly before starting graduate school in pharmacology at Johns Hopkins University in 1970, Candace Pert broke her back in a riding accident2. She took morphine to treat the pain, and subsequently became curious as to how this miraculous drug acted to produce such profound analgesic effects.
In 2015, The Danish Girl, Ruby Rose, Caitlyn Jenner, and Transparent paved the road for trans visibility in mainstream media. This has brought a great deal of attention and debate to the medical and political scene, but a large gap still remains between policymaking and our understanding of how transgender identity develops through childhood and adolescence, and how we can alleviate the pain and discomfort trans adolescents experience in going through the physical changes of puberty. This year the NIH launched the largest longitudinal study of the long-term psychological and medical effects of puberty suppressors, a class of drugs used in sex reassignment therapy for adolescents with gender dysphoria1.
This episode of Brain Celebrities will take us to the mountainous highlands of New Guinea, where the Fore people live. Until the 1960s, the Fore had an unusual custom: they ate their dead. This gruesome tradition might help us understand neurological diseases such as Creutzfeldt-Jakob and Alzheimer’s disease.
Patients with a damaged retina or visual cortex often report holes in their sight. However inconvenient they may seem, these holes in many cases do not bother the patients and are sometimes not noticed at all. How do they block these holes from their awareness? In fact, this is a question that we should all ask ourselves, because we all have natural blind spots in our visual perception. This blind spot is caused by a small region in the back of each eye that contains no photoreceptors. Instead, this region is reserved for the axons of the retinal output neurons to exit the eye and carry signals to the brain. Therefore, we walk around with two holes (one in each eye) in our visual field. How do we not notice them, even when we try seeing with only one eye at a time?
What are dreams, and why do we have them? People have probably been asking these questions since the dawn of reflective thought, but it wasn’t until the 1950s that scientists first identified neurophysiological correlates of dreaming. A classic paper by Aserinsky and Kleitman1 in 1953 marked the discovery of what we now refer to as Rapid Eye Movement (REM) sleep (Figure 1). Together with non-Rapid Eye Movement (NREM) sleep, REM sleep is one of the two major sleep states that humans and other mammals pass through multiple times during each sleep episode. REM sleep is the state associated with the vivid, hallucinatory dream experiences that we (sometimes) remember after waking.
Few drugs can completely change who we are; cocaine is one of them. Like many other psychoactive drugs, cocaine was first used as an anesthetic, but its potential effects on one’s mind and will were soon discovered and overshadowed its original usage. Cocaine’s power does not lie within the molecule itself, but rather in its interaction with the brain’s reward system (see a previous TBT post for the discovery of this system).
“February is my favorite month.” said no one living in Boston ever. The short days, cold temperatures, and repetitive snow really throw a dagger (presumably made of ice) into good times. I tend to think of Dec-Feb as my hibernating months; I am more lethargic, less motivated, and my fiancé and labmates can vouch for the fact that I am slightly more irritable than the good-natured, loving person I always am in better weather. I’ve come to attribute my noticeable seasonal downswing to Seasonal Affective Disorder, or SAD (an acronym that ironically makes me quite happy), a self-diagnosis I probably made from seeing a commercial. Being the curious graduate student that I am, I decided to do a little research on the subject and see what I could learn, really trying to go above and beyond what pharmaceutical advertising taught me.
How do we learn and remember? How does our nervous system change to allow learning and memory? These are questions that neuroscientists are still tackling to this very day. Today we will go back to 1973 and look at one of the first papers by Thomas Carew and Eric Kandel addressing these questions.
It is no linguistic coincidence that high temperature and spiciness share the same word in English: both induce the same burning sensation. The biological basis for this commonality was discovered in 1997 by David Julius’s group at UCSF1.
Here’s to a relatively recent TBT! In 2010, Craig Bennett and colleagues submitted a poster with the following title:
"Neural Correlates of Interspecies Perspective Taking in the Post-Mortem Atlantic Salmon: An Argument For Proper Multiple Comparisons Correction"
Yes, you read that right: it is about the neural correlates of a dead fish!
Late one night, a woman strolls through an urban park, and a belligerent man yells at her to come to the bench where he is sitting. The woman casually walks over to the man, who puts a knife to her throat. In a gentle voice she tells him, "If you're going to kill me, you're gonna have to go through my God's angels first." The man is so freaked out by this statement and the woman's calm demeanor that he immediately lets her go. The next day, she takes another solitary walk through the park, as if nothing had happened.1

The woman described above, who is still alive today, is a famous case study in the neuroscience of human emotion. If you didn't know any better, little about her would strike you as odd: extremely outgoing and somewhat coquettish, but certainly not pathological. In fact, professional psychologists naive to her case don't seem to notice much about her that is strange. After speaking to her about her life and experiences, which include many personal hardships, they describe her as a "survivor" with "exceptional coping skills." After learning that this woman is patient S.M., the same psychologists reinterpret their assessments: she is now said to display "an abnormally low level of negative emotional phenomenology".2
“As you know, in most areas of science, there are long periods of beginning before we really make progress.” – Eric Kandel
In a typical maze experiment, a hungry rat enters a moderately complicated maze, in which it does its best to find a “reward room” with food. After some guesses, the rat finds its way, consumes the food, and is returned to the entrance of the maze. From then on, the rat makes fewer bad guesses and finds the food faster after each round. Eventually, it completely masters the maze layout and finds the perfect route every time. To explain this improvement, scientists have coined the term reward reinforcement, which essentially suggests that the reward that the rat collects at the end reinforces its correct choices, until it eventually learns a perfect route. This model may sound very simple, but is it?
The rat collects its food in the reward room, not during navigation or decision-making. Therefore, the brain’s reward system does not turn on until the end of each run. How, then, does the reward system backtrack in time and selectively reinforce choices made in the past? If we could look into the rat’s brain and watch the activity of its synapses (the connections between neurons), we would find that, as the rat navigates the maze, millions of synapses flash on and off while millions more remain silent. How does the brain keep track of this enormous, evolving constellation of synaptic activity, so that only the appropriate synapses can be tuned up or down by the reward at the end?
One way to solve this problem is to have a logging system, in which individual synapses log their activity over time, like hourglasses that are flipped on (to let sand through) only when their corresponding synapses are active and flipped upside-down when inactive. In this way, when the reward arrives, the active synapses can be singled out by reading the hourglasses. This idea is called synaptic tagging.
What is the nature of this activity tag? One strong candidate arises from work done by Kausik Si (pronounced “See”) and Nobel Laureate Eric Kandel. It’s an RNA binding protein called CPEB (not to be confused with CREB, which is a different protein that is also important for memory). CPEB is turned on in active synapses and is necessary for maintaining long-term memory in different animal species1,2.
How does CPEB tag synaptic activity? Si and Kandel’s answer initially flabbergasted the entire neuroscience community: it is a prion. Prions are most famous for causing the spongy brains of mad cow disease. Many proteins can be prions, as long as they spontaneously form strong aggregates (i.e., clumps) and expand those aggregates by converting single proteins into the self-aggregating form. Proteins involved in many neurodegenerative diseases, such as Alzheimer’s and Parkinson’s, have recently been proposed to behave like prions. However, the CPEB aggregates are not harbingers of disease but rather the functional essence of the protein. CPEB aggregation only happens in active synapses, and blocking the aggregation, either by antibody neutralization or by mutation of the gene, reduces synaptic plasticity and the animal’s ability to form long-term memory3–5.
Initial insights into the function of CPEB come from the observation that its aggregate binds RNA4. A recent paper by the Si group proposes that the aggregation of CPEB promotes memory formation by switching CPEB’s function from suppressing to promoting protein translation6. This switching phenomenon depends on CPEB’s ability to recruit other protein complexes to modify the stability of RNAs.
Is CPEB the only mechanism for synaptic tagging? Probably not. Studies of CPEB indicate that it is essential for long-term memory, on the timescale from hours to days. However, tagging synapses on a shorter time scale is also needed to explain problems such as delayed reinforcement (described above). There is strong evidence that synapses can be tagged and tuned concertedly within a much shorter timeframe, although scientists do not yet know how this happens7. When this mystery is revealed, another gasp of surprise will be heard reverberating across the neuroscience community.
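The hourglass logging scheme described earlier can be captured in a few lines of code as a decaying "eligibility trace" per synapse. This is a toy sketch of the idea only, not a model of CPEB or any particular biological mechanism; the function name and parameters are invented for illustration.

```python
def tag_and_reward(activity, tau=0.9, lr=0.5, reward=1.0):
    """Toy synaptic tagging: activity[t] lists the synapses active at
    time step t. Each synapse keeps a decaying 'hourglass' (trace) that
    is refilled whenever the synapse fires. When the reward arrives at
    the end, each weight change is proportional to what remains in the
    hourglass, so recently active synapses are reinforced the most."""
    n = 1 + max(i for step in activity for i in step)
    trace = [0.0] * n
    for active in activity:
        trace = [t * tau for t in trace]   # sand drains every time step
        for i in active:
            trace[i] = 1.0                 # an active synapse flips its hourglass
    return [lr * reward * t for t in trace]

# Synapse 0 fires early in the maze; synapse 1 fires just before the reward.
dw = tag_and_reward([[0], [], [], [1]])
# dw[1] > dw[0] > 0: both tagged synapses are reinforced, the recent one more.
```

Silent synapses end with empty hourglasses and are untouched by the reward, which is exactly the selective, backward-looking credit assignment the maze example demands.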
- Si, K. et al. A neuronal isoform of CPEB regulates local protein synthesis and stabilizes synapse-specific long-term facilitation in aplysia. Cell 115, 893–904 (2003).
- Keleman, K., Krüttner, S., Alenius, M. & Dickson, B. J. Function of the Drosophila CPEB protein Orb2 in long-term courtship memory. Nat. Neurosci. 10, 1587–93 (2007).
- Majumdar, A. et al. Critical role of amyloid-like oligomers of Drosophila Orb2 in the persistence of memory. Cell 148, 515–29 (2012).
- Si, K., Lindquist, S. & Kandel, E. R. A Neuronal Isoform of the Aplysia CPEB Has Prion-Like Properties. Cell 115, 879–891 (2003).
- Si, K., Choi, Y.-B., White-Grindley, E., Majumdar, A. & Kandel, E. R. Aplysia CPEB Can Form Prion-like Multimers in Sensory Neurons that Contribute to Long-Term Facilitation. Cell 140, 421–435 (2010).
- Khan, M. R. et al. Amyloidogenic Oligomerization Transforms Drosophila Orb2 from a Translation Repressor to an Activator. Cell 163, 1468–1483 (2015).
- He, K. et al. Distinct Eligibility Traces for LTP and LTD in Cortical Synapses. Neuron 88, 528–538 (2015).
This Thursday let’s throw it all the way back to 1868, when a doctor named John M. Harlow finally had enough. For twenty years he had endured disbelief at his initial report of a patient, a man named Phineas Gage, who survived a tamping iron being blasted through his head. Now that the man had unfortunately passed away, and the doctor had the good fortune of procuring the skull as indubitable proof of the event, he compiled his notes and published a case study:
You can read it here in full (and I strongly encourage you to do so; far from the dry, esoteric tone of many academic journals today, the paper is full of action, drama, and opinion). Here are some personal highlights.
Harlow begins the paper acknowledging that many doctors and surgeons believed the story of Phineas Gage to be physiologically impossible….
…but now he’s about to change everyone’s mind.
He goes on to recount in great detail the incident of the tamping iron blasting through Gage’s brain, as well as the immediate aftermath (Gage got up with little assistance, was able to walk up a flight of stairs, and upon seeing Harlow proclaimed ‘I hope I am not much hurt’). His recovery over the next few months was astonishing. But what’s even more incredible than his survival is the observation that, upon not dying, Gage appeared to live on with a different personality than he had prior to the injury. Harlow observes:
His description of Gage's selective deficits in executive functioning and decorum is among the first (if not the first) insights into the role of the prefrontal cortex. Harlow's observations were also a defining moment for the burgeoning theory of 'cerebral localization': the idea that specific parts of the brain are specialized for different functions (now a fact we take for granted, this was a controversial and hotly debated topic in the 19th century). That last sentence - 'In this regard his mind was radically changed, so decidedly that his friends and acquaintances said he was "no longer Gage"' - could probably be considered something of a birthplace for 'cognitive neuroscience'. Harlow's account (in combination with others) was so influential that the change in Gage's personality following his brain damage has become inextricably woven into our biological understanding of what it means to be oneself.
The impetus for writing this case study twenty years following the incident was the unfortunate event of Gage's death. Harlow was saddened to learn of his death, and gravely disappointed that an autopsy had not been performed to analyze the condition of Gage's cortex. However, Gage's mother entrusted his skull to Harlow, which bore the gruesome evidence of the injury decades before. Armed with this new data, Harlow finally felt ready to defend his initial account of the incident which he had published 18 years previously and had been widely discredited as impossible. He included drawings of the skull and tamping iron to scale.
Harlow ends on a philosophical note on the recuperative powers of nature and the role of the intervening surgeon.
It is difficult to imagine a case study published in the 21st century concluding in such a manner.
- Harlow, J.M. Recovery From The Passage Of An Iron Bar Through The Head. Massachusetts Medical Society, 3 June 1868
Mindfulness is currently a very hot topic. It seems like every health website, magazine, and newspaper is touting the benefits of meditation and yoga. Wired posted an article on how meditation can calm the anxious mind and help one manage emotions, Shape magazine relays that meditation can provide greater pain relief than morphine, and many other articles claim that mindfulness will help with weight loss, sleep, disordered eating, and even addiction. Amidst all of the articles promoting mindfulness we also see the backlash: a New York Times op-ed from October 2015 calls for us to take a step back and remember that mindfulness has not been proven to be a panacea. Personally, as a stressed-out graduate student, I wonder whether a mindfulness practice would increase my happiness and well-being; as a neuroscientist, I wonder what is actually true and how it works. So I recently attended a lecture on the topic given by Dr. Sara Lazar, a leading neuroscientist in the field of meditation who works at Harvard Medical School and Massachusetts General Hospital.
Dr. Lazar started her “Neural Mechanisms of Mindfulness” lecture with the basics. She defined stress as wanting or expecting things to be different than they actually are, and offered the simple idea that the key to reducing stress is to understand and accept things as they actually are. This acceptance involves making your expectations realistic, acknowledging the imperfection of situations, and finally, knowing that right now, in this very moment, everything is okay. That last point is, indeed, what mindfulness meditation is. Noting that there are many types of mindfulness techniques, Dr. Lazar clarified what she means by mindfulness meditation: the practice of conscious awareness of the moment, accomplished by focusing on your breath and on the primary sensations you are experiencing, without any judgment of those sensations. The above points offer a logical rationale for why mindfulness meditation could help reduce stress, but what is the neurological evidence? How can scientists show that one mental behavior is changing your state of mind?
The Lazar research group tackled this question by recruiting people who had never meditated before and splitting them into two groups: an experimental group that went through an 8-week mindfulness meditation intervention, and a control group that did not. The experimental group had a weekly meditation class and a recommended 40 minutes a day of meditation, while the control group did not attend the mindfulness classes or meditate at all; thus the study aimed to measure meditation-specific effects. This design was crucial because (as with any scientific study) without a control group for comparison it is near impossible to conclude how the experimental conditions are affecting the experimental group. The participants’ brains were imaged with MRI before and after the 8-week period, and the team measured the differences in the amount of gray matter within each person’s brain. The results showed that people who practiced mindfulness meditation (but not the control group) had increased gray matter (compared to their own baseline) in four different regions: the posterior cingulate (associated with mind-wandering and self-relevance), left hippocampus (important for learning and memory), temporoparietal junction (involved in perspective taking, empathy, and compassion), and pons (which aids communication between the brain stem and cortex, as well as sleep). These areas are varied in function (hence the links to explore for yourself!), but to generalize, it appears that meditation changes the brain in places that are important for focus, empathy and compassion, and emotional regulation. The researchers also reported decreased gray matter in the amygdala, a brain region associated with fear and perceived stress.
[Side note: an increase in gray matter reflects an increase in cell body size or dendritic arborization (and vice versa for a decrease), so it is not a perfect measure of increased function of an area but rather an indirect indication that the area may be more active.] What is solid about this study is that it correlates changes in brain structure with the reports of the participants themselves. The group that underwent mindfulness training reported decreased stress, anxiety, mind-wandering, and insomnia, as well as increased quality of life, compared to those who did not practice meditation. To make the correlation a little stronger, the researchers also measured cortisol, a stress hormone, and found decreased levels in the participants who underwent the mindfulness meditation intervention.
Beyond this initial study,1 Dr. Lazar’s lab has continued to elucidate the neural mechanisms behind the self-reported effects of mindfulness meditation. Her group has found that mindfulness meditation decreases symptoms of bipolar disorder2 and generalized anxiety disorder3, and has proposed that increased gray matter in the pons may underlie reports of increased psychological well-being4. This correlation of brain regions with participant self-reporting, using proper controls and a consistent method of mindfulness meditation, really seems to be a good way to begin to understand how we can use our minds to help heal our own minds. The stressed graduate student in me is convinced enough to give mindfulness meditation a try, and the neuroscientist in me is excited to see what further investigation tells us as the field continues to ask how quieting our thoughts can alter our brains and even our bodies5.
Dr. Lazar stressed in her lecture that meditation should really be learned properly, as it can be very hard for us to change our states of mind. Meditation is not simply sitting in silence, but is instead a “state of open, nonjudgmental, and nondiscursive attention to the contents of consciousness, whether pleasant or unpleasant”6. If you want to give meditation a try, here is a list of answers to frequently asked questions put together by the Lazar Lab, and here is another cool blog post by Sam Harris on various forms of meditation with some tips on how to get started.
- Holzel, B. et al. Mindfulness practice leads to increases in regional brain gray matter density. Psychiatry Res. 191, 36-43 (2011).
- Stange, JP et al. Mindfulness-based cognitive therapy for bipolar disorder: effects on cognitive functioning. J Psychiatry Pract. 6, 410-419 (2011).
- Holzel, B. et al. Neural mechanisms of symptom improvements in generalized anxiety disorder following mindfulness training. Neuroimage Clin. 2, 448-58 (2013).
- Singleton, O. et al. Change in Brainstem Gray Matter Concentration Following a Mindfulness-Based Intervention is Correlated with Improvement in Psychological Well-Being. Front Hum Neurosci. 8, 33. (2014).
How connectomics is revealing the intricacies of neural networks, an interview with Josh Morgan
On October 1st, 2015, the Human Genome Project (HGP) celebrated its 25th birthday. Six long years of planning and debating preceded its birth (1990), and at the young age of 10 the HGP fulfilled its potential by providing us with a ‘rough draft’ of the genome. In 2012, 692 collaborators published in Nature the sequences of 1092 human genomes1. All of this happened a mere 50 years after Watson and Crick first described the double-stranded helix of DNA. In retrospect, genomics has had a surprisingly quick history, but by the numbers it was an effort of epic proportions, and a highly debated one. The promise of a complete sequence of the human genome was thrilling, but many were concerned. Some argued that the methods were unreliable or even unfeasible, others worried that a single genome couldn’t possibly represent the spectrum of human diversity, and yet others thought the task was overly ambitious and too time- and money-consuming.
Nevertheless, in the early 2000s genomics was taking over the scientific world and in its trail support was growing for the other –omics: proteomics, metabolomics, and last but not least connectomics. The connectome is a precise, high-definition map of the brain, its cells, and the connections between them. While human connectomics uses fMRI (functional magnetic resonance imaging) and EEG (electroencephalography) to define neural connections, electron microscopy (EM) is leading the way in generating detailed 3D images of the brains of model organisms (C. elegans, Drosophila, and mouse) with nanometer resolution. Connectomics divided the scientific community into supporters and skeptics, and many of the same arguments were used as in the debate on the HGP in the late 1980s.
In 2013, Josh Morgan and Jeff Lichtman addressed head-on the main criticisms against connectomics in mouse2, arguing that obtaining a complete map of the brain would provide information about the structure of circuits that would be otherwise unattainable. Several labs embarked on an odyssey to fulfill the potential of 3D EM in the mouse brain. The last few years have seen a rapid succession of improvements to this complex, multistep method. Put simply, the procedure consists of fixing a piece of tissue, slicing it into ultrathin (29 nm) sections, imaging the sequence of sections, and combining all the high-resolution images into tiled stacks. This process has been sped up to take approximately 100 days for 2000 slices of a square millimeter of tissue. At that point, the digital representation of the cube of tissue still needs to be segmented and the cells within it traced before analysis can be done on the dataset.
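To get a feel for why data handling becomes the bottleneck, here is a back-of-envelope calculation of the dataset size. The slice thickness (29 nm) and area (one square millimeter) come from the description above; the ~4 nm lateral pixel size and 8-bit voxels are assumptions for illustration only.

```python
slices = 2000                        # sections, each 29 nm thick (~58 µm of depth)
px_per_side = int(1e6 / 4)           # 1 mm = 1e6 nm, at an assumed 4 nm/pixel
voxels = slices * px_per_side ** 2   # total voxels in the imaged volume
terabytes = voxels / 1e12            # assuming 1 byte (8-bit grayscale) per voxel
print(f"{voxels:.2e} voxels ≈ {terabytes:.0f} TB")
```

Even under these modest assumptions the raw volume lands in the hundred-terabyte range, which helps explain why storage and segmentation, rather than imaging speed alone, dominate the conversation in the interview that follows.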
It is a monumental amount of work, yet a flurry of studies presenting reconstructed volumes of mouse brain tissue has already been published. Most notable this year are the work from the Max Planck Institute for Neurobiology4 on retinal tissue and that from Jeff Lichtman’s lab at Harvard University3, which published the complete reconstruction of a small (1500 μm3) volume of neocortex. Once obtaining such a large high-resolution dataset is no longer a limiting factor, what can this dataset tell us about brain connectivity?
I had the pleasure of seeing the potential of 3D EM during a talk that Josh Morgan (a postdoctoral fellow in the Lichtman lab) gave at the Center for Brain Science at Harvard University. He showed his work on scanning, segmenting and analyzing a piece of tissue from the mouse dLGN (dorsal Lateral Geniculate Nucleus). Afterwards he answered some questions about his work in the growing field of connectomics.
Can you briefly list your findings in the dLGN that you think are most representative of the kind of unique discoveries 3D EM allows us to make?
The big advantage of large scale EM is that you can look at many interconnected neurons in the same piece of tissue. At the local level, we could use that ability to find out which properties of retinal ganglion cell synapses were determined by the presynaptic neurons vs. which were determined by the postsynaptic cell. At the network level, we found that the dLGN was not a simple relay of parallel channels of visual information. Rather, channels could mix together and split apart. My favorite result from the LGN project so far was seeing a cohort of axons innervate the dendrite of one thalamocortical cell and then jump together onto a dendrite of a second thalamocortical cell to form synapses. It is that sort of coordination between neurons that I think is critical to understanding the nervous system and that is extremely difficult to discover without imaging many cells in the same tissue.
You showed some really nice analyses that are starting to chop at the vast dataset you created. Is it becoming more challenging to identify overarching patterns, or to synthesize findings? When datasets will expand to include tissue from multiple animals, will it be more challenging to do statistical analyses on them?
There was a critical point in my analysis, after I had traced a network of hundreds of neurons, where it was no longer possible for me to clearly see the organization of the network I had mapped. In that case, it was using a spring force model to organize all the cells into a 2D space that made the network interpretable again. I think visualization tools that make complex data transparent to the biologist studying them are essential to the process. As cellular network data becomes more common, I hope that some [visualization tools] for standard quantitative measures of synaptic network organization will emerge. For now, I think the main goal is giving neuroscientists as complete a view as possible of the circuits that they are studying.
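For readers curious what a "spring force model" does: it treats the network as a physical system in which every pair of nodes repels while every connected pair is pulled together by a spring, then lets the system relax. Below is a minimal, illustrative implementation, not the lab's actual tool; the function name and force constants are invented for the sketch.

```python
import math
import random

def spring_layout(nodes, edges, iters=200, k=1.0, step=0.001):
    """Minimal force-directed embedding of a graph into 2D."""
    random.seed(42)  # reproducible random initial placement
    pos = {n: [random.random(), random.random()] for n in nodes}
    for _ in range(iters):
        disp = {n: [0.0, 0.0] for n in nodes}
        for a in nodes:                          # all pairs repel
            for b in nodes:
                if a == b:
                    continue
                dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d                    # repulsion weakens with distance
                disp[a][0] += f * dx / d
                disp[a][1] += f * dy / d
        for a, b in edges:                       # connected pairs attract
            dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k                        # attraction grows with distance
            disp[a][0] -= f * dx / d; disp[a][1] -= f * dy / d
            disp[b][0] += f * dx / d; disp[b][1] += f * dy / d
        for n in nodes:                          # move each node a small step
            pos[n][0] += step * disp[n][0]
            pos[n][1] += step * disp[n][1]
    return pos
```

After relaxation, strongly interconnected cells sit near each other on the 2D map, so organization that is invisible in a raw connectivity matrix becomes visible at a glance.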
For comparison between individuals, it would be convenient if each neural circuit could be divided into finer and finer gradations of cell types until each grouping was completely homogenous. In that case, comparing individuals just means identifying the same homogeneous groups of neurons in each animal. However, the dLGN data suggests that the brain has not been that considerate and instead can mix and match connectivity and cellular properties in complicated ways. To some extent, it might be possible to replace the list of stereotyped cell subtypes with a list of behaviors that cells of broad classes can perform under various conditions. However, I don’t think you can get around the fact that studying a less predictable system is going to be more difficult and doesn’t lend itself to statistical shortcuts. In particular, if everything is connected to everything, at least by weak connections, then relying on P values will tend to generate lots of false positives. That is, if your test is sensitive enough and you check enough times, wiggling any part of the network will give you a statistically significant effect in any other part of the network.
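The false-positive point is easy to demonstrate with a simulation. The sketch below (hypothetical numbers, a crude z-test with known unit variance) runs 1000 "connection" tests where no real effect exists; roughly 5% still come out significant at p < 0.05 by chance alone.

```python
import random

def fake_study(n_tests=1000, n=50, seed=1):
    """Simulate testing many 'connections' when NO real effect exists:
    each test compares two samples drawn from the same distribution.
    Returns how many tests come out 'significant' purely by chance."""
    random.seed(seed)
    hits = 0
    for _ in range(n_tests):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        mean_diff = sum(a) / n - sum(b) / n
        se = (2 / n) ** 0.5                 # std error of the difference of means
        if abs(mean_diff) / se > 1.96:      # |z| > 1.96 corresponds to p < 0.05
            hits += 1
    return hits
```

This is the same logic behind the famous dead-salmon fMRI poster mentioned earlier in this collection: run enough uncorrected tests and "significant" results appear even in a dead fish.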
Technically speaking, you and your colleagues have been working for years on optimizing the method published this summer in Cell3. Can you foresee ways to improve it/speed it up? What remain as the main challenges/drawbacks?
Basically, we would like to acquire larger volumes (intact circuits) and trace more of the volumes that we acquire (more cells). My current dataset was acquired about an order of magnitude faster than the previous dataset that was published in Cell, and we have a microscope now that can acquire images more than an order of magnitude faster than the scope I used. That leaves us with the growing problem of large data management and segmentation. It isn’t necessary to analyze every voxel of a dataset in order to get interesting biology (I have only used 1% of my voxels for my first LGN project). However, we all have the goal of eventually automatically segmenting every cell, synapse, and bit of ultrastructure in our 3D volumes. The people in Hanspeter Pfister’s lab have made significant progress in improving automated segmentation algorithms, but generating fast, error-free automatic segmentations is going to be a long-term computer vision challenge.
A paper came out this summer5 discussing the artifacts seen with EM in chemically fixed tissue, as opposed to cryo fixed tissue. Will these findings impact the methods of the Lichtman lab, and do you think they are relevant to the value of your dataset?
The critical piece of data for us is connectivity so, to the extent that cryo fixation makes connectivity easier to trace, cryo fixation is a better technique. However, it is difficult to perform cryo fixation on large tissue volumes (>200 μm). The alternative is to use cryo fixation as a guide to improving our chemical fixation techniques. For instance, preservation of extracellular space, one of the major benefits of cryo fixation, can also be achieved in some chemical fixation protocols.
- The 1000 genomes project consortium (2012) An integrated map of genetic variation from 1092 human genomes, Nature, 491, p 56-65.
- Morgan JL & Lichtman JW (2013) Why not connectomics?, Nature Methods, 10(6), p 494-500.
- Kasthuri N et al. (2015) Saturated reconstruction of a volume of neocortex, Cell, 162(3), p 648-661.
- Berning et al. (2015) SegEM: Efficient image analysis for high-resolution connectomics, Neuron, 87(6), p 1193-1206.
- Korogod et al. (2015) Ultrastructural analysis of adult mouse neocortex comparing aldehyde perfusion with cryo fixation, eLife, 4:e05793.
In my research on the rat visual system, I have been designing an apparatus that would allow me to record neuronal responses to visual stimuli in freely moving rats. Most visual neuroscience experiments today are performed on restrained animals, which are usually treated with drugs that suppress movement (anesthetics, muscle relaxants). But as anyone who has tried reading while falling asleep knows, just because your eyes are open does not mean that information is getting through to the brain. It makes more sense to study how neurons respond to images when the research subject is awake and paying attention.
While few researchers are studying vision in unrestrained rats today, I was surprised to find that the basic setup I have been working on for my experiments had already been created: in 1980s Soviet Russia.
Working at Moscow State University, Sergei Girman wanted to study the visual system in freely moving animals. Girman chose to perform his experiments on rats, noting two features that made them convenient to use: “the eyes in this animal are relatively immobile,” making it easy to know where they are looking (researchers go through a lot of trouble training monkeys to look at computer monitors in visual experiments), “while the visual analyzer is well developed” (“analyzer” being perhaps the fashionable word of the time for, in this case, the visual areas of the brain).
The goal of Girman’s 1985 paper was to compare the responses of neurons in primary visual cortex of awake, attentive rats with those of restrained, anesthetized rats.
He outfitted the rats with a metal platform, glued to the head, that held electrodes for recording neural activity. He then trained the animals to enter a vestibule that constrained the head, so that when visual stimuli were presented on a computer screen, the head would always be in the same position. While this may sound gruesome, Girman’s procedure was quite original by today’s experimental standards.
At the time, most experiments in visual neurophysiology were done in one long session: researchers would prepare the animal, insert electrodes, and present visual stimuli for hours on end; at the end of the session, the electrodes were removed and the animal euthanized (in some cases the experiment could be repeated, but the electrodes would be placed in different brain areas each time). Girman was interested, in part, in recording the activity of neurons for long periods of time. If a neuron responds to particular visual stimuli today, would it respond to the same stimuli in the same way again, tomorrow? To be able to answer questions like this, Girman needed to construct an apparatus that would stay on the animal’s head for months without causing so much damage that the body would reject the implant.
Today, researchers routinely record neuronal activity in rodents for months at a time (techniques such as calcium imaging allow one to examine the activity of the same neurons from one day to the next), but the surgical procedure of attaching head implants is quite drastic. In most cases, the animals are scalped (nonviolently, of course), holes are drilled in the skull, and after the electrodes are inserted into the brain, the scalp is replaced with glue (usually dental cement). Researchers take great care to perform such procedures in a sterile environment to reduce the risk of infections. Inevitably, however, after months (or in lucky cases, perhaps a year), the implants fall off.
Girman’s original solution to this problem was to not scalp the animals in the first place. Instead, he would only make holes large enough for the electrodes to pass through (0.12 mm, according to his paper). Then, he would create a platform for the electronic equipment by threading stiff metal wires under the scalp. While this sounds like a less invasive solution, it must be quite difficult to perform (although I haven’t tried it in my own experiments yet).
Girman’s papers are quite fascinating, partly because of his unique methods, which make me wonder whether they really do work better than today’s established techniques. Is it just that no one read Girman’s work (which was originally published in Soviet journals and translated later)? Or is Girman’s idea of keeping the scalp intact really no better than removing it?
It is both encouraging and frustrating to learn about obscure research techniques: the wheel does get reinvented over and over, but perhaps we learn something new each time.
(Please tweet us @harvardneurosci if you have trouble accessing the paper)