I spy, I spy with my little eye

For the last couple of years I have been studying the retinal circuits of mice. While it is amazing how similar visual circuitry is among many species, I am always fascinated by the surprising, unique strategies that have evolved in this system. The human visual system (from the retina to the visual cortex) is a remarkable network that can see colors, adapt to a wide range of light intensities, perceive depth and distance, and much more. It is perfectly put together such that each part contributes to a specific function: the lens focuses the image on the retina, different photoreceptors allow for color detection, and our two front-facing eyes allow for depth perception through parallax. The visual systems of some animals have found other strategies to achieve the same functions, sometimes even using the same tools in new ways!

Read More

Consider the Fun

Scientists are often portrayed in pop culture as pedantic types, with personalities as stiff as their starched white lab coats. While they may have a demanding work ethic and an incessant care for detail, their work is creative by nature. Scientists must create knowledge by designing and building experiments. In this way, a scientist is closer to a starving artist than to an automaton.

Read More

True Beauty

“Beauty is truth, truth beauty,” – that is all / Ye know on earth, and all ye need to know. - John Keats in ‘Ode on a Grecian Urn’

The scientific field prides itself on its objectivity. Truth is found by a search free of personal biases, personal commitments or emotional involvements. Still, a great many scientists have said that beauty guided their way. For example, the physicist Paul Dirac stated: “It is more important to have beauty in one’s equations than to have them fit the experiment”.

Read More

As simple as random can be

A few weeks ago I was having a discussion about mathematical models for predicting the movements of the stock market. The question was whether there is any use in developing complex algorithms to predict these fluctuations. My friend (an economist) argued that while the market value isn’t truly random, incorporating random variables may be the best model we have for it. It turns out that many mathematicians (and quants, analysts who model market fluctuations using algorithms) have been using “random” models for their predictions. These range from sequences randomly drawn from log-normal distributions to chaotic systems that may allow for the prediction of market crashes and other rare large movements. I was fascinated by the idea of randomness as a model for complex systems. It seemed particularly interesting to explore this in the context of biological processes, especially since the laws of thermodynamics tell us that physical systems drift towards the chaotic state of maximum entropy. Could randomness be a model for circuit wiring and function in the brain?
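To make the idea concrete, here is a minimal sketch of what such a "random" model can look like: a price series whose daily log-returns are drawn from a Gaussian, so the multiplicative returns are log-normal. All parameter values below are invented for illustration, not taken from any real market model.

```python
import numpy as np

# Toy "random" market model: daily log-returns are Gaussian, so the daily
# multiplicative returns are log-normally distributed.
rng = np.random.default_rng(seed=0)

n_days = 250                 # roughly one trading year
mu, sigma = 0.0002, 0.01     # drift and volatility of the log-returns (made-up values)

log_returns = rng.normal(loc=mu, scale=sigma, size=n_days)
prices = 100 * np.exp(np.cumsum(log_returns))   # arbitrary starting price of 100

print(f"Final price after {n_days} days: {prices[-1]:.2f}")
```

Every run gives a different but statistically similar trajectory, which is the sense in which "random" can still be a useful model of a complex system.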

Read More

The deciding brain and the effects of stress

We make decisions every day. Decision-making is a way by which we exert control over our behavior, mood and even the course of our lives. One key element in decision-making is self-control. This is often seen when we have to make that extremely difficult decision between another double cheeseburger and a healthier salad. While that may seem difficult enough on its own, many decisions, such as having to choose which graduate program to join or which answer to circle on an exam, come with substantial amounts of stress. This stress can guide or compromise the decisions we make. So, how do stress and self-control come together during decision-making? What is the neurobiological basis underlying this convergence?

But, before we begin to look at the interaction between stress and decision-making, let us first take a step back and look at the brain regions and circuitry underlying decision-making.

Read More

The secret life of our brains

In our everyday lives we are aware of ourselves, our behavior, and our sensory perception of the environment. This awareness during awake states is known as consciousness. As central as it is to our brain activity, it has also been one of the greatest mysteries of neuroscience. In our lifetimes we all experience changes in our state of consciousness, most obviously in the alternation between sleep and wake states. We may also experience changes in our state of consciousness when fainting, during an epileptic seizure, and through the effects of psychoactive drugs. What is happening in our brains when our conscious selves are not present?

Read More

Animal Welfare in Research

I recently had the opportunity to write a post for Nautilus on a subject that is dear to me - the use of crows and other intelligent members of the corvid family in neuroscience research. Corvid intelligence has been noticed by humans for millennia, and more recently by ethologists and psychologists. The fascinating thing about these animals is that, like all birds, they do not have a neocortex - the part of the mammalian brain that has countless times been implicated in intelligence. Now, just one lab in the world - that of Andreas Nieder at the University of Tübingen - has started peering into the brains of these fascinating creatures to try to understand how crows’ cortex-less brains enable them to perform amazing cognitive feats. You can read the full story on Nautilus.

Read More

Sharing while caring

Today, let’s throwback to the multiple comparisons problem and relate it to something new: Open Science.

In the past few years, more and more researchers, legislators and politicians have started to campaign for open and transparent scientific conduct. The fact that the majority of scientific articles are locked behind the paywalls of expensive journals, and are thus unreachable for the general taxpayer (even though they funded the research), is upsetting. Moreover, the scientific community is struggling with reproducibility – the Center for Open Science reproduced 100 psychology studies and found that only 39% of the effects were rated to have replicated the result of the original study! Sharing raw data and code, and publishing in open-access journals, can hopefully solve these problems.

Read More

A Primer on Sleep

If you aren’t asleep when the clock strikes three in the early morning, your eyelids get heavy and your brain feels like mush. You still have that paper to finish writing and you want to stay awake, but staying awake is a struggle, a fight against your own brain. We have all been there (especially during finals week). With today’s post, let’s look at how our brain regulates sleep and why we spend our days alternating between sleep and wakefulness.

Read More

Transitioning puberty

In 2015, The Danish Girl, Ruby Rose, Caitlyn Jenner, and Transparent paved the road for trans visibility in mainstream media. This has brought a great deal of attention and debate to the medical and political scene, but a large gap still remains between policy making and our understanding of how transsexuality develops through childhood and adolescence, and of how we can alleviate the pain and discomfort that trans adolescents experience when going through the physical changes of puberty. This year the NIH launched the largest longitudinal study of the long-term psychological and medical effects of puberty suppressors, drugs used in sex reassignment therapy for adolescents with gender dysphoria1.

Read More

What does cocaine do in the brain?

Not all drugs can completely change who we are. Cocaine is one of the few with this power. Like many other psychoactive drugs, cocaine was first used as an anesthetic, but its potential effect on one’s mind and will was soon discovered and overshadowed its original usage. Cocaine’s power does not lie within the molecule itself, but rather in its interaction with the brain’s reward system (see a previous TBT post for the discovery of this system).

Read More

The winter blues: Is it all in your head?

“February is my favorite month.” said no one living in Boston ever. The short days, cold temperatures, and relentless snow really throw a dagger (presumably made of ice) into good times. I tend to think of Dec-Feb as my hibernating months; I am more lethargic, less motivated, and my fiancé and labmates can vouch for the fact that I am slightly more irritable than the good-natured, loving person I always am in better weather. I’ve come to attribute my noticeable seasonal downswing to Seasonal Affective Disorder, or SAD (an acronym that ironically makes me quite happy), a self-diagnosis I probably made from seeing a commercial. Being the curious graduate student that I am, I decided to do a little research on the subject and see what I could learn—really trying to go above and beyond what pharmaceutical advertising taught me.

Read More

The lessons we learned from a dead fish

Here’s to a relatively recent TBT! In 2010, Craig Bennett and colleagues submitted a poster with the following title:

"Neural Correlates of Interspecies Perspective Taking in the Post-Mortem Atlantic Salmon: An Argument For Proper Multiple Comparisons Correction"

Yes, you are reading it right, it is about the neural correlates of a dead fish!

Read More

The neuroscience behind mindfulness

Mindfulness is currently a very hot topic. It seems like every health website, magazine and newspaper is touting the benefits of meditation and yoga practices. Wired posted an article on how meditation can calm the anxious mind and help one manage emotions, Shape magazine relays that meditation can provide greater pain relief than morphine, while many other articles convey that mindfulness will help with weight loss, sleep, disordered eating, and even addiction. Amidst all of the articles promoting mindfulness we also see the backlash—a New York Times op-ed from October 2015 calls for us to take a step back and remember that mindfulness has not been proven to be a panacea for our society. Personally, as a stressed-out graduate student, I wonder whether a mindfulness practice would increase my happiness and well-being, and as a neuroscientist I wonder what is actually true and how it works. So I recently attended a lecture on the topic given by Dr. Sara Lazar, a leading neuroscientist in the field of meditation who works at Harvard Medical School and Massachusetts General Hospital.

Dr. Lazar started her “Neural Mechanisms of Mindfulness” lecture with the basics. She defined stress as wanting or expecting things to be different than they actually are, and offered the simple idea that the key to reducing stress is to understand and accept things as they actually are. This acceptance involves making your expectations realistic, acknowledging the imperfection of situations, and finally, knowing that right now, in this very moment, everything is okay. That last point is, indeed, what mindfulness meditation is. Noting that there are many types of mindfulness techniques, Dr. Lazar clarified what she means by mindfulness meditation: the practice of conscious awareness of the moment—accomplished by focusing on your breath and on the primary sensations you are experiencing, without any judgement of those sensations. The above points give a logical rationale for why mindfulness meditation could help reduce stress, but what is the neurological evidence? How can scientists prove that one mental behavior is changing your state of mind?

The Lazar research group tackled this question by recruiting people who had never meditated before and splitting them into two groups: an experimental group that went through an 8-week mindfulness meditation intervention, and a control group that did not. The experimental group had a weekly meditation class and a recommended 40 minutes of meditation a day, while the control group did not take the classes or meditate at all; thus the study aimed to measure meditation-specific effects. The design of this experiment was crucial because (as is the case with any scientific study) without a control group for comparison it is nearly impossible to draw conclusions about how the experimental conditions are affecting the experimental group.

At the end of the 8 weeks, the team measured differences in the amount of gray matter within each person’s brain using MRI neuroimaging. The participants’ brains were imaged before and after the 8-week duration of the experiment. The results showed that people who practiced mindfulness meditation (but not the control group) had increased gray matter (compared to their own baseline) in four different regions: the posterior cingulate (associated with mind-wandering and self-relevance), left hippocampus (important for learning and memory), temporo-parietal junction (helps with perspective taking, empathy and compassion), and pons (aids in communication between the brain stem and cortex, as well as sleep). These areas are varied in function (hence the links to explore for yourself!) but, to generalize, it appears that meditation changes the brain in places that are important for focus, empathy and compassion, and emotional regulation. The researchers also reported decreased gray matter in the amygdala, a brain region associated with fear and perceived stress. [Side note: an increase in gray matter means an increase in cell body size or dendritic arborization, and vice versa for a decrease, so it is not a perfect measurement of increased function in an area but rather an indirect indication that the area may be more active.]

What is solid about this study is that it correlates change in brain structure with the reports from the participants in the study. The group who underwent mindfulness training reported decreased stress, anxiety, mind-wandering and insomnia, as well as increased quality of life, compared to those who did not practice meditation. To make the correlation a little stronger, the researchers also measured cortisol, a stress hormone, and found decreased levels of cortisol in the participants who underwent the mindfulness meditation intervention.
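The logic of this pre/post, two-group design can be sketched in a few lines of code. The numbers below are entirely invented for illustration (the study itself compared gray matter maps across the whole brain, not a single summary number), but the skeleton of the comparison is the same: compute each person's change over the 8 weeks, then compare the changes between groups.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated gray-matter measurements (arbitrary units) before and after 8 weeks.
# All values are made up for illustration; they are not the study's data.
n = 20
meditation_pre  = rng.normal(100, 5, n)
meditation_post = meditation_pre + rng.normal(1.5, 2, n)   # small average increase
control_pre     = rng.normal(100, 5, n)
control_post    = control_pre + rng.normal(0.0, 2, n)      # no systematic change

# Within-subject change scores, then a between-group comparison.
meditation_change = meditation_post - meditation_pre
control_change = control_post - control_pre
t, p = stats.ttest_ind(meditation_change, control_change)
print(f"Group difference in change: t = {t:.2f}, p = {p:.3f}")
```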

Figure from Hölzel et al. (2011)1.

Beyond this initial study1, Dr. Lazar’s lab has continued to elucidate the neural mechanisms behind the self-reported effects of mindfulness meditation. Their group has found that mindfulness meditation decreases symptoms of bipolar disorder2 and generalized anxiety disorder3, and has proposed that increased gray matter in the pons may underlie reports of increased psychological well-being4. This correlation of brain regions with participant self-reporting, using proper controls and a consistent method of mindfulness meditation, really seems to be a good way to begin to understand how we can use our minds to help heal our own minds. The stressed graduate student part of me is convinced enough to give mindfulness meditation a try, and the neuroscientist part of me is excited to see what further investigation tells us as the field continues to ask how quieting our thoughts can alter our brains and even our bodies5.

Dr. Lazar stressed in her lecture that meditation should really be learned properly, as it can be very hard for us to change our states of mind. Meditation is not simply sitting in silence, but is instead a “state of open, nonjudgmental, and nondiscursive attention to the contents of consciousness, whether pleasant or unpleasant”6. If you want to give meditation a try, here is a list of answers to frequently asked questions put together by the Lazar Lab, and this is another cool blog post by Sam Harris on various forms of meditation with some tips on how to get started.

Citations

  1. Holzel, B. et al. Mindfulness practice leads to increases in regional brain gray matter density. Psychiatry Res. 191, 36-43 (2011).
  2. Stange, JP et al. Mindfulness-based cognitive therapy for bipolar disorder: effects on cognitive functioning. J Psychiatry Pract. 6, 410-419 (2011).
  3. Holzel, B. et al. Neural mechanisms of symptom improvements in generalized anxiety disorder following mindfulness training. Neuroimage Clin. 2, 448-58 (2013).
  4. Singleton, O. et al. Change in Brainstem Gray Matter Concentration Following a Mindfulness-Based Intervention is Correlated with Improvement in Psychological Well-Being. Front Hum Neurosci. 8, 33. (2014).
  5. https://www.theconnection.tv/sara-lazar-ph-d/
  6. https://www.samharris.org/blog/item/how-to-meditate

 

 

Zero degrees of separation

How connectomics is revealing the intricacies of neural networks, an interview with Josh Morgan

 


On October 1st of 2015 the Human Genome Project (HGP) celebrated its 25th birthday. Six long years of planning and debating preceded its birth (1990), and at the young age of 10 the HGP fulfilled its potential by providing us with a ‘rough draft’ of the genome. In 2012, 692 collaborators published in Nature the sequence of 1,092 human genomes1. All of this happened a mere 50 years after Watson and Crick first described the double-stranded helix of DNA. In retrospect genomics has had a surprisingly quick history, but by the numbers it was an effort of epic proportions, and a highly debated one. The promise of a complete sequence of the human genome was thrilling, but many were concerned. Some argued that the methods were unreliable or even unfeasible, others were concerned that a single genome couldn’t possibly represent the spectrum of human diversity, and yet others thought the task was overly ambitious and too time- and money-consuming.

Nevertheless, in the early 2000s genomics was taking over the scientific world, and in its trail support was growing for the other -omics: proteomics, metabolomics and, last but not least, connectomics. The connectome is a precise, high-definition map of the brain: its cells and the connections between them. While human connectomics uses fMRI (functional magnetic resonance imaging) and EEG (electroencephalography) to define neural connections, electron microscopy (EM) is leading the way in generating detailed 3D images of the brains of model organisms (C. elegans, Drosophila, and mouse) with nanometer resolution. Connectomics divided the scientific community into supporters and skeptics, and many of the same arguments were used as in the debate over the HGP in the late 1980s.

In 2013, Josh Morgan and Jeff Lichtman addressed head-on the main criticisms against mouse connectomics2, arguing that obtaining a complete map of the brain would provide information about the structure of circuits that would be otherwise unattainable. Several labs embarked on an odyssey to fulfill the potential of 3D EM in the mouse brain. The last few years have seen a rapid succession of improvements to this complex, multistep method. Put simply, the procedure consists of fixing a piece of tissue, slicing it (at 29 nm thickness), imaging the sequence of sections, and combining all the high-resolution images into tiled stacks. This process has been sped up to take approximately 100 days for 2,000 slices of a square millimeter. At this point the digital representation of the cube of tissue still needs to be segmented, and the cells within it traced, before analysis can be done on the dataset.
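A quick back-of-envelope calculation gives a sense of why the pipeline is so heavy. The voxel dimensions and bit depth below are illustrative assumptions rather than the exact parameters of any published dataset, but the order of magnitude is the point: a single cubic millimeter imaged at EM resolution produces petabytes of raw images.

```python
# Rough estimate of the raw image data in 1 mm^3 of tissue imaged by EM.
# Voxel size and bit depth are illustrative assumptions, not published values.
voxel_xy_nm = 4          # lateral resolution (nm)
voxel_z_nm = 29          # section thickness, as quoted above (nm)
bytes_per_voxel = 1      # 8-bit grayscale

nm_per_mm = 1_000_000
n_voxels = (nm_per_mm / voxel_xy_nm) ** 2 * (nm_per_mm / voxel_z_nm)
petabytes = n_voxels * bytes_per_voxel / 1e15

print(f"~{n_voxels:.1e} voxels, roughly {petabytes:.1f} PB of raw images per mm^3")
```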

It is a monumental amount of work, yet a flurry of studies presenting reconstructed cubes of tissue from the mouse brain have already been published. Most notable this year are the work on retinal tissue from the Max Planck Institute for Neurobiology4 and the work from the lab of Jeff Lichtman at Harvard University3, which published the complete reconstruction of a small (1,500 μm3) volume of neocortex. Once obtaining such a large high-resolution dataset is no longer a limiting factor, what can it tell us about brain connectivity?

I had the pleasure of seeing the potential of 3D EM during a talk that Josh Morgan (a postdoctoral fellow in the Lichtman lab) gave at the Center for Brain Science at Harvard University. He showed his work on scanning, segmenting and analyzing a piece of tissue from the mouse dLGN (dorsal Lateral Geniculate Nucleus). Afterwards he answered some questions about his work in the growing field of connectomics.

Can you briefly list your findings in the dLGN that you think are most representative of the kind of unique discoveries 3D EM allows us to make?

The big advantage of large-scale EM is that you can look at many interconnected neurons in the same piece of tissue. At the local level, we could use that ability to find out which properties of retinal ganglion cell synapses were determined by the presynaptic neuron vs. which were determined by the postsynaptic cell. At the network level, we found that the dLGN was not a simple relay of parallel channels of visual information. Rather, channels could mix together and split apart. My favorite result from the LGN project so far was seeing a cohort of axons innervate the dendrite of one thalamocortical cell and then jump together onto a dendrite of a second thalamocortical cell to form synapses. It is that sort of coordination between neurons that I think is critical to understanding the nervous system and that is extremely difficult to discover without imaging many cells in the same tissue.

You showed some really nice analyses that are starting to chip away at the vast dataset you created. Is it becoming more challenging to identify overarching patterns, or to synthesize findings? When datasets expand to include tissue from multiple animals, will it be more challenging to do statistical analyses on them?

There was a critical point in my analysis, after I had traced a network of hundreds of neurons, where it was no longer possible for me to clearly see the organization of the network I had mapped. In that case, it was using a spring force model to organize all the cells into a 2D space that made the network interpretable again. I think visualization tools that make complex data transparent to the biologist studying them are essential to the process. As cellular network data becomes more common, I hope that some [visualization tools] for standard quantitative measures of synaptic network organization will emerge. For now, I think the main goal is giving neuroscientists as complete a view as possible of the circuits that they are studying.
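The "spring force model" he describes is essentially a force-directed graph layout. Here is a minimal sketch of the idea, using networkx's built-in spring layout on a made-up connectivity graph (an illustration of the general technique, not the actual tool or data from his pipeline).

```python
import networkx as nx

# Toy connectivity graph: nodes stand in for cells, edges for synaptic connections.
# The wiring here is random and purely illustrative.
g = nx.gnp_random_graph(n=50, p=0.08, seed=2)

# Fruchterman-Reingold ("spring") layout: connected nodes attract, all nodes repel,
# so densely interconnected cells end up clustered together in 2D.
positions = nx.spring_layout(g, seed=2)

for node, (x, y) in list(positions.items())[:5]:
    print(f"cell {node}: ({x:+.2f}, {y:+.2f})")
```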

For comparison between individuals, it would be convenient if each neural circuit could be divided into finer and finer gradations of cell types until each grouping was completely homogenous. In that case, comparing individuals just means identifying the same homogeneous groups of neurons in each animal. However, the dLGN data suggests that the brain has not been that considerate and instead can mix and match connectivity and cellular properties in complicated ways. To some extent, it might be possible to replace the list of stereotyped cell subtypes with a list of behaviors that cells of broad classes can perform under various conditions. However, I don’t think you can get around the fact that studying a less predictable system is going to be more difficult and doesn’t lend itself to statistical shortcuts. In particular, if everything is connected to everything, at least by weak connections, then relying on P values will tend to generate lots of false positives. That is, if your test is sensitive enough and you check enough times, wiggling any part of the network will give you a statistically significant effect in any other part of the network.
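His last point, that with weak connections everywhere a sensitive enough test will find a "significant" effect anywhere you look, is easy to demonstrate with a toy simulation (all numbers invented): give every tested connection a tiny but real effect, use a large sample, and most tests come back significant.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Many "connections", each carrying a tiny but real effect, each tested with a
# large, sensitive sample. All numbers are invented for illustration.
n_tests, n_samples, tiny_effect = 1000, 5000, 0.05

n_significant = 0
for _ in range(n_tests):
    baseline = rng.normal(0.0, 1.0, n_samples)
    perturbed = rng.normal(tiny_effect, 1.0, n_samples)   # "wiggle" another part of the network
    _, p = stats.ttest_ind(baseline, perturbed)
    n_significant += p < 0.05

print(f"{n_significant}/{n_tests} tests significant at p < 0.05")
```

None of these effects are interesting in size, yet most of them clear the p < 0.05 bar, which is why leaning on P values alone can be misleading in densely connected systems.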

Technically speaking, you and your colleagues have been working for years on optimizing the method published this summer in Cell3. Can you foresee ways to improve it or speed it up? What remain the main challenges and drawbacks?

Basically, we would like to acquire larger volumes (intact circuits) and trace more of the volumes that we acquire (more cells). My current dataset was acquired about an order of magnitude faster than the previous dataset that was published in Cell and we have a microscope now that can acquire images more than an order of magnitude faster than the scope I used. That leaves us with the growing problem of large data management and segmentation. It isn’t necessary to analyze every voxel of a dataset in order to get interesting biology (I have only used 1% of my voxels for my first LGN project). However, we all have the goal of eventually automatically segmenting every cell, synapse, and bit of ultrastructure in our 3D volumes. The people in Hanspeter Pfister’s lab have made significant progress in improving automated segmentation algorithms, but generating fast error free automatic segmentations is going to be a long-term computer vision challenge.

A paper came out this summer5 discussing the artifacts seen with EM in chemically fixed tissue, as opposed to cryo-fixed tissue. Will these findings impact the methods of the Lichtman lab, and do you think they are relevant to the value of your dataset?

The critical piece of data for us is connectivity so, to the extent that cryo fixation makes connectivity easier to trace, cryo fixation is a better technique. However, it is difficult to perform cryo fixation on large tissue volumes (>200 μm). The alternative is to use cryo fixation as a guide to improving our chemical fixation techniques. For instance, preservation of extracellular space, one of the major benefits of cryo fixation, can also be achieved in some chemical fixation protocols.

Bibliography

  1. The 1000 genomes project consortium (2012) An integrated map of genetic variation from 1092 human genomes, Nature, 491, p 56-65.
  2. Morgan JL & Lichtman JW (2013) Why not connectomics?, Nature Methods, 10(6), p 494-500.
  3. Kasthuri N et al. (2015) Saturated reconstruction of a volume of neocortex, Cell, 162(3), p 648-661.
  4. Berning et al. (2015) SegEM: Efficient image analysis for high-resolution connectomics, Neuron, 87(6), p 1193-1206.
  5. Korogod et al. (2015) Ultrastructural analysis of adult mouse neocortex comparing aldehyde perfusion with cryo fixation, eLife, 4:e05793.

Brain Celebrities

Squealer - the silenced pig

This is a blog post about a character close to my heart: the pig. During most of my childhood I collected pigs. My whole room was filled with little statues, pictures, books, cups and stickers of pigs. I had fallen in love with their intelligence, cute ugliness and laissez-faire attitude. Little did I know that my favorite childhood animal had provided a huge insight into my adult career field!

Squealer in the 1954 film adaptation of Animal Farm. Figure taken from http://villains.wikia.com/wiki/Squealer

Around 200 A.D., the Greeks were strongly divided over where our thoughts, feelings and behaviors lie: was it in the heart or the brain? Most followed Aristotle, who had the following strong arguments for why the heart must be the site1:

  • The heart connects to all sense organs through blood vessels, whereas the brain does not. (Actually it does, but nerves can be hard to see compared to blood vessels)
  • In embryos, the heart develops before the brain.
  • All animals have a heart but not all animals have a brain. (Most animals do have brains, although some can be difficult to recognize. And indeed there are some animals without brains.)
  • The heart is sensitive to touch and responds to emotions, whereas the brain does not.
  • You need blood for sensation (everyone who has slept on their arm can vouch for this!), and this is provided by the heart.

These were indeed fair points. Proponents of the brain as the site of our mind used medical cases showing that brain injury can change behavior to argue their case, but they lacked any direct evidence. This changed when the physician Galen silenced a squealing pig…

Galen was born in 129 A.D. in a town called Pergamon and quickly grew into a successful physician, serving gladiators and many famous people, including the emperors Commodus (Marcus Aurelius’s son) and Septimius Severus2. His interactions with gladiators brought him a variety of interesting medical injuries - including brain injuries - and strengthened his belief that the brain controls our thoughts and behaviors.

Alongside his work as a physician, Galen also did a lot of research. He mostly studied anatomy, with a particular interest in the nervous system. He used oxen and macaque monkeys to study the brain and spinal cord. In addition, he performed vivisections (dissections of living animals) on pigs, “to avoid seeing the unpleasant expression of the ape when it is being vivisected”3. Galen was among the first to perform such vivisections, and they allowed him to pinpoint the specific functions of the different nerves he found.

One day, during the vivisection of a badly struggling pig – which, since it is now one of our brain celebrities, I will call Squealer – Galen accidentally cut the recurrent laryngeal nerves. To his surprise, the pig stopped squealing but continued to struggle. This was the first causal evidence that nerves, coming from the brain, control specific behaviors. Without intact nerves, the muscles in the larynx (voice box) could not move the pig’s vocal cords, making him unable to squeal. The heart and blood vessels were clearly intact, showing that it was undeniably the brain that had been making Squealer scream for his life.

Many other pigs followed Squealer’s fate in the public demonstrations that Galen held to prove to his opponents that the brain controls our behavior. Soon after Galen’s death, the Middle Ages set in and the scientific debate fell silent. During the Renaissance, however, his writings were rediscovered, and the silenced pig once again became one of the most famous examples of the link between brain and behavior.

Image from the bottom panel of the title page to the 1541 Junta edition of Galen's Works. Figure taken from https://commons.wikimedia.org/wiki/File:Galen-Pig-Vivisection.jpg

Directly damaging the nervous system is still an often-used technique in neuroscience. By killing specific neurons, or by silencing the activity of specific neurons or brain regions, we can figure out their function in a circuit or a behavior. A complicating factor in these experiments is that the resulting changes in behavior are often difficult to interpret.

If damage to an area X leads to a change in behavior, the most straightforward conclusion would be that area X is necessary for this behavior. However, it could also be that this area is in close contact with another brain region (Y) that actually controls the behavior. Damaging area X changes the activity of area Y, which leads to a behavioral change, even though area X itself does not control the behavior.

On the other hand, damage to area X could lead to no notable change. It would be reasonable to then conclude that area X is not involved in this behavior. However, it could also be that area X plays a major role, but that there are other areas involved that can take over area X’s function when it is damaged.
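A toy model makes this second confound concrete. The two-area setup below is entirely invented for illustration; the point is just that a behavior backed by redundant areas looks untouched when only one of them is lesioned.

```python
# Invented two-area model: the behavior occurs if at least one of two
# redundant areas is active. Purely illustrative, not a real circuit.

def behavior(area_x_active: bool, area_y_active: bool) -> bool:
    """The behavior is produced whenever either redundant area is active."""
    return area_x_active or area_y_active

# Intact brain: behavior present.
print(behavior(area_x_active=True, area_y_active=True))    # True

# Lesion area X: area Y takes over, so the behavior looks unchanged,
# even though X normally contributes.
print(behavior(area_x_active=False, area_y_active=True))   # True

# Only lesioning both areas reveals that the pathway as a whole is necessary.
print(behavior(area_x_active=False, area_y_active=False))  # False
```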

A lab in our department found a very strong example of this. Many studies have shown that the motor cortex (including the laryngeal motor cortex) is responsible for controlling motor actions. However, they found that rats could execute a learned complex motor task perfectly after the complete motor cortex had been lesioned4. The lab also recently published a study showing that the specific way in which you silence an area can strongly influence your conclusions5.

Thus, although Galen’s strategy of damaging the nervous system directly is effective, the brain is usually too complex to draw simple conclusions from this type of experiment!

Squealer taught the Greeks that the brain controls our actions, and served as an inspiration for the field of experimental neuroscience. Although nowadays the pig is not a frequently used model organism in neuroscience, pigs are crucial to other medical sciences. Since the body of a pig is in many ways the closest you can get to a human body, pigs were for a long time a crucial source of insulin, and pig organs have been thoroughly studied to understand human ones. In addition, we will probably be able to transplant genetically adapted pig organs into humans in the near future, saving the tens of thousands of people a year who die waiting for a suitable human donor6.

By ceasing to squeal, Squealer became the first in a long line of pigs advancing the neural and medical sciences. So say a little thank you to the Christmas ham this week!

 

References

  1. C. Gross. A Hole in the Head: More Tales in the History of Neuroscience. Cambridge, Massachusetts: MIT Press, 2009.
  2. https://www.princeton.edu/~cggross/neuroscientist_4_98_216.pdf
  3. Siegel, R.E. Galen’s system of physiology and medicine. New York: S.Karger, 1968
  4. http://olveczkylab.oeb.harvard.edu/files/olveczky/files/kawai_2015_neuron_w_si.pdf?m=1449065249
  5. http://www.nature.com/nature/journal/v528/n7582/full/nature16442.html
  6. http://www.nature.com/news/new-life-for-pig-to-human-transplants-1.18768

Science, red in tooth and claw?

Science is competitive—very competitive. We compete with our scientific peers for funding. Departments and institutions compete for the best talent. There can even be competition between colleagues within the same lab. The currency of success is high-impact publications. Principal Investigators need to publish as senior author in order to obtain funding and secure tenure, and students and postdocs need first author (ideally first first author) publications in order to some day receive a tenure-track position. Given that competition is both intense and widespread at every level of professional science, it is a subject worth giving serious thought.

Most people would agree that competition, at least in certain forms, is a force for good because it can foster both efficiency and creativity. On the other hand, when competition becomes too intense, it can incentivize us to sacrifice rigor for speed, leading to sloppy science and even fraud. There is also intense competition for resources (Fig. 1). As funding has become scarcer (Fig. 2), competition has intensified in ways that may be contributing to an increase in scientific misconduct1. Since 1975, there has been an approximately tenfold increase in the proportion of papers retracted due to fraud (rather than human error)2, and there is growing concern about the reproducibility of published results3. Even though we can't be certain of the causes, the increasing rate of paper retractions should give us pause (Fig. 3).


Figure 1: Success rates of scientists applying for NIH research grants.


Figure 2: NIH funding levels 1950-present (source).


Figure 3: Rate of retracted papers, 1977-2011. Bars show the total number of publications by year; line shows retraction rate. (source)

As a PhD student, I have witnessed the ways in which intense competition can affect both individuals and the culture of science generally. Being scooped by another group is one of the worst experiences someone can have, especially if significant time and effort has been invested into a project. Because scientists naturally love discussing ongoing experiments, groups in the same field commonly learn of other labs’ work ahead of publication. This typically leads competing groups to work at an exhausting pace as they race for the prize of being the first to demonstrate something. I have also heard senior PIs comment on the ways in which scientific culture seems to be changing. At least in certain fields, scientists seem to be less eager to present data from ongoing work at meetings unless it has already been published or submitted to a journal. This potential cultural shift is troubling because hearing about cutting-edge work is one of the most thrilling parts of science.

Competition can also be one of the more exciting aspects of the scientific process. Many professional scientists become so enthralled by their work that they also become experts on the history of their field. We commonly speak of our favorite scientists in heroic terms and recall our favorite experiments with great fondness. The same passion that can drive extreme competitiveness also drives a strong investment in the perfection of our craft. Our passion may also lead us to dramatize big rivalries, past and present. Some of the most famous figures in the history of neurobiology, Santiago Ramón y Cajal and Camillo Golgi, are remembered in part for their long and often bitter rivalry with one another. Rivalry is also a popular topic of modern neuroscientific gossip. Who, for example, will be included in the likely Nobel Prize for optogenetics (see this and this for an overview on how these discoveries transpired, and hints at the underlying controversy)?

At the end of the day, the scientific process is a human process. Like any other human affair, scientific progress is built on the backs of groups and individuals driven by different motivations. While intense competition is an inevitable and even desirable force for ensuring innovation and progress, it is important for us to remain cognizant of the ways in which the pressures of being on the cutting edge can affect the honesty and rigor that distinguish science from other human institutions.

 

References

[1] Lam, B. A Scientific Look at Bad Science. The Atlantic (Sept. 2015). http://www.theatlantic.com/magazine/archive/2015/09/a-scientific-look-at-bad-science/399371

[2] Fang, F.C. et al. Misconduct accounts for the majority of retracted scientific publications. PNAS 109, 17028-33 (2012).

[3] Nature special report. Challenges in irreproducible research. http://www.nature.com/news/reproducibility-1.17552

Brain Celebrities

Savants - Unleashing our true potential?

Tom was born blind and unable to do the physical labor the other slaves did on James Neil Bethune’s plantation. Instead, he was left to roam around freely. From a young age, he imitated animal sounds and repeated complete 10-minute conversations, even though his autism made it almost impossible for him to communicate his own wishes. When he was 4 years old, he sat at the piano on which Bethune’s daughters played and started producing beautiful tunes. He quickly developed into a famous pianist, playing more than 12 hours a day, composing his own music and repeating complete pieces after only hearing them once.

In the first article of this series, I spoke about Henry Molaison, who lost the ability to form new long-term memories after his hippocampi were removed. Most of the people that will be discussed in this series have lost some specific function, simply because brain damage usually leads to the disruption of well-working processes. However, in some very special cases, brain disease or damage leads to the gain of a new skill or talent, in addition to the deficits in normal functions. This week, I will talk about savants: people who acquired a very special talent because something went wrong in their brain.

Blind Tom, who lived from 1849 until 1908, is one of the oldest and most famous savants. The oldest reported case of a savant was the autistic Jedediah Buxton, who lived from 1707-1772 and was known as ‘the human calculator’. Buxton could calculate with numbers going up to 39 figures without even being able to write down his own name.

Although around half of savants are autistic, most people with autism are not savants. Scientists estimate that about 1 in 10 people with autism spectrum disorder show some type of savant skill1. However, these are usually splinter skills: an obsessive occupation with, and memorization of, objects, historical facts, numbers or music. There are currently fewer than 100 known savants with talents so outstanding that they would be remarkable even in a healthy person. These people are called prodigious savants1.

Just for fun, let’s look at some more examples of prodigious savants:

  • Laurence Kim Peek is probably the most famous prodigious savant. He inspired the savant character in the movie ‘Rain Man’. Kim Peek was born with brain abnormalities that impaired his physical coordination and his ability to reason. It also gave him an incredible talent to memorize. During his lifetime, Peek read more than 12,000 books, reading two pages at the same time (one with his left eye and the other with his right). He was able to recall everything he read, even years later.
  • Stephen Wiltshire was born autistic and developed a talent for drawing buildings from memory. He started by drawing cars and animals at the age of five, but by the age of seven he had become obsessed with drawing famous buildings in his hometown of London. In 2006 he was flown over Rome in a helicopter. That one ride was enough for him to draw out all of Rome, as you can see here.
  • Leslie Lemke was born with birth defects that left him so severely handicapped that he was unable to take simple actions such as eating or dressing himself. However, by the age of sixteen he suddenly played a piece by Tchaikovsky perfectly, without any piano training, after hearing it once on the television. He became a famous musician, playing all kinds of music styles, only needing to hear a piece once before being able to play it.

As you can notice from these examples, savant skills seem to fall into a couple of categories: art, music and mathematics, accompanied by a great memory and an eye for detail. Another thing you might have noticed is that all the examples I gave are male. Savants are predominantly male, with the ratio currently estimated at about six males to every female1. One last characteristic of savants is that they are obsessed with their skill, often doing little else but this one specific thing.

Although savants have long intrigued neuroscientists, relatively little research has been done on what causes their outstanding talents. One difficulty is that savants have a wide variety of brain diseases and brain injuries, affecting different brain areas and starting at different times in development. Although most savants are born with their brain impairments and develop their talent at a very young age, there are also cases of acquired savants, who develop their talent after sudden brain damage or dementia, leading some scientists to believe that we all have these talents hidden in us.

A common theme among savants is damage to the cortex of the left hemisphere. This region contains areas responsible for multiple functions, including language2. Indeed many savants have difficulties with language, often starting to speak at a very late age and referring to themselves in the third person. Scientists who believe that savant skills are related to left cortical damage argue that their theory is strengthened by the fact that most savants are male: testosterone causes the left hemisphere to develop more slowly, making males more vulnerable to prenatal left cortical damage than females (also see Nick's article about lateralization!).

However, left cortical damage cannot account for all savants. There are many savants that do not have left cortical damage, for example autistic savants. Furthermore, there are also lots of people with left cortical damage who do not have savant skills. The exact neural cause of savant skills is thus still unknown.

So, what have savants taught us then?

I would say the main thing we have learned from savants is that the brain is capable of doing incredible things. As I mentioned earlier, the existence of acquired savants has led people to believe that perhaps we all have these skills in us. If that is the case, then perhaps one day we will be able to unleash them!

However, we have also learned from savants that these talents always come at a cost. Most prodigious savants are so severely handicapped that they cannot live by themselves. It thus seems like there might be a balance in our brain between having these special talents and having the ability to do many ordinary things.

Learning more about this balance, and about how we can potentially disrupt it temporarily instead of permanently, might one day turn all of us into superhumans. However, at this point we are humans studying the brain, reading about savants, and dreaming of our true potential.

 

References

1) http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2677584/

2) http://www.sciencedirect.com/science/article/pii/S1053811905024511

 

A great article about more brain celebrities:

http://www.motherjones.com/environment/2014/06/inquiring-minds-sam-kean-neuroscience-patients

A nice movie about Kim Peek:

https://www.youtube.com/watch?v=k2T45r5G3kA