Science, red in tooth and claw?

Science is competitive—very competitive. We compete with our scientific peers for funding. Departments and institutions compete for the best talent. There can even be competition between colleagues within the same lab. The currency of success is high-impact publications. Principal Investigators need to publish as senior author in order to obtain funding and secure tenure, and students and postdocs need first author (ideally first first author) publications in order to some day receive a tenure-track position. Given that competition is both intense and widespread at every level of professional science, it is a subject worth giving serious thought.

Most people would agree that competition, at least in certain forms, is a force for good because it can foster both efficiency and creativity. On the other hand, when competition becomes too intense, it can incentivize us to sacrifice rigor for speed, leading to sloppy science and even fraud. There is also intense competition for resources (Fig. 1). As funding has become scarcer (Fig. 2), competition has intensified in ways that may be contributing to an increase in scientific misconduct [1]. Since 1975, there has been an approximately tenfold increase in the proportion of papers retracted due to fraud (rather than human error) [2], and there is growing concern about the reproducibility of published results [3]. Even though we can't be certain of the causes, the increasing rate of paper retractions should give us pause (Fig. 3).


Figure 1: Success rates of scientists applying for NIH research grants.


Figure 2: NIH funding levels, 1950-present.


Figure 3: Rate of retracted papers, 1977-2011. Bars show the total number of publications by year; line shows retraction rate.

As a PhD student, I have witnessed the ways in which intense competition can affect both individuals and the culture of science more generally. Being scooped by another group is one of the worst experiences a scientist can have, especially when significant time and effort have been invested in a project. Because scientists naturally love discussing ongoing experiments, groups in the same field commonly learn of other labs’ work ahead of publication. This typically leads competing groups to work at an exhausting pace as they race for the prize of being the first to demonstrate something. I have also heard senior PIs comment on the ways in which scientific culture seems to be changing. At least in certain fields, scientists seem less eager to present data from ongoing work at meetings unless it has already been published or submitted to a journal. This potential cultural shift is troubling because hearing about cutting-edge work is one of the most thrilling parts of science.

Competition can also be one of the more exciting aspects of the scientific process. Many professional scientists become so enthralled by their work that they also become experts on the history of their field. We commonly speak of our favorite scientists in heroic terms and recall our favorite experiments with great fondness. The same passion that can drive extreme competitiveness also drives a strong investment in perfecting our craft. Our passion may also lead us to dramatize big rivalries, past and present. Some of the most famous figures in the history of neurobiology, Santiago Ramón y Cajal and Camillo Golgi, are remembered in part for their long and often bitter rivalry with one another. Rivalry is also a popular topic of modern neuroscientific gossip. Who, for example, will be included in the likely Nobel Prize for optogenetics, and how will the controversy over who deserves credit for those discoveries be resolved?

At the end of the day, the scientific process is a human process. Like any other human affair, scientific progress is built on the backs of groups and individuals driven by different motivations. While intense competition is an inevitable and even desirable force for ensuring innovation and progress, it is important for us to remain cognizant of the ways in which the pressure of being on the cutting edge can affect the honesty and rigor that distinguish science from other human institutions.

 

References

[1] Lam, B. A Scientific Look at Bad Science. The Atlantic (Sept. 2015). http://www.theatlantic.com/magazine/archive/2015/09/a-scientific-look-at-bad-science/399371

[2] Fang, F.C. et al. Misconduct accounts for the majority of retracted scientific publications. PNAS 109, 17028-33 (2012).

[3] Nature special report. Challenges in irreproducible research. http://www.nature.com/news/reproducibility-1.17552


Brain Celebrities  

Savants - Unleashing our true potential?  

Tom was born blind and unable to do the physical labor the other slaves did on James Neil Bethune’s plantation. Instead, he was left to roam around freely. From a young age, he imitated animal sounds and repeated complete 10-minute conversations, even though his autism made it almost impossible for him to communicate his own wishes. When he was 4 years old, he sat at the piano on which Bethune’s daughters played and started producing beautiful tunes. He quickly developed into a famous pianist, playing more than 12 hours a day, composing his own music and repeating complete pieces after only hearing them once.

In the first article of this series, I spoke about Henry Molaison, who lost the ability to form new long-term memories after his hippocampi were surgically removed. Most of the people who will be discussed in this series have lost some specific function, simply because brain damage usually disrupts processes that were working well. However, in some very special cases, brain disease or damage leads to the gain of a new skill or talent, in addition to deficits in normal functions. This week, I will talk about savants: people who acquired a very special talent because something went wrong in their brain.

Blind Tom, who lived from 1849 until 1908, is one of the earliest and most famous savants. The oldest reported case of a savant is Jedediah Buxton, who lived from 1707 to 1772 and was known as ‘the human calculator’. Buxton could perform calculations with numbers up to 39 digits long without even being able to write his own name.

Although around half of savants are autistic, most people with autism are not savants. Scientists estimate that about 1 in 10 people with autism spectrum disorder show some type of savant skill [1]. However, these are usually splinter skills: an obsessive preoccupation with, and memorization of, objects, historical facts, numbers or music. There are currently fewer than 100 known savants whose skills would be extraordinary even in a person without any disability. These people are called prodigious savants [1].

Just for fun, let’s look at some more examples of prodigious savants:

  • Laurence Kim Peek was probably the most famous prodigious savant; he inspired the savant character in the movie ‘Rain Man’. Kim Peek was born with brain abnormalities that impaired his physical coordination and his ability to reason, but also gave him an incredible talent for memorization. During his lifetime, Peek read more than 12,000 books, reading two pages at the same time (one with his left eye and the other with his right). He was able to recall everything he read, even years later.
  • Stephen Wiltshire was born autistic and developed a talent for drawing buildings from memory. He started by drawing cars and animals at the age of five, and by the age of seven he had become obsessed with drawing famous buildings in his hometown of London. In 2006 he was flown over Rome in a helicopter; that one ride was enough for him to draw a detailed panorama of the entire city from memory.
  • Leslie Lemke was born with birth defects that left him so severely handicapped that he could not manage simple tasks such as eating or dressing himself. However, at the age of sixteen he suddenly played a piece by Tchaikovsky perfectly, without any piano training, after hearing it once on television. He became a famous musician, playing all kinds of musical styles and needing to hear a piece only once before being able to play it.

As you may have noticed from these examples, savant skills tend to fall into a few categories: art, music and mathematics, accompanied by a remarkable memory and eye for detail. Another thing you might have noticed is that all the examples I gave are male. Savants are predominantly male, with the ratio currently estimated at about 6:1 [1]. One last characteristic of savants is that they are obsessed with their skill, often doing little else but this one specific thing.

Although savants have long intrigued neuroscientists, relatively little research has been done on what causes their outstanding talents. One difficulty is that savants have a wide variety of brain diseases and injuries, affecting different brain areas and arising at different times in development. Although most savants are born with their brain impairments and develop their talent at a very young age, there are also cases of acquired savants, who develop their talent after sudden brain damage or dementia, leading some scientists to believe that we all have these talents hidden in us.

A common theme among savants is damage to the cortex of the left hemisphere. This region contains areas responsible for multiple functions, including language [2]. Indeed, many savants have difficulties with language, often starting to speak at a very late age and referring to themselves in the third person. Scientists who believe that savant skills are related to left cortical damage argue that their theory is strengthened by the fact that most savants are male: testosterone slows the development of the left hemisphere, making males more vulnerable to prenatal left cortical damage than females (also see Nick's article about lateralization!).

However, left cortical damage cannot account for all savants. There are many savants who do not have left cortical damage, for example many autistic savants. Furthermore, there are also many people with left cortical damage who do not have savant skills. The exact neural cause of savant skills thus remains unknown.

So, what have savants taught us then?

I would say the main thing we have learned from savants is that the brain is capable of doing incredible things. As I mentioned earlier, the existence of acquired savants has led people to believe that perhaps we all have these skills in us. If that is the case, then perhaps one day we will be able to unleash them!

However, we have also learned from savants that these talents always come at a cost. Most prodigious savants are so severely handicapped that they cannot live by themselves. It thus seems that there may be a trade-off in the brain between having these special talents and being able to do many ordinary things.

Learning more about this balance, and about how we might one day disrupt it temporarily rather than permanently, might eventually turn all of us into superhumans. For now, however, we are humans studying the brain, reading about savants, and dreaming of our true potential.

 

References

1) http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2677584/

2) http://www.sciencedirect.com/science/article/pii/S1053811905024511

 

A great article about more brain celebrities:

http://www.motherjones.com/environment/2014/06/inquiring-minds-sam-kean-neuroscience-patients

A nice movie about Kim Peek:

https://www.youtube.com/watch?v=k2T45r5G3kA

 

 

Getting a Sense of the Sixth Sense

“This ‘proprioception’ is like the eyes of the body, the way the body sees itself.”

– Oliver Sacks in The Man Who Mistook His Wife For A Hat

Think about baseball. Right before the pitcher throws the ball, the ball and his hand are behind him, out of his sight. Yet, he knows where his hand and the ball are and how both are moving. How is this possible? The pitcher can tell where the ball is using his sixth sense. No, this is not the same sixth sense that the character played by Haley Joel Osment has in the movie The Sixth Sense. This sixth sense is known as proprioception (pronunciation: PRO-pree-o-SEP-shən). Proprioception is the sense that allows us to determine the relative position and movement of our body parts in space. So what do we know about proprioception? How does it work?

Charles Sherrington, who coined the term “proprioception”.

The word “proprioception” was coined by the British scientist Charles Sherrington in 1906. Sherrington identified two types of sensory receptors in muscle that are now known to underlie proprioception: muscle spindles and Golgi tendon organs. Muscle spindles lie within the muscle and detect changes in muscle length, while Golgi tendon organs sit at the junction between muscle and tendon and detect changes in muscle tension (the force the muscle exerts). Unlike the other five senses, proprioception does not have a dedicated sensory organ; rather, information is collected from the whole body. This information is sent up through the spinal cord to the cerebellum, where the positions of body parts in space are computed.

While the neurons involved in proprioception have long been identified, the molecular mechanisms underlying this sense are just beginning to be understood. In a recent paper, Woo et al. (2015) identify Piezo2 as a mechanically gated ion channel involved in translating muscle movement into electrical signals that are then transmitted to the central nervous system. When a muscle moves, the membrane of the neurons innervating it also moves, creating a mechanical force that opens these ion channels, allowing positively charged ions to flow into the neuron and thus creating an electrical impulse.

The muscle spindle and Golgi tendon organ are the proprioceptors that detect and transmit changes in muscle length and tension to the rest of the nervous system (image source: http://www.medicalook.com/human_anatomy/organs/Proprioceptors.html)

The first evidence for Piezo2’s involvement in proprioception was its strong expression in the neurons innervating muscle spindles and Golgi tendon organs in mice. The authors also demonstrate that the electrophysiological responses of these neurons to mechanical stimulation resemble those of cells expressing the Piezo2 channel alone. Finally, the authors show that mice lacking Piezo2 in proprioceptive neurons (conditional knockout mice) have severely impaired limb coordination, suggesting that Piezo2 is necessary for proprioception. [1] These findings have opened the door for scientists to further dissect the molecular mechanisms that give rise to proprioception within muscle spindles and Golgi tendon organs. It should be noted, however, that there is much more to proprioception than these two receptor types.

Longo and Haggard (2010) argue that while the cerebellum receives information on muscle stretch and tension from proprioceptive neurons, this information on its own is insufficient for a person to know where their body is in space. Information such as body size and shape is also crucial for this process. Longo and Haggard hypothesize that there must be a “stored body model” that the brain learns over time and uses as a reference map for information such as the length of your arm. [2] It has also been suggested that the development of this map incorporates information not just from proprioceptive neurons but also from other senses, primarily vision. [3] This could explain why a child learning the piano for the first time needs to look at his fingers to make sure he hits the right notes, while a master pianist can play blindfolded and still know exactly where each finger is relative to the others.

Proprioception, the sixth sense, is a critical one. It enables many of the daily actions we perform without thinking. A loss of proprioception is almost unimaginable to most of us, but it does happen: damage to the nerves from injury or infection can leave people unable to tell where their body parts are when they cannot see them (Ian Waterman is one such person [4]). Developing a better understanding of how proprioception works and what mechanisms are involved is therefore vital. How are visual and proprioceptive inputs assimilated to create this “stored body model”? How do the molecular mechanisms underlying proprioception change as we develop this mental map of our body, or as we learn to play the piano? Is Piezo2 expression or function involved? It will be interesting to see these questions answered as we get a better sense of proprioception.

References:

  1. Woo, S-H., Lukacs, V., de Nooij, J.C., Zaytseva, D., Criddle, C.R., Francisco, A., Jessell, T.M., Wilkinson, K.A., Patapoutian, A. (2015) Piezo2 is the principal mechanotransduction channel for proprioception. Nature Neuroscience.
  2. Longo, M.R., Haggard, P. (2010) An implicit body representation underlying human position sense. PNAS 107(26).
  3. Blanke, O., Slater, M., Serino, A. (2015) Behavioral, neural and computational principles of bodily self-consciousness. Neuron 88.
  4. http://www.hhmi.org/biointeractive/ian-waterman-compensating-proprioceptive-loss

The battle of the sexes in biomedical research

Women’s rights activist Elizabeth Cady Stanton declared that “all men and women are created equal,” a proclamation necessary to win women the right to equal representation in the eyes of the government. Ironically, I believe we now need to realize just how differently men and women are created in order to secure a new set of rights for women: equal representation in preclinical biomedical research. It isn’t news that males and females are biologically different (duh), but historically, scientists have conducted studies predominantly on male mice, presuming those discoveries would be applicable to both men and women; unsurprisingly, current research is pointing out the many flaws in this logic. In May of last year (2014), the National Institutes of Health (NIH), the agency charged with allocating government funds to research grants, announced policies that would require basic scientists to use both male and female mice in their studies. The announcement of these policies included the following reasoning:

"The over-reliance on male animals and cells in preclinical research obscures key sex differences that could guide clinical studies. And it might be harmful: women experience higher rates of adverse drug reactions than men do. Furthermore, inadequate inclusion of female cells and animals in experiments and inadequate analysis of data by sex may well contribute to the troubling rise of irreproducibility in preclinical biomedical research, which the NIH is now actively working to address."

As with most changes in funding policy, there was pushback against the NIH’s new requirements. Including both male and female mice would almost certainly increase variability, resulting in more time and resources spent answering experimental questions. As a sixth-year graduate student, I whole-heartedly agree that no scientist wants to see those two factors increase. One suggested solution was to increase allocated funds for studies specifically designed as “sex differences research.” This would essentially give scientists license to presume in advance what may or may not differ between the sexes. My personal experience over the past months calls bullshit.

I use mice as a model system to study cells of the sympathetic nervous system, which regulates the “fight or flight” response we experience when faced with a stress trigger (e.g. a big exam, public speaking, or a swerving car). Specifically, I examine how signaling through a specific protein helps sympathetic neurons develop connections with another type of neuron. There’s no reason to think there would be sex differences in the basic circuit that I study: both males and females have these cells, both males and females develop the same inter-neuronal connection, and that connection functions similarly in both sexes to control pupil dilation, increased sweating and increased heart rate, among other things. However, after a relatively long struggle with results that didn’t always add up, I discovered the missing piece of the puzzle: male and female mice have inherently different expression patterns of that specific protein in sympathetic neurons. In the immediate term, this means that I can move forward with my experiments and obtain cleaner data by analyzing results according to sex. But I’m intrigued: what does this mean more generally for how cells of the sympathetic nervous system connect to form circuits and thus function? And I’m hooked on the possibility that differences between the sexes may be more common than most scientists currently think.

Supporting my new take on sex differences is an interesting paper on pain that I recently came across in Nature Neuroscience, from Jeffrey Mogil’s lab. For years, Mogil has called for equal representation of male and female mice in pain research, and his recent paper illustrates that males and females differ even in unexpected areas such as chronic pain. Essentially, the study demonstrates that drugs that reduce hypersensitivity to pain in male mice do not do so in female mice. Why? Because the pain-causing mechanism is fundamentally different between the sexes. This is an important finding because one of the drugs used in the study was actually used in a clinical trial for neuropathic pain, a trial that included both men and women. This work underlines a major flaw in studying only one sex in basic science. Adding fuel to the painful fire is the fact that women make up the majority of chronic pain patients. How is studying only male mice fair to those women?

My own experience and current publications point to the fact that we, as scientists, cannot predetermine which biological systems will show sex differences. It seems clear that preclinical studies need to do their due diligence and include both sexes in order to provide information relevant to both men and women. The new NIH policies may not be perfect in how they require this, but it seems to me that every researcher should remember that sex matters, so that all patients, men and women, can be duly represented.

 

Sources:

http://www.nature.com/news/policy-nih-to-balance-sex-in-cell-and-animal-studies-1.15195

http://www.scientificamerican.com/article/testing-males-and-females-in-every-medical-experiment-is-a-bad-idea/

https://clinicaltrials.gov/ct2/show/NCT01869907

Welcome to the brain: ID access only

Just a 3-pound greyish mass, the brain may not look too intimidating. Yet scientists have puzzled for decades over how to dissect the tight network of the brain’s 100 billion connected neurons that at first all seem alike. Over the last century, the field of neuroscience has developed to investigate this intriguing subject, and within it different approaches have grown into distinct disciplines. These all similarly aim to unravel the networks underlying animal behavior, but do so using different techniques and methods.

For example, one discipline that has been quickly expanding and receiving much attention from the media is connectomics. The aim of this field is to create detailed maps, or connectomes, of all synaptic connections among neurons. This stands in contrast to more conventional approaches that interrogate specific pathways and connections among a restricted group of neurons. The generation of these connectomes has been strongly pursued through the development of techniques for high-resolution, high-throughput reconstruction of small volumes of nervous tissue. For example, recent work by Kasthuri et al. (2015) shows the potential of new electron microscopy (EM) techniques for expanding our understanding of the connections and structure of the brain. Unlike other forms of histology that rely on fluorescence to visualize cells, EM generates high-resolution images that reveal all membranous structures. Kasthuri and colleagues imaged a small volume of brain, slice after slice, then aligned the slices and traced cells across them, thereby reconstructing their full shapes.
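To give a flavor of just one small step in such a pipeline, here is a toy sketch (my own illustration on synthetic images, not the Kasthuri et al. code) of estimating the shift between two consecutive imaging slices by phase correlation, the kind of alignment that has to happen before cells can be traced across slices; real connectomics pipelines are, of course, far more elaborate.

```python
import numpy as np

def estimate_shift(slice_a, slice_b):
    """Estimate the (row, col) translation between two consecutive
    imaging slices using phase correlation (a toy stand-in for the
    much more sophisticated alignment used in real EM pipelines)."""
    fa = np.fft.fft2(slice_a)
    fb = np.fft.fft2(slice_b)
    cross_power = fb * np.conj(fa)
    cross_power /= np.abs(cross_power) + 1e-12   # normalize; guard against divide-by-zero
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # The peak position encodes the shift, wrapped around the image edges.
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, correlation.shape))

# Synthetic example: a second "slice" shifted by (3, -5) pixels.
rng = np.random.default_rng(0)
a = rng.random((128, 128))
b = np.roll(a, shift=(3, -5), axis=(0, 1))
print(estimate_shift(a, b))   # -> (3, -5)
```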

While connectomics is an exciting new approach to studying brain circuits, here I want to briefly discuss recent advances in another discipline of neuroscience. In order to make sense of the billions of neurons in the brain, scientists are classifying neurons into distinct types and subtypes. Knowing the identity of the different kinds of ‘players’ in the brain can be critically important for understanding the game, and it provides a tool for interrogating the formation and function of networks.

The classification of neurons is generally based on similarity in gene or protein expression. While it is very rare for a single gene alone to define a neuron’s identity, combinatorial expression patterns can often be used as markers to identify cells. This is a sort of bar code for each neuron, and scanning it allows scientists to catalog the cell with others of its kind.
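To make the bar-code idea concrete, here is a minimal sketch of what ‘scanning’ such a code could look like; the marker names and cell types are entirely made up, not taken from any real atlas.

```python
# A hypothetical lookup table: each putative cell type is defined by a
# combination of markers, not by any single gene.
CELL_TYPE_BARCODES = {
    "type_A": {"markerX", "markerY"},
    "type_B": {"markerX", "markerZ"},
    "type_C": {"markerY", "markerZ", "markerW"},
}

def classify(expressed_markers):
    """Return the cell types whose full marker combination is present
    in this cell's expression profile."""
    expressed = set(expressed_markers)
    return [cell_type for cell_type, barcode in CELL_TYPE_BARCODES.items()
            if barcode <= expressed]

print(classify({"markerX", "markerY", "markerQ"}))   # -> ['type_A']
```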

Most work on neuronal type classification has been done using antibodies (especially against transcription factors). For instance, there are extensive libraries of markers for cells in the different cortical layers. Recent work by Macosko et al (2015) at the Harvard Medical School has introduced a method for a more comprehensive classification of neurons. They used DropSeq, a new technique that allows one to read out sequences of RNA in single cells by trapping and rupturing the cells in nanodroplets, taking out all mRNA strands, and assigning to them a cell-specific bar-code. Each mRNA strand could then be sequenced and assigned to every cell it is expressed in. Knowing the complete gene expression profile of each cell allows comparison to other cells and clustering them by similarity!
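Conceptually, the downstream analysis treats each cell as a vector of gene counts and groups similar vectors together. The sketch below illustrates that idea on synthetic data using generic PCA plus k-means; the actual analysis in Macosko et al. uses a more involved pipeline, so take this only as a cartoon of “clustering cells by similarity”.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Synthetic stand-in for a cells-by-genes count matrix
# (real Drop-seq data: thousands of cells, ~20,000 genes).
rng = np.random.default_rng(1)
n_cells, n_genes, n_types = 300, 500, 3
type_profiles = rng.poisson(5, size=(n_types, n_genes))   # one expression "signature" per type
labels_true = rng.integers(0, n_types, size=n_cells)
counts = rng.poisson(type_profiles[labels_true])          # noisy cells drawn from those signatures

# Normalize, reduce dimensionality, then cluster cells by similarity.
log_counts = np.log1p(counts / counts.sum(axis=1, keepdims=True) * 1e4)
pcs = PCA(n_components=20).fit_transform(log_counts)
clusters = KMeans(n_clusters=n_types, n_init=10, random_state=0).fit_predict(pcs)

print(np.bincount(clusters))   # number of cells assigned to each putative type
```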

Macosko and colleagues tested this new method on the retina. Here, the ingredients of the circuit have actually been known for quite some time. For instance, different classes of interneurons (e.g. bipolar cells) synapse specifically onto separate types of projection neurons (i.e. retinal ganglion cells). Many of these types had previously been identified, and they were matched to the clusters of cells found with Drop-seq.

As we scramble to find the elaborate and specific genetic bar-codes that identify cell groups, we often haven’t even begun to think about the role these characteristic gene expression patterns play in shaping the cell. Nevertheless, genetic bar-codes have been useful because they allow us to manipulate networks, and thereby provide a powerful strategy for probing their functions. Genetic signatures for cell groups have become a central tool in neuroscience, and the libraries of transgenic lines that allow fluorescent protein expression, as well as genetic manipulations (e.g. knock-outs), in select groups of neurons are ever-growing.

A major concern when using these tools is that the markers may not be reliable or specific enough to really study precise families or groups of neurons. In a comprehensive review of viral and transgenic reporters for adult-born neurons, Enikolopov et al. (2015) acknowledge the variability that arises from using single-gene promoters to drive expression of fluorescent proteins in particular cell populations. For instance, some transgenic lines may not label all cells of a type, while others label cells that are morphologically and functionally dissimilar. Variations in the timing and intensity of gene expression are only two possible explanations for this ‘leakiness’ in transgenic lines. While conceptually appealing, cell types aren’t as clearly defined as we’d like them to be. It is a slippery slope from here to saying that all cells are unique, and that it is their placement and role in a network that defines their function, not their genetic identity. And yet, gene expression does shape a cell’s migration pattern, its morphological growth, and its excitability. Cell types may not be fully distinct, but genetically classifying neurons remains an invaluable resource for understanding the structural and functional roles of single cells in neural pathways.

References:

  • Enikolopov G, Overstreet-Wadiche L, Ge S (2015) Viral and transgenic reporters and genetic analysis of adult neurogenesis, Cold Spring Harb Perspect Biol, 7:a018804.
  • Kasthuri N, Hayworth KJ, Berger DR, Schalek RL, Conchello JA, Knowles-Barley S, Lee D, Vazquez-Reina A, Kaynig V, Jones TR, Roberts M, Morgan JL, Tapia JC, Seung HS, Roncal WG, Vogelstein JT, Burns R, Sussman DL, Priebe CE, Pfister H, Lichtman JW (2015) Saturated reconstruction of a volume of neocortex, Cell, 162, p 648-661.
  • Macosko EZ, Basu A, Satija R, Nemesh J, Shekhar K, Goldman M, Tirosh I, Bialas AR, Kamitaki N, Martersteck EM, Trombetta JJ, Weitz DA, Sanes JR, Shalek AK, Regev A, McCarroll SA (2015) Highly parallel genome-wide expression profiling of individual cells using nanoliter droplets, Cell, 161, p 1202-1214.

Nature, nurture and some randomness

Since Sir Francis Galton famously framed the “nature vs. nurture” debate in 1869, most scientists have thought that it is a combination of genetics (nature) and sensory experience (nurture) that guides how the brain wires itself to produce different behavioral outputs. The notion that nature and nurture work together comes from studies showing that monozygotic twins (who share nearly identical genetic material) brought up separately often display more similar behavioral traits than dizygotic twins or ordinary siblings. The higher behavioral similarity suggests that nature plays a role, while the fact that there still are some behavioral differences remains, at present, the best evidence that nurture is important as well. Interestingly, a recent meta-analysis published in Nature Genetics, covering the twin studies performed over the past 50 years, argues that the contributions of nature and nurture to behavior are pretty much 50-50 (Polderman et al. 2015). Taking these ideas to a more synaptic level, a large number of studies have identified a variety of molecules involved in the development of neuronal connections that function either in an activity-independent manner (nature) or in response to neuronal activity, likely in the form of sensory stimulation (nurture). However, are activity-independent molecules and neuronal activity the only two contributors to brain wiring? Could we imagine a scenario where both factors are equally matched and yet only one connection has to be chosen? How would this occur? Could there be other factors, perhaps like randomness?
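To see how twin studies put rough numbers on nature and nurture, the classic Falconer formulas decompose trait variance from the two twin correlations. The meta-analysis itself uses far more sophisticated modelling, and the correlations in the sketch below are invented purely for illustration.

```python
def falconer_estimates(r_mz, r_dz):
    """Rough variance decomposition from twin correlations
    (classic Falconer formulas; assumes equal environments,
    no dominance effects, etc.)."""
    heritability = 2 * (r_mz - r_dz)        # "nature"
    shared_environment = 2 * r_dz - r_mz    # shared "nurture"
    unique_environment = 1 - r_mz           # everything else, including noise/randomness
    return heritability, shared_environment, unique_environment

# Illustrative (made-up) correlations for some behavioral trait:
h2, c2, e2 = falconer_estimates(r_mz=0.70, r_dz=0.45)
print(round(h2, 2), round(c2, 2), round(e2, 2))   # -> 0.5 0.2 0.3
```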

In Neuron this past month, Owens et al. (2015) show evidence for randomness, in the form of a stochastic interaction between activity-independent molecular cues and activity-dependent mechanisms, in the formation of the topographic map in the superior colliculus (one of the first brain areas that neurons from the retina connect to). It has already been demonstrated that the projections from the retina to the superior colliculus are sculpted both by molecular cues (ephrin and EphA receptor gradients) and by neuronal activity (spontaneous activity waves): in mice where either ephrin signaling or spontaneous activity is blocked, the topographic map is disrupted. However, this strategy cannot tell us how the two factors interact to produce the final map. In this paper, the authors use intrinsic signal optical imaging in transgenic mice in which both ephrin signaling and activity are present but no longer act in concert, in order to investigate the relationship between molecular cues and activity. The authors find a high level of heterogeneity in the organization of retinocollicular inputs in these mice, which is best explained by a stochastic model in which some connections are organized by molecular cues while others are organized by activity, in a random manner.
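A cartoon of what such a stochastic model might look like is sketched below. This is my own toy illustration, not the model fitted by Owens et al.: each retinal input independently follows either the molecular cue or a decorrelated, activity-defined target, producing maps that differ from “animal” to “animal” even though the underlying rule is the same.

```python
import numpy as np

rng = np.random.default_rng(42)

n_inputs = 100
retina_positions = np.linspace(0.0, 1.0, n_inputs)   # normalized position along the retina

# Two candidate targets for each input (toy assumptions): molecular cues
# point to the "correct" topographic location, while decorrelated activity
# points to a shuffled location.
molecular_target = retina_positions.copy()
activity_target = rng.permutation(retina_positions)

def simulate_map(p_molecular):
    """Each input independently follows the molecular cue with
    probability p_molecular, otherwise the activity-defined target."""
    follow_molecular = rng.random(n_inputs) < p_molecular
    return np.where(follow_molecular, molecular_target, activity_target)

# Simulated "animals" with the same mixing probability end up with noticeably
# different maps -- the heterogeneity a stochastic model predicts.
for animal in range(3):
    collicular_map = simulate_map(p_molecular=0.5)
    topographic_order = np.corrcoef(retina_positions, collicular_map)[0, 1]
    print(f"animal {animal}: map/retina correlation = {topographic_order:.2f}")
```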

While some degree of randomness and noise in the nervous system is inevitable, it is commonly believed that the nervous system works to minimize this randomness. By contrast, the Owens et al. paper demonstrates a role for randomness in nervous system development. These results make me wonder what other processes may be driven in a stochastic manner, and why this might be beneficial. A stochastic system might allow topographic maps to be established faster, or perhaps allow for greater variation and adaptability. In any case, it is interesting to think that perhaps our brain is nature, nurture and a little bit of randomness.

References:

  • Owens, M.T., Feldheim, D.A., Stryker, M.P., Triplett, J.W. (2015) Stochastic interaction between neural activity and molecular cues in the formation of topographic maps. Neuron 87, 1261-1273.
  • Polderman, T.J.C., Benyamin, B., de Leeuw, C.A., Sullivan, P.F., van Bochoven, A., Visscher, P.M., Posthuma, D. (2015) Meta-analysis of the heritability of human traits based on fifty years of twin studies. Nature Genetics 47(7).

Luck in Science

One of the most fascinating results in brain research - one that revolutionized neuroscience, launching it into the modern age - came from David Hubel and Torsten Wiesel in a series of papers in the early 1960s. Hubel and Wiesel were awarded the Nobel Prize in 1981 for finding that the brain breaks down visual scenes into elementary components, dedicating networks of neurons to compute simple features that are eventually built up again into ever more complex representations.

As we wait to learn who this year’s Nobel Prize winners are, I can’t help but wonder how those women and men come upon their fascinating findings. From Galileo’s revolutionary work in astronomy to Oswald Avery’s determination of DNA as the molecule of heredity, the scientific method has been the unrivaled system of discovery in a world full of mysteries. While there is no doubt that the scientific method is still producing incredible new knowledge, it is not clear why some scientists are more successful than others at utilizing that method. If all scientists - supposedly intelligent, driven people - employ the same general strategy, why are some so much better at discovery than others?

In their account of their 25-year collaboration, Brain and Visual Perception, Hubel and Wiesel describe the chance situation that contributed to their initial result:

“Suddenly, just as we inserted one of our glass slides into the ophthalmoscope, the cell seemed to come to life and began to fire impulses like a machine gun. It took a while to discover that the firing had nothing to do with the small opaque spot [i.e. the intended stimulus]—the cell was responding to the fine moving shadow cast by the edge of the glass slide as we inserted it into the slot... People hearing the story of how we stumbled on orientation selectivity might conclude that the discovery was a matter of luck.”

 

My own experience in neurophysiology has produced some seemingly lucky results. As a technician in the lab of Tim Gardner at Boston University, I worked to develop a system to record the activity of large numbers of neurons in singing zebra finches using minimally invasive carbon fiber electrodes that promised to outperform traditional metal electrodes because of their small size and biocompatibility.

Our strategy seemed straightforward, but I kept running into a problem - after coating the carbon fibers with insulating plastic, the fibers’ electrical resistance went through the roof, making it practically impossible to see electrical activity from the neurons.

After taking several images of the fibers’ tips under an electron microscope, I realized that the problem lay in the way that I had been cutting the fibers: instead of a nice carbon core surrounded by a layer of plastic, like a pencil’s graphite cased in wood, the tips more closely resembled a chewed straw or gnarled tree, with the plastic practically swallowing the carbon in a frayed mess. The scissors I was using had been crushing my fibers, leaving almost no carbon surface exposed to record neural activity!

I needed a way to cut the fibers cleanly. This was a stage of wild exploration: the first idea featured a hacked hard-drive that was supposed to grind the plastic off the carbon tips; after that failed, I embedded the fibers in wax and cut them on a machine normally used to cut slices of brain (a distant cousin of the deli slicer). When that didn’t pan out, I nearly burned down the lab by trying to torch carbon tips that were just barely protruding from the same wax embedding I had used before; in the excitement of the prospect of success, I forgot that the wax was based in ethanol, and watched my latest idea go up in flames.

The torching wasn’t such a bad idea though - I simply needed to experiment using a non-flammable insulator. The answer was water. I poured a little bath for my carbon electrodes and immersed them in the water, with the tips sticking out above the water surface; I then ran my torch over the water and measured their resistance. The tips emerged clean, the resistance low and as an added bonus the tips had tapered to a fine point, making insertion into the brain much easier!

This modest scientific success seems to have come from a combination of perseverance and exploration, two seemingly contradictory tactics - a sort of focused play. Was it a matter of luck that I eventually stumbled onto a suitable method? Hubel and Wiesel analyze the matter of luck in their first success:

“While never denying the importance of luck, we would rather say that it was more a matter of bullheaded persistence, a refusal to give up when we seemed to be getting nowhere. If something is there and you try hard enough and long enough you may find it; without that persistence, you certainly won't. It would be more accurate to say that we would have been unlucky that day had we quit a few hours before we did... But just as important as stubbornness, in getting results, was almost certainly the simplicity, the looseness, of our methods of stimulation.”

 

   

While it may be impossible to predict who will succeed in science or which experiments are going to be worthwhile, we shouldn’t rely on blind luck for success. Louis Pasteur wrote that chance favors the prepared mind. That adage might work not just for school examinations, but for the uncharted land of science as well. Perhaps what we can take from Hubel and Wiesel’s reflections is that chance favors the open yet persevering mind, those who ask interesting questions and don’t give up until the results are in hand.