by Edgar Garcia-Rill
The Aim of Science
Numerous journals published each month contain hundreds of articles addressing diseases, clinical care, and novel therapies. This outpouring is a persuasive argument in favor of brain research. Yet neuroscientists not only feel overwhelmed by the proliferation of their own literature; the sheer number of published “breakthroughs” also adds to the inordinate weight of the competition. We should remember that most theories are eventually proven wrong, and that this is “business as usual” in science. Considerable patience is needed to ensure that “breakthroughs” are properly replicated, validated, and accepted.
Science, after all, is the search for better answers, not absolute truth. The aim of science is to achieve better and better explanations. Sir Karl Popper proposed that a hypothesis can only be empirically tested, never proven absolutely true, and that it can be advanced only after it has been tested 1. It is perfectly acceptable for a scientist to be wrong, as long as he or she is honestly wrong, that is, as long as the experiment was designed and performed honestly.
Popper also advanced the concept of falsifiability. The honest scientist should apply this concept to his or her own theories and research findings. He or she should be their best critic, probing for weaknesses so that, by surviving withering criticism from the one scientist with the greatest familiarity with the experiment, the hypothesis can come closer to the truth. However, few scientists actually throw down the gauntlet in this way. Many defend their work with desperation and viciously criticize opposing theories. Some even censor the work of opponents by rejecting their manuscripts or grant applications. Logic would demand that a scientist strive to prove his or her own work false before someone else does, but that feat is difficult to accomplish during the typical 3- to 5-year period of a grant award. In other words, the funding granted for an idea requires supporting evidence and success if the grant is to be renewed for another similar span of time. Few “breakthroughs” can be proven correct (or incorrect) in such a short period; thus, the argument goes, more studies and a further funding period are often needed. Review committees face the task of keeping applicants from overselling their work. Generally, reviewers agree on the quality of an application, and they tend to shred weak ones, although in some instances unworthy grants get funded anyway. Conversely, due to the shortage of funds, many worthy projects go unfunded.
Sometimes a novel technique has excellent “wow” value and yet hides weaknesses. These flaws may take time to be exposed, especially when reviewers who have jumped on the bandwagon defend the technique out of self-interest. Some “exciting” methods can be adopted wholesale by an entire field without due consideration for proper controls. On occasion, the individual is so well respected that mere reputation can hide minor weaknesses. There is also the “halo” effect of being at a top-20 medical school, an effect that can provide enough of a nudge to get an award funded, although it may be no better than one dredged from the backwaters of science. The question is this: will any of those awards lead to a major breakthrough, a new cure, or a novel effective therapy? The answer is that we do not know. We do know that only a very few will provide a significant advance; yet if we do not fund the research, we relegate our lives to the status quo, with no options for the future.
So how can we determine which science to fund? How can we be certain which discovery is closer to the truth? How can we identify the finding that will lead to a new cure? We can design ways to do all these things better, but never with absolute certainty. A good starting point is the realization that we can be “snowed,” at least for a while.
Famous Neuroscientific Theories
The “Blank Slate” theory proposed by thinkers from Aristotle to St. Thomas Aquinas to Locke suggested that everything we know comes from experience and education, nothing from instinct or natural predisposition. Many of its proponents can be forgiven for advancing a “nurture or bust” philosophy, since genetics was not in their lexicon. That is, they had incomplete knowledge. An avalanche of data has since shown that many traits and instincts are inherited; the “nature” argument, we now know, is not exclusive of nurture.
At the beginning of the 19th century, “Phrenology” proposed that specific traits could be localized to distinct regions of the skull overlying the brain, creating detailed cortical maps of these functions. Its advocates exceeded the available data and in many cases used the practice for ulterior motives, including racism, to spread their influence. By the 20th century, such pinpoint assignments of skull regions had been discredited.
Another fallacy is that people “use only 10% of their brains,” an assumption deriving from a misunderstanding of studies of sensory-evoked responses, in which “primary” afferent input (e.g., vision, touch, hearing) activates only a small percentage of the cortex. This interpretation sidesteps the fact that most of the cortex is devoted to association functions that process such information both serially and in parallel. Embedded in this conclusion, however, is a real principle: neurons need to fire; otherwise, their influence on their targets is lessened. Without reinforcement, synapses weaken, almost as if the input were “forgotten.” “Use it or lose it” is the principle of brain activity. What this means is that our brain is continuously active — all of it.
Contrary to what many researchers espouse, many drugs shown to be efficacious in animals manifest limited effectiveness in humans 2. In fact, for most drugs tested in animals, the probability that the drug will also be effective in man is only that of a coin toss (~50%). Unfortunately, the opposite can also be true. Thalidomide, a drug tested in more than 10 species, hardly ever produced birth defects, except in humans 3.
Many of these theories were disproven not because of scientific fraud or faulty experiments; most were the result of incomplete knowledge, which includes the common problems of small study size, limited technology, and the like. But it was also the proponents’ inadequate application of falsifiability that allowed some of these spectacular failures. Had neuroscientists practiced falsifiability on their own theories, such failures might have been avoided.
Mixed in with such famous failures are a number of sophisticated and stunning discoveries about the brain. At the turn of the 20th century, Ramon y Cajal observed that the nervous system consists of individual cells, not a continuous network as was the thought of the time. In turn, Cajal’s work led to the description of the synapse and of chemical transmission across the narrow clefts between neurons. This description then led to the identification of a myriad of transmitters, some of which could alter behavior, followed by the development of psychoactive drugs that modulate mood, movement, and other functions. Pharmacological intervention soon allowed many patients to live outside an institution, eliminating the need for padded rooms and “lunatic asylums.”
About 30 years ago, it was thought that humans were born with all the brain cells they would ever have. It now appears that, although we lose cells throughout puberty, neurogenesis also occurs in the adult. The creation of new brain cells, a totally foreign concept until recently, is now accepted wisdom. How to control such generation is a topic of study in a number of neurodegenerative disorders.
In science, certain simple conclusions can have an unintended impact. The conclusion that Benjamin Libet reached in his 1980s experiments on the Readiness Potential is one example. Because the “will” to perform a movement appeared to arise before the person was aware of the intention to move, Libet concluded that we initiate movements through “unconscious” processes. Unfortunately, this conclusion led to another, disturbing one: that our subconscious is responsible for our voluntary actions. By extension, this meant that there was no free will. The implications for personal responsibility carried unwanted effects, including advancing legal arguments absolving miscreants of culpability. However, in Libet’s work the person studied was fully conscious, not unconscious. Moreover, while awake, we are aware of our environment as we navigate it, even though we do not expressly attend to any particular event. In other words, we are aware of cars and pedestrians as we carry on a conversation, often moving to avoid collisions. We are “pre-consciously” aware of the world around us and respond appropriately, even without attending to a particular event. This interpretation makes it clear that we are indeed responsible for our actions, for our voluntary movements. However, it is also clear that the perception of that environment, whether pre-conscious or conscious, is altered in mental disease. Disorders like psychosis can dramatically alter these perceptions and thus drive actions for which the person cannot be held responsible. That is why proper diagnosis of mental disease is essential.
It is inarguable that brain research has led to remarkable improvements in health and quality of life. The rather modest investment in funding targeting the brain has paid off exponentially. While the National Institutes of Health are funded at ~$40 billion yearly for research ranging from cancer to heart disease to the brain, spending for defense research is more than 10 times greater. While scientific review committees discuss, dissect, and agonize over a $1 million grant application for almost an hour, Congress makes billion-dollar defense funding decisions in minutes. We should realize that the successes in brain research will far outweigh the failures, but we should also know that only some of those successes will result in novel treatments.
In addition, the annual recurring costs of most brain diseases, in terms of medical costs, lost income, and care, run into the billions of dollars. A single novel treatment derived from a typical $5-10 million research program can save billions of dollars every year. For every dollar spent on research, we stand to save thousands every year; conversely, for every dollar we do not spend on research, we stand to pay thousands every year from now on.
One of the most appealing techniques in medicine is magnetic resonance imaging (MRI), which employs strong magnetic fields and radio-wave pulses, along with field gradients, to compute images of the brain. Functional MRI (fMRI) uses blood oxygenation levels to compute images that are assumed to reflect neural activity. The standard black-and-white displays allow the clinician to detect and measure tumors, infarcts, and even infection, as well as bone, fat, and blood. This technique has been a life-saver for a number of disorders in which clear, detailed, and accurate anatomical images are required. With the advent of more sophisticated MRI computation and fMRI, the displays have become color coded, so that changes in blood oxygenation are displayed in beautiful false-color images. It is this characteristic that allows proponents of the technique to oversell their product.
Today, fMRI is being used in research to draw unwarranted conclusions about the workings of the brain in real time. Researchers undertake studies of everything from voluntary movement to sensory perception to the performance of complex tasks. Some labs have “concluded” that they can distinguish truth-telling from lying and have pitched fMRI as a lie detector. The technical issues will not be rehearsed here, except to emphasize that the field has been remiss in standardizing the generation of images. As a result, the same experiment carried out by different labs produces different images and, therefore, different conclusions. The method requires recurring individual decisions about the weighting of factors, decisions that are applied differently by different researchers at multiple stages in the processing of an image. It is incumbent on researchers in the field to develop standardized methodology. Perhaps the most serious problem is that the technique actually measures blood flow, not neural activity. The pretty images represent the aftermath of brain processes that included both excitation and inhibition. An fMRI is essentially a static image of an ongoing complex event, much like taking a picture of an orchestra and, from the frozen positions of the players, drawing conclusions about the identity of the musical piece being played.
Granted, this illustration may be an exaggeration, but the fact remains that the mesmerizing effect of the images hides the fact that they are based on moving processes and founded on assumptions about how the brain works. Moreover, overselling of the technology has captured an undeserved portion of the funding pie. Many have naively moved to the technology without developing testable hypotheses and controllable experiments. The monies for the BRAIN Initiative have been virtually monopolized by this method, to the detriment of other valuable technologies. It is hoped that the limitations of the method will be exposed so that those using its more esoteric variables can better justify their decisions. The value of the technology to the clinical enterprise is without question, but when complex neural processes are studied using what is a measure of blood flow, the conclusions drawn can indeed be questioned.
One policy issue that emerges is the following: what is the harm in throwing money at the problem? Why not overfund a research area until all the problems are worked out? The answer is that such practices are unrealistic. A similar situation arose when agencies began pouring money into AIDS research. Funding levels that historically had supported between 5% and 15% of submitted AIDS grant applications rose to allow funding of 20% to 25% of applicants. The pressure to make breakthroughs increased, and “discoveries” came hard and fast, with seemingly rapid progress towards systematically resolving the problems of a complicated infectious process. Responsible labs were soon confronted with improperly controlled “discoveries.” These labs began spending resources and time validating questionable results and unsupported theories, forced to attempt to replicate many such findings in order to move the field ahead at all. The field suffered from its own overfunding.
Another set of techniques with which the public at large, including attorneys, is enamored is genetics. This powerful array of methods has exceptional promise. As a future clinical tool, personalized medicine stands to provide answers to a host of medical questions and may even give us some cures. But there is the issue of genes and determinism. Genes are not deterministic; they are very malleable, likely to produce different proteins under slight changes in conditions. In addition, genes are co-dependent, such that the expression of some genes depends not only on other nearby genes (in terms of chromosome location) but also on some distant genes.
The field of genetics promises to address the links between genes, the brain, behavior, and neurological and psychiatric diseases. Neurogenetics thus holds great promise for the future of clinical science, but it has also created a gap. Its promise has attracted the bulk of funding for genetic studies, pushing the testing of treatments and cures — that is, translational research — further into the future: a gap between patients who need treatment now and those who may be successfully treated with a genetic intervention in 20 or 30 years.
Research grants have migrated away from clinical studies towards molecular studies. Because of the complexity of the genome, short-term answers are unlikely, and premature genetic interventions could be catastrophic; yet the power of the technology has moved funding away from interventions in the clinic. Translational neuroscience is designed to bring basic science findings promptly to the clinic 4. It is a response to an Institute of Medicine report from 2003 calling for more emphasis in this area 5. The concern voiced in the report was the gradual decrease in research grants awarded to MDs (presumably doing research on patients) compared to PhDs (presumably doing research on animals): over a 10-year period, the percentage of MDs with awards had decreased from 20% to 4%. While some attention has been paid to increasing translational research funding, the fact is that most grant reviewers are basic scientists who are not very familiar with clinical testing and human subject research. Animal studies are more easily controlled than human subject studies, an inherent difference that makes for lower funding scores for human studies. It is incumbent on the research community to correct this discrepancy, because we stand to lose public trust. We now live in a world of immediate gratification, and cures far off in the future will not be warmly received.
Is the emphasis on genetics and molecular biology truly warranted? Definitely, but not at the expense of advances that could improve the quality of life of patients now rather than later. Some self-scrutiny is called for from the molecular biology community. For example, researchers should realistically identify some of the limits of their own technology. One area that needs such scrutiny is the knock-out mouse, a genetically modified mouse in which a gene is prevented from being expressed or is deleted from the genome. Knocking out the activity of a gene provides knowledge about the function of that gene, making for a marvelous model for the study of disease. The process is complex, certainly cutting-edge, and very effective if properly employed. Because of the variety of genes, the technology has given many labs the opportunity to develop their own knock-out mouse, leading to a myriad of new genetically modified mouse lines that researchers can make, buy, study, and manipulate.
The scientists who developed the technology won the Nobel Prize in Physiology or Medicine in 2007. Knock-out technology has to date been most successful in identifying genes related to cancer biology. These genetically altered animals allow the study of genes in vivo, along with their responses to drugs. The problem, however, has been the inability to generate animals that faithfully recapitulate the disease in man, a factor that is understated, to the detriment of all. In addition to the glaring facts that ~15% of knock-outs are lethal and that some fail to produce observable changes, there is the overlooked fact that knocking out a gene will up-regulate many other genes and down-regulate another large group of genes 6. In nature, single-gene mutations that survive are very rare, so the knock-out is not simply a study of such mutations; it is an attempt to learn everything that a single gene does. The problem is that, without knowing which OTHER genes are up- or down-regulated, the knock-out animal represents an uncontrolled experiment on a creature that never would have existed in nature. It is incumbent on representatives of the field to discuss these factors and adequately control their studies.
Some of these problems can be overcome by using conditional mutations, in which an agent added to the diet can induce a gene to be expressed, or to cease being expressed, temporarily. The problem is that this approach does not control for the up- or down-regulation of linked genes whose identities are unknown. Moreover, none of these methods measure compensation: very few researchers verify how the absence of the gene alters the expression of other genes in compensation. For example, knocking out the gene for the gap junction protein connexin 36 creates a mouse without connexin 36, but the manipulation leads to overexpression of other connexins 7. The field of knock-out mouse lines is expanding, increasingly uncontrolled, and funded well beyond its current scientific justification.
A final issue is that, as far as the brain is concerned, gene transcription is a long-term process, whereas the workings of the brain are in the millisecond range. Over the last several minutes, during the reading of this article, transcription was irrelevant: none of the perceptual, attentional, or comprehension elements of the information on these pages required gene transcription. Of course, the long-term storage of the information into memory requires gene transcription, but not before. Our brain takes about 200 milliseconds to consciously perceive a stimulus; gene action is on the order of minutes to hours, not on the same time scale. Gene transcription is irrelevant during a conversation with friends about the latest news. Assessing thought and movement in real time is too fast for genetic methods, but not for two technologies, the electroencephalogram (EEG) and the magnetoencephalogram (MEG).
The EEG amplifies electrical signals from the underlying cortex (just from the surface of the brain, not from deep structures), but these signals are distorted by skull and scalp. The MEG measures the magnetic fields of these electrical signals, but requires shielded recording rooms, massive computational power, and superconductors that function in liquid helium. Helium-free MEGs are now being developed, which would make the technology far less expensive to operate. The MEG is also very useful in producing exquisite localization of epileptic tissue, especially of the initial ictal (seizure activity) event, and as such is reimbursable for diagnostic and surgical uses. As the only real-time, localizable measure of brain activity, the MEG is likely to make inroads into the rapid events of the brain. Recent reports suggest that the MEG may also provide detailed images of any part of the body, including functioning muscle.
To gain perspective, we have to understand the battle within the brain sciences that has led to the current state. Subsequent to Sir Isaac Newton’s deterministic theories of how the world worked, there arose the idea that the brain worked the same way. That is, all brain function could be reduced to the smallest physical components, to the ultimate in micro-determinism. This approach was manifested in the brain sciences in the form of “behaviorism,” the idea that all actions and thoughts were due to the physicochemical nature of the brain. A major proponent was B. F. Skinner, who considered free will an illusion and held that everything one does depends on previous actions. Advances in molecular biology and the structure of DNA fanned the fervor for this view. There was no room for the consideration of consciousness or subjective states. This was the world of the reductive, micro-deterministic view of the person and the world. These views influenced education and policy, suggesting that the issue was not to free man but to improve the way he is controlled — the “behaviorist,” one-way reductionist view of the world in general and of the brain in particular.
The implication of “behaviorism” for thought and action was that consciousness was an epiphenomenon of brain activity, and that the reductionist approach, if only enough details were known, provided a complete explanation of the material world. This deterministic view of the world, along with its denial of free will, began to crumble under the weight of the discoveries of quantum mechanics. Behaviorism was replaced by a “cognitive revolution” that espoused mental states as dynamic emergent properties of brain activity. That is, a two-way street exists between consciousness and the brain: consciousness is fused to the brain activity of which it is an emergent property. This is not to imply a dualism of two independent realms; rather, mental states are fused with the brain processes that generate them. This approach eliminated the duality of “brain” versus “soul” or “mind.” Just as evolution undermined the tenets of creationism, the cognitive revolution dissipated the suspicion of a separate “soul.”
The “cognitive revolution” implied causal control of brain states downward as well as upward determinism. This two-way approach offers a solution to the free will versus determinism paradox: it retains both free will and determinism, integrated in a manner that preserves moral responsibility 8. In current mainstream scientific opinion, volition remains causally determined but is no longer subject to strict physicochemical laws. This view no longer holds that there is a mind-versus-brain paradox, but rather a singular, functionally interactive process. Instead of placing the “mind” within physicochemical processes, thought became an emergent property of brain processes.
To use a simplistic parallel, the brain is to thought and action as the orchestra is to music. Thought and action are emergent properties of the brain, just as music is an emergent property of the orchestra. Music cannot exist without the orchestra, just as thought and action cannot exist without the brain. This is a reciprocal relationship, one in which brain states influence thought and action (downward) and the external world modulates the activity of the brain (upward). The “mind” or consciousness can be viewed as the downward control of a system changing in response to continuously impinging external inputs.
This new viewpoint combines bottom-up determinism with top-down mental causation, the best of both worlds. The world of reductionism is not entirely rejected, merely considered not to contain all the answers, and an entirely new outlook on nature becomes manifest. The revolution in neuroscience has provided new values, whereby the world is driven not just by mindless physical forces but also by human mental values.
1 Karl R. Popper. 1983. “Realism and the Aim of Science.” In Postscript to the Logic of Scientific Discovery, W.W. Bartley III, ed. New York: Routledge.
2 R. Heywood. 1990. “Clinical Toxicity – Could it have been predicted? Post-marketing experience.” In Animal Toxicity Studies: Their Relevance for Man, C.E. Lumley and S. Walker, eds. Lancaster: Quay.
3 Niall Shanks, Ray Greek, and Jean Greek. 2009. “Are animal models predictive of humans?” Philos. Ethics Humanit. Med. 4: 2.
4 E. Garcia-Rill. 2012. Translational Neuroscience: A Guide to a Successful Program. New York: Wiley-Blackwell.
5 L.T. Kohn, ed. 2004. Academic Health Centers: Leading Change in the 21st Century. Committee on the Role of Academic Health Centers in the 21st Century. Washington, DC: National Academies Press.
6 D.A. Iacobas, E. Scemes, and D.C. Spray. 2004. “Gene expression alterations in connexin null mice extend beyond gap junctions.” Neurochem. Int. 45: 243-250.
7 D.C. Spray and D.A. Iacobas. 2007. “Organizational principles of the connexin-related brain transcriptome.” J. Membr. Biol. 218: 39-47.
8 Roger Sperry. 1976. “Changing concepts of consciousness and free will.” Perspect. Biol. Med. 20: 9-19.
Edgar Garcia-Rill, Ph.D., is the Director of the Center for Translational Neuroscience and a professor in the Department of Neurobiology and Developmental Sciences at UAMS.