Spying on proteins - how observation can affect the perception of reality


One of the highest ideals in science is to observe natural events in their native context. Doing so is a constant challenge thanks to the observer effect, described by Heisenberg and others, whereby the act of observing or measuring a process alters it. Thus, scientists of all stripes try to stay out of the way, attempting to produce the most accurate measurements possible using specific yet unobtrusive tools. Wildlife photographers use long-range lenses so they do not have to stand directly in front of a herd of water buffalo and thereby affect the animals' behavior. Psychologists design tests in which the subjects are unaware of the true intent, so as to minimize changes in natural responses.

The same happens in cell biology. Processing samples (e.g., fixation) alters the phenomena being observed, and certain reagents can give false-negative or false-positive results. In addition, many assays provide static readouts: snapshots that only offer insight into what was happening at the moment of acquisition. Given this unfortunate but unavoidable reality, researchers are constantly in need of technology that minimizes disruption to biological systems while returning highly accurate data.

Drawbacks to Traditional Methods of Protein Visualization

One area of cell and molecular biology in which we see a push for more consistent and higher-resolution tools is protein expression. As a gross oversimplification, the challenge is to localize the ~20,000 human proteins within cells in order to understand their functions and interactions. The reality, of course, is far more difficult. It is not just 20,000 proteins in a cell, but different combinations of these proteins in many different cell types at various stages of the cell cycle, in both normal and perturbed states. On top of those variables, localization and quantification of expression must be accomplished at ever-increasing levels of spatial and temporal resolution.

There are several traditional tools available to researchers for protein localization. However, each has its limits and drawbacks. Here is a brief summary of the issues:

Fluorescent protein vectors:

This method is well established and relatively convenient, though it does require molecular biology capabilities. Cassette design and cloning have become far more efficient in recent years thanks to better molecular design software and new DNA assembly protocols. Still, this is a time-consuming, multi-step process with many potential entry points for mistakes to creep in, and the results tend to be highly variable.

Overexpression systems:

Overexpression is closely related to the previous section and the variability noted there. Once a vector has been produced, it must be introduced into the appropriate cellular system. Traditionally, this has been accomplished by transfecting the recipient cells with purified vector, placing the fluorescently tagged gene of interest under the control of a powerful exogenous promoter. As a result, expression of the protein is uncoupled from normal cellular processes. This in turn means that excessive amounts of protein can lead to artefacts in fluorescence microscopy, such as ectopic sub-cellular localization and fluorescently brilliant but biologically irrelevant protein complexes and inclusion bodies.

Conventional gene tagging:

Genome editing technology has enabled a significant improvement here in recent years. We can now drop the sequence of a fluorescent protein (FP) into the genome, where it will be expressed as a fusion at the N- or C-terminus of the protein of interest. By putting the FP under the control of the native promoter and in a normal genomic context, the problems with overexpression systems can be largely eliminated. However, in diploid cell lines, not every copy of the gene is tagged. Once again, quantification of protein expression is subject to error because untagged (and therefore "invisible") versions of a protein may be present.
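
To make the scale of that error concrete, here is a minimal back-of-the-envelope sketch in Python. The molecule count is hypothetical, and it assumes the simplest case: one tagged allele out of two, with both alleles expressed at equal levels.

```python
# A minimal sketch (hypothetical numbers) of the quantification error
# introduced when only one of two alleles carries the fluorescent tag.
# Assumes both alleles are expressed at roughly equal levels.

tagged_alleles = 1       # copies of the gene carrying the FP knock-in
total_alleles = 2        # diploid locus

true_protein = 10_000    # total molecules per cell (hypothetical)
visible_protein = true_protein * tagged_alleles / total_alleles

print(f"Fluorescence reports {visible_protein:.0f} of {true_protein} molecules "
      f"({visible_protein / true_protein:.0%} of the true level)")
# -> Fluorescence reports 5000 of 10000 molecules (50% of the true level)
```

Under these assumptions, a heterozygously tagged line under-reports the true protein level by half, and the error varies from gene to gene depending on how many alleles were tagged.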

Organelle-specific dyes:

Many organelle-specific dyes can be used in live cells and are activated by the internal environment; conditions such as pH cause the dye to emit a fluorescent signal. However, the internal organelle environment is not always optimal for this activation process, especially in disease states.
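
As a rough illustration of why this matters, here is a toy model (not drawn from any specific dye's datasheet) of a dye that fluoresces in its protonated form, with the protonated fraction given by the Henderson-Hasselbalch relation. The pKa and pH values are hypothetical.

```python
# A toy model of why a pH-activated organelle dye can lose signal when
# the organelle's pH shifts in a disease state. Assumes the dye
# fluoresces in its protonated form; the pKa of 5.0 is hypothetical.

def fluorescent_fraction(ph: float, pka: float = 5.0) -> float:
    """Fraction of dye in the protonated (fluorescent) form at a given pH."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

print(f"Healthy lysosome (pH 4.5): {fluorescent_fraction(4.5):.0%} of dye active")
print(f"Alkalinized lysosome (pH 6.0): {fluorescent_fraction(6.0):.0%} of dye active")
# -> roughly 76% vs 9%: the same dye, in the same organelle, reads far dimmer
```

In this sketch, a modest alkalinization of the organelle drops the active fraction of dye from roughly 76% to roughly 9%, which would read as a dramatic loss of signal even though the organelle is still present.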

Antibodies:

The first problem with antibodies is that they are not compatible with live-cell assays. In theory, though, they represent a perfect option for localizing proteins in fixed samples because they should target a specific antigen and are easy to detect through fluorescent tags or chemical reactions. Unfortunately, biology never cooperates with "theoretically perfect." Antibodies are and will remain an invaluable tool for researchers, but the field of antibody production is plagued with problems. Before every new antibody is used, protocols must be optimized to find the window of maximal sensitivity without increased background noise. Additionally, that idealized specificity is often missing. Most antibodies are produced by animals immunized with the intended target antigen. This natural bioreactor yields a heterogeneous antibody population that may bind multiple sites on the target antigen and cross-react with other molecules. The antibody production industry has historically lacked stringent validation protocols for new products, leading to significant concerns about specificity and, therefore, off-target labeling and inaccurate results.
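
The titration step mentioned above can be made concrete with a small sketch. The dilutions and intensity values below are hypothetical; the point is simply that the usable window is the dilution where the signal-to-background ratio peaks, not where raw signal is highest.

```python
# A minimal sketch of screening an antibody dilution series to find the
# window where specific signal is strong without background rising.
# All intensity values are hypothetical, in arbitrary units.

dilution_series = {
    # dilution: (mean specific signal, mean background)
    "1:100":  (950, 400),
    "1:500":  (900, 150),
    "1:1000": (700, 60),
    "1:5000": (200, 40),
}

for dilution, (signal, background) in dilution_series.items():
    ratio = signal / background
    print(f"{dilution:>7}: signal/background = {ratio:.1f}")

# Here 1:1000 gives the best ratio (~11.7): less raw signal than 1:100,
# but far cleaner. Each new antibody requires this kind of trade-off
# analysis before it can be trusted in an assay.
```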

Reviewing the Problem of Unvalidated Antibodies

It is those inaccurate results that are causing great concern from within (and, increasingly, from outside) the scientific community. In October 2015, Nature began a special collection called Challenges in Irreproducible Research that has gathered dozens of articles and editorials published over the past few years. One of these is a survey published on 25 May 2016 that sheds light on the "crisis" rocking research. Briefly, almost 80% of biologists have tried and failed to reproduce another researcher's results, and 60% have had the same problem with their own work. (The numbers for research as a whole are ~70% and >50%, respectively.)

The issue has numerous causes, but antibodies are at the centre of the storm. While some antibody producers do employ rigorous validation, too many others do not, or at least are not transparent about the process used to test a new reagent. For example, one can easily find antibodies recommended for use in western blot, IHC, IF, and ELISA across multiple species, where only a single western blot is shown as proof that the product works as advertised. Given the inherent risk of off-target binding, variability between batches, and inconsistent performance across applications, this lack of clarity only exacerbates the problem of reproducibility.

Finally, it should be noted that scientists are not innocent, either. Researchers will sometimes go ahead and use an antibody that has not been indicated for a particular application. Additionally, researchers tend to assume that commercially available antibodies have been thoroughly validated by the manufacturers and so do not include in-house validation in their workflows. David Rimm, a pathologist at Yale University, explained this side of the equation in another Nature article from May 2015:

"Most scientists who purchase antibodies believe the label printed on the vial, says Rimm." As a pathologist, I wasn't trained that you had to validate antibodies; I was just trained that you ordered them.

All of this adds up to massive waste in terms of time, money, and scientific advances. According to Monya Baker in the article Antibody Anarchy:

A 2015 report from online purchasing portal Biocompare puts the market for research antibodies at US$2.5 billion a year and growing. Losses from purchasing poorly characterized antibodies have been estimated at $800 million per year, not counting the impact of false conclusions, uninterpretable (or misinterpreted) experiments, wasted patient samples and fruitless research time.*
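
Putting those two figures side by side gives a sense of scale (a rough calculation using the numbers as quoted above; see the note at the end of this article regarding the loss estimate):

```python
# Comparing the estimated losses to the size of the research-antibody
# market, using the figures quoted in the Baker article.

market_usd = 2.5e9   # annual research-antibody market, ~US$2.5 billion
losses_usd = 0.8e9   # estimated annual losses, ~US$800 million

print(f"Estimated losses are {losses_usd / market_usd:.0%} of annual spend")
# -> Estimated losses are 32% of annual spend: roughly a third of the money
#    spent on research antibodies each year, before counting the downstream
#    cost of misleading results.
```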

What Is to Be Done?

Above, we have summarized a growing body of work describing the issue of poorly characterized antibodies in basic research. We have defined the problem, so what is the solution?

To be useful, an antibody must:

  • Be specific
  • Have a high signal-to-noise ratio
  • Be validated for the assay at hand

Read more in our next blog article, coming soon, which discusses some of the promising solutions emerging from recent advances in CRISPR-Cas technology to ensure that protein visualization methods give more physiological results.

*Note: The Baker article stated the estimated cost of bad antibodies at $800 million. However, the article it cited listed that number as $350 million. Furthermore, the latter article did not provide a methodology for this estimate, so caution is warranted.