Of (Lab) Mice and Men: When Cowboy Westerns Meet Science Research
“I buy three newspapers and each one of them has its own version of the truth. Where’s the real truth? You used to be able to get up in the morning, read Pravda, and know all you needed to know, understand everything you needed to understand” (Aleksievich 6).
A few months after joining a research lab in college and reading through our previous publications, I noticed something funny: our journal articles all sounded like war novels. Though the subjects were proteins and biochemical pathways, the verbs that kept appearing were “rescue,” “suppress,” “derail,” and misfolded proteins were often referred to as “the culprits,” “tractable targets,” or “antagonists.” In one case, proteins that interacted with the one under investigation were referred to as the “supporting cast.” While I assumed my overactive imagination was projecting itself onto the pages, I apparently wasn’t the first to have this thought: a grad student eventually hung up a photoshopped image of a fly lassoing a protein while wearing a cowboy hat. And that, for me, was the end of scientific objectivity.
Of (Lab) Mice and Men
The greater the uncertainty in the world, the more appealing the solidity of facts becomes. Over the past few months, I’ve seen a surge in public support for science. Or at least, I’ve seen many protestors carrying signs with slogans like “believe science,” “no sides in science!,” or “alternative facts are imaginary!” I’ve also been at events where others have said, in disbelief, “How can people doubt facts?” As Robert Tracinski wrote in a recent commentary, “Science isn’t about ‘belief.’ It’s about facts, evidence, theories, experiments.”
While I personally support any efforts to preserve and regenerate the environment, I would never rest my argument on the solidity of science and its facts. Ironically, neither should many scientists: to start with, there is a serious reproducibility crisis in many fields of science, meaning that scientists often cannot replicate the work of other researchers or even their own previous results (https://www.nature.com/news/1-500-scientists-lift-the-lid-on-reproducibility-1.19970). While this phenomenon is more prevalent in the life and social sciences than in physics and math, it still suggests a less-than-solid foundation in experimental science and data collection. Moreover, even when certain facts are agreed upon, decades can pass before it becomes clear that the original interpretation of those facts was incomplete or completely off target (https://www.advisory.com/daily-briefing/2019/03/26/alzheimer-drug).
Science, Shakespeare, and Subjectivity
And here, I offer my own point of view: Rather than standing behind the objectivity of science at all costs, I believe that science should be read the same way an English class would read Shakespeare or Nabokov.
When a novel is read, there is also a set of facts: the setting, the characters and their descriptive details, and the events and timeline that link everything together. Underlying all of these are metaphors, scope and lens, style, voice, and narration that may or may not be reliable. The ambiguity and richness of these stylistic components are what allow for so many interpretations of a play like Hamlet, endless commentaries and critiques of stories, and the intellectual fun of reading.
Scientific articles are equally complex and literary, but they are not treated that way. If you’ve ever seen an article titled “Scientists say more coffee prevents Alzheimer’s disease!”—or really anything beginning “Science says…”—you’re looking at a very sloppy and misleading reading of data. On a larger scale, this trend is unconscious and dangerously unexamined. In our own laboratory in college, we were so busy championing our valiant hero (protein), Hsp104, that we rarely acknowledged what happened when we over-expressed it in our yeast cells: we were able to save them from destruction by prion proteins…until the overzealous Hsp104 itself killed them. Somehow, the results were still framed as “suppressions of toxicity” or “rescues.” It was like we were promoting our own little Vietnam War-style narratives in the petri dishes. But this is true of all scientific narratives: they tend to filter out certain “unnecessary” details (“outliers”) or inconvenient results that do not serve them.
Probably the best example of an unexamined metaphor in science, and one with serious implications for researchers, patients, and society, involves cancer. Since President Nixon declared the “War on Cancer” in 1971, the military metaphor has stuck. Cancer cells and tumors are seen as invaders that must be destroyed in a total war that plays out in the body of the patient. For the patient, a part of himself or herself has suddenly gone rogue and violent, which is a frightening idea. If the patient does not win in combat against the cancer, he or she has “fought the good fight,” but “lost the battle.”
But what happens if a metaphor is reconsidered? What if cancer is framed not as a war, but as a communication problem? This is equally plausible, as cancer cells lose the ability to respond to stimuli that would otherwise induce them to die. Or what if cancer is seen as a disease of disorder, which can be reorganized (https://ase.tufts.edu/biology/labs/levin/publications/cancer.htm)? Or what if the approach to treating cancer were based on agricultural metaphors of integrated pest management (https://www.statnews.com/2018/06/27/cancer-treatment-prevention-evolution/)? The result is very creative science, with models that may or may not fully succeed, but are at least worth considering and testing out. The alternative, as demonstrated by the failure of the Amyloid Hypothesis in Alzheimer’s disease, is far worse. After decades of research and drug development, researchers, doctors, and drug companies are only now admitting that the consensus interpretation of the Alzheimer’s data, practically held as dogma, does not work.
Beyond metaphors in science, there are broader literary considerations: What is the narrative being told with the data, and do the facts support it? How large or small is the frame of investigation? Is the narrator reliable? Or, a question that parallels one being asked in the humanities: Are there voices that are not being heard? In this case, unheard voices refer to researchers who cannot get their views into Cell or Nature, the Harvard and Yale of peer-reviewed journals, or cannot attract sufficient funding to pursue a novel idea. And the biggest question of all: are we even asking the right question?
After a few years spent pursuing science research, I have as much respect for scientists and for the scientific method as ever. What I don’t respect is mistaking a narrative for the facts. Even with facts confirmed, the metaphors, the assumptions, and the scope of the story should be examined. While this may seem overly complicated, there is always a reference point for our stories: experience. If our narrative says the environment is healthy, then why is it covered in trash and chemicals? If we say we’re winning the war on cancer, does it feel that way to a cancer patient?
Which brings me to a final comparison between science and literature: both ask us to read them, to refine our interpretations over and over again as we gain new experiences and context, and never to get stuck in a single way of reading.
Aleksievich, Svetlana. Secondhand Time: The Last of the Soviets. Translated by Bela Shayevich, The Text Publishing Company, 2016.