In November 2015, a group of coronavirus researchers produced a novel, laboratory-made, chimeric virus with the potential to infect human airway cells and cause SARS-like disease. The research elicited a brief moment of controversy that followed a familiar pattern. Critics claimed that the experiment’s primary contribution was to make the world a riskier place. Its defenders did not disagree, but argued that this was good because it sent an important message: nature too could cook up such a virus, and we ought to take notice and take action; and furthermore, the experiment would foster discussion about whether the research itself was acceptable—whether “building chimeric viruses based on circulating strains [is] too risky to pursue.”
This pattern has become well-worn over the last two decades. Virologists undertake risky, eyebrow-raising, gain-of-function experiments whose utility is questionable, but whose effect is to produce controversy—and visibility. In virtually every case, researchers marshal the same justifications: the research can strike fear into the hearts of a complacent public about the threat of (naturally) emerging infectious diseases; more knowledge is always better for preparing for a (hypothetical) natural threat; and publishing the results, even when they might aid terrorists, will catalyze debate about whether such work should be permitted in the first place.
Controversy tends to fade quickly until it is revived by the next experiment, but even when research has produced significant public pushback, little has been done to alter a pattern in which scientists decide for themselves what risky research is warranted, and confront controversy (and the sort of celebrity that tends to accompany it) only after the work is done and disseminated. When outsiders have raised concerns, scientific experts have tended to dismiss them as expressions of ignorance. Ambivalence from within science tends to be suppressed lest it imply to outsiders that science cannot adequately regulate itself.
This pattern persists because for nearly a half-century, scientists have claimed—and jealously guarded—the sovereign authority to determine what research is warranted and how it should be undertaken. When the scientific community circles the wagons, it is more in response to the threat of oversight than to defend any particular body of research. As Science Editor-in-Chief Donald Kennedy wrote in a 2005 editorial decrying the prospect of a government body deciding whether he should publish data on the reconstructed 1918 Spanish influenza virus, “there is a real question of authority here,” and interference with science’s sovereign authority comes “better never than late.”
This is the scientific milieu within which the Wuhan Institute of Virology’s BSL4 laboratory was built—and within which the SARS-CoV-2 virus emerged. Was the virus that has wreaked such damage on humanity brought into being upon a laboratory bench? The simple fact of the matter is, the established norms of international science make this scenario plausible. Before the pandemic many researchers would have considered a gain-of-function experiment that created SARS-CoV-2 reasonable, even laudable; and no matter how robust the containment measures, BSL4 lab leaks are startlingly common.
Last March, with much of the world in lockdown, Nature Medicine added an editor’s note to the 2015 coronavirus paper. “There is no evidence” to support “unverified theories that the novel coronavirus causing COVID-19 was engineered.” (One of the senior authors of that paper, Shi Zhengli, now leads coronavirus research at the Wuhan Institute of Virology.) Although speculation about a laboratory origin remains just that, the patterns of normal science make clear that it cannot be ruled out. That it is even thinkable that a single mistake by an individual researcher could be the genesis of this global disaster demonstrates the urgency of a serious, public appraisal, not just of the risks of gain-of-function research, but of the notion and norms of scientific sovereignty that authorize them.
J. Benjamin Hurlbut, PhD, is Associate Professor in the School of Life Sciences at Arizona State University. He is trained in science and technology studies (STS) with a focus on the history of the modern biomedical and life sciences, and his research lies at the intersection of STS, bioethics, and political theory. He is the author of Experiments in Democracy: Human Embryo Research and the Politics of Bioethics (Columbia University Press, 2017) and co-editor of Perfecting Human Futures: Transhuman Visions and Technological Imaginations (Dordrecht: Springer, 2016), as well as numerous articles and book chapters.