Playing with Viral Fire: The Risks of Gain-of-Function Research
By Brian Simpson
Recent disclosures from internal U.S. government documents have revived an uncomfortable question about modern virology: why are scientists deliberately attempting to make animal viruses more capable of infecting humans?
Documents released through freedom-of-information requests suggest that researchers supported by U.S. government funding proposed experiments on bat coronaviruses aimed at testing how these viruses might adapt to human biology. The proposals involved modifying viral spike proteins to determine whether they could bind more tightly to human cellular receptors. Researchers also explored altering viral cleavage sites — short protein sequences that, once cut by host enzymes, prime the virus to enter cells more efficiently.
In plainer terms, the research sought to explore how animal viruses could be made more infectious to human beings.
This category of work is generally known as gain-of-function research. The basic idea is that by deliberately enhancing certain properties of pathogens — such as transmissibility or host range — scientists may learn how future pandemics could arise in nature. By studying more dangerous versions of viruses in the laboratory, researchers argue that they can anticipate emerging threats and develop vaccines or countermeasures in advance.
On paper this sounds like preventative science. In practice it begins to resemble biological roulette.
The central problem lies in a quiet assumption underlying much of this research: the belief that genetic manipulation produces predictable outcomes. In laboratory descriptions, viral genomes often appear as tidy strings of code in which particular traits can be switched on or off through carefully designed alterations. The metaphor resembles engineering: genes are treated like components in a machine that can be adjusted individually while the rest of the system remains stable.
But biological systems do not behave like machines. They behave more like tangled ecosystems of interacting processes. A single genetic modification may influence multiple biological pathways simultaneously, and the consequences are often impossible to foresee in advance. This is especially true of viruses, which are among the most rapidly evolving entities in nature.
Coronaviruses in particular mutate readily and recombine with related strains. Small alterations in their genomes can produce effects that ripple through viral replication, host interaction, immune evasion, and transmissibility. Once researchers begin modifying viral genomes in pursuit of particular traits, such as stronger binding to human receptors, they are no longer adjusting a simple dial. They are entering a complex evolutionary landscape where unintended outcomes are almost inevitable.
Nor does the unpredictability end once the experiment is completed. Viruses do not remain genetically static simply because they were engineered in a laboratory. As they replicate, they generate populations of slightly different variants. What researchers call "a virus" is usually a swarm of evolving genetic forms rather than a single stable entity.
This means that an engineered virus may quickly produce variants with properties that were never part of the original design. A modification intended merely to study receptor binding might interact with other mutations to produce changes in transmissibility or virulence that no one anticipated.
In other words, once gain-of-function work begins, the experiment does not remain neatly confined to the original hypothesis.
Even if such work were valuable in theory, performing it safely depends heavily on the reliability of laboratory containment. Advocates often emphasise strict biosafety procedures designed to prevent the accidental release of dangerous pathogens. Yet history suggests that laboratories are not immune to human error.
Pathogens have escaped research facilities repeatedly over the past several decades. Incidents involving SARS, influenza, and even smallpox have been documented in countries with sophisticated biosafety systems. Most of these events were contained before wider outbreaks occurred, but they reveal a fundamental truth: human institutions are fallible.
Laboratories depend on people following procedures, maintaining equipment, and exercising constant vigilance. Over time, mistakes accumulate: someone mislabels a sample, a safety protocol is skipped, a mechanical failure goes unnoticed. In ordinary laboratory work such errors are inconvenient. When the organism involved is a virus deliberately engineered to infect human cells more efficiently, the consequences could be far more serious.
The result is a stark imbalance between potential benefits and potential costs. Supporters of gain-of-function research argue that studying enhanced viruses allows scientists to anticipate pandemic threats before they appear in nature. Yet the predictive value of such experiments remains uncertain. Viral emergence in the real world is shaped by ecological interactions among wildlife, livestock, and human populations — factors that laboratory models capture only imperfectly.
By contrast, the risks are tangible. A single laboratory escape involving a modified pathogen could trigger an outbreak far beyond the control of the original researchers. The social and economic devastation produced by a global pandemic has already been demonstrated in living memory. Even a low probability of such an event becomes alarming when the consequences could affect billions of people.
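A back-of-the-envelope illustration, using purely hypothetical figures, shows why small risks compound. Suppose a single laboratory conducting such work carried a 0.2 per cent annual chance of a consequential escape. Over thirty years, the probability of at least one escape would be 1 − (0.998)^30, or roughly 6 per cent — and that figure grows with every additional laboratory doing similar work. Modest annual risks, multiplied across facilities and decades, cease to be modest.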
Behind the scientific debate lies a broader cultural issue within modern research institutions. Science today operates within competitive structures that reward novelty, funding success, and high-impact publications. Ambitious experiments attract attention and grants. Projects that push technical boundaries are often viewed as signs of innovation.
These incentives do not necessarily align with caution.
Most researchers involved in controversial experiments are undoubtedly sincere in their belief that the work serves the public good. Yet the institutional environment encourages risk-taking while distributing the potential costs across society as a whole. When the benefits accrue to research careers and the risks are borne globally, the incentives become skewed.
Technological capability also has a tendency to expand once it exists. Techniques that were once confined to specialised government laboratories are increasingly accessible to universities and biotechnology companies. Advances in genetic engineering are lowering barriers to manipulating viral genomes, making such research easier to perform and harder to regulate.
The long-term trajectory is clear: more laboratories will acquire the capacity to engineer pathogens with novel properties.
At that point the question is no longer simply whether a particular experiment should be conducted. It becomes whether humanity has created a technological domain whose risks may eventually exceed its capacity to manage them.
Science has repeatedly expanded the boundaries of human knowledge and power. Yet history also shows that some technologies carry dangers disproportionate to their benefits. Nuclear weapons are the classic example: a breakthrough that created permanent existential risk alongside strategic advantage.
Gain-of-function research may represent a biological analogue. By attempting to anticipate dangerous pathogens, we risk manufacturing them ourselves. The hope is that such knowledge will make humanity safer in the long run. The fear is that a single accident could demonstrate the opposite.
The paradox is difficult to ignore. In trying to prevent the next pandemic, scientists may be conducting experiments whose failure could produce one.
And when the stakes involve the health and stability of the entire planet, even a small risk begins to look uncomfortably large.
https://childrenshealthdefense.org/defender/how-can-we-infect-humans-bat-coronavirus-scientists-asking-question-long-before-covid-rtk/
