Press "Enter" to skip to content

Op-Ed: Do ‘Lockdowns’ Work?

In response to the COVID-19 pandemic, there have been thousands of specific policies instituted around the globe. At times, restrictions have been big and bold. Recently, the city of Perth, Australia, was placed in "lockdown" after a single case of SARS-CoV-2. Other restrictions have been narrowly focused: removing swings from playgrounds, capping the number of dining guests, or limiting meals to 90 minutes. One Toronto suburb closed outdoor ice-skating rinks, toboggan hills, and dog parks. The sheer variety of restrictions meant to curb the spread of SARS-CoV-2 raises important questions: which ones work? And how big are their effects?

I suspect that for many restrictions, perhaps even most, we will never know. We will never know, for instance, whether removing the rim from a basketball hoop or closing a toboggan hill slowed SARS-CoV-2 where those strategies were deployed. For larger interventions (mandatory business closures and stay-at-home orders, colloquially called "lockdowns"), we may someday have a scientific consensus as to whether and to what degree they change viral spread, but I believe that day is years away. Here, I wish to highlight the challenges of understanding these interventions.

Challenges with identifying the effect of lockdowns

Already some studies have yielded mixed results. One analysis has found that mandatory business closure and stay-at-home orders were not associated with a reduction in infections. Another analysis did find a reduction. Both have limitations. Allow me to highlight some challenges that most research on this topic will face.

First, a lockdown is not expected to yield immediate results. One has to account for the typical lag before an effect can realistically show up, but this introduces analytic flexibility. Should we look for effects 7 days later, or 5, or 15? If we look too soon, we might get the wrong idea: an intervention that slows viral spread may appear to accelerate it, because it was deployed on the upslope of the curve (reverse causation). Alternatively, if we lag the analysis too much, we might instead capture the impact of other interventions or the natural shape of the pandemic curve.

Second, places often institute multiple restrictions concurrently, alongside powerful media messages to the public. In other words, was it the business closure that helped, the fact that the nightly news scared people, or another restriction instituted at (or close to) the same time as the business closure that changed outcomes?

Third, there are many ways to define "lockdown"; many regions, municipalities, or countries to include or exclude; many ways to model the data; and many investigators who will probe the dataset. Taken together, the range of "answers" is certain to be wide. A team from Edmonton, Alberta, defining lockdown as any business closure lasting more than 3 days and looking at county-level data for 12 countries, might get a different "answer" than a group from Boston defining lockdown as any nonessential business closure and looking at national data from 82 countries. Already we see a hint of the variety of answers that may be generated. In the long term, I am hopeful that somewhere in this sea of data there may be a natural experiment that analysts can take advantage of to provide some clarity.

‘Lockdown’ is not like an aspirin

Fourth, restrictions might be even more complicated. A lockdown is not like an aspirin. It may not exert the same effect every time it is deployed. Lockdowns might have different effects based on the case rate. Lockdowns might help when cases are just a handful, as in Perth, in an effort to drive them to zero. Or lockdowns might only work when case rates are modest (1 case per 10,000 residents). Alternatively, lockdowns might work when case rates are brisk (1 per 1,000 residents). Perhaps lockdowns work in none of these cases, or only in the first and second scenarios. In other words, the effect of lockdowns may depend on the rate or absolute number of cases, or on many other factors (e.g., population density).

Fifth, the same lockdown in the same location with the same enforcement may have diminishing effects. If people are feeling a sense of purpose and camaraderie, there may be a positive effect, but if those same people feel distrustful or fatigued, there may be a negative one. Lockdowns depend on the buy-in of the populace. The desire to isolate yourself wanes over time. What worked in April might not work in November.

Sixth, lockdowns might have different effects based on the culture of the region, the practices of bordering nations, household density, or the political climate. What works in Norway might not work in the U.S. What works in New Zealand might not work in Canada.

Seventh, lockdowns depend on media coverage. Previously I alluded to the fact that it will be challenging to separate the effect of lockdown policies from the media coverage of COVID, which naturally encourages people to hunker down. Here, consider that lockdowns might work better when media coverage is lax and less well when it is frenetic, because in the latter case few additional people change their behavior due to the mandate.

Eighth, lockdowns depend on the specific behaviors that drive spread. In a region where daily interactions are driving spread, lockdowns may work. In a region where all spread occurs in a meat processing plant, the same lockdown may have no effect if the plant remains in operation.

These considerations are just a few of the methodologic challenges of figuring out whether lockdowns work. And they do not even touch the harder question: what are the complete effects of lockdowns? What is the effect of restrictions on educational, mental health, cardiovascular, and other societal outcomes? And when do these effects occur?

Finally, I must mention that some may frame this entire discussion differently. They may start with the premise that the goal of policies is to separate people to prevent spread, and consider lockdowns alongside all other measures. I support research efforts to examine the question in this manner as well.

I wish this discussion captured all the complexity of lockdowns, but there is one more factor that must be considered. In cancer medicine, methotrexate is an effective drug, but you often have to give leucovorin to overcome its devastating side effects. Similarly, a lockdown might have one set of effects in a nation with a strong social safety net or strong unemployment insurance, and a different set of effects in a nation with a tattered safety net and no unemployment insurance. Resources are the antidote to lockdown, and resources are not evenly distributed or deployed. All studies of lockdowns should account for varying resources.

This discussion has been about just one class of restriction, but what about closing playgrounds, removing swings, closing ice rinks, or the many other specific policies implemented? For some of these interventions, I believe the plausibility is low. It is unlikely, on the face of it, that closing playgrounds will substantively curb viral spread, and such closures have met fierce opposition. A future society may look back critically on many of these policies. For many others, we may never know whether they helped or hurt.

Pundits need humility

When I think about the past year, and the thousands of interventions we deployed to combat the coronavirus, I am saddened to know we will end with very little idea of which specific interventions helped, which hurt, and which were neutral. Imagine you ran a multi-trillion-dollar study and you didn’t get any answer at all.

Going forward, we must be more thoughtful in applying restrictions. Municipalities should avoid throwing the kitchen sink at the problem all at once. Implementing policies in sequence, with time between them, can help disentangle which ones help and which do not.

Coordinated action can also help. If 20 municipalities work together, with 10 trying one set of interventions and the other 10 trying a different set, we may start to learn which work and to what degree. Repeating this kind of experiment is the path to knowledge.

Next, with restrictions must come resources. Each policy meant to curb the spread of the virus may have harmful side effects. These harms must be measured and documented. Right now, we know little about how restrictions affect people, particularly poor and vulnerable ones. Resources can be applied to mitigate these harms. Folks concerned with the downsides of lockdowns should not be demonized, ostracized, or marginalized. We must engage with them.

Pundits must have humility. Each day on Twitter, I see doctors, epidemiologists, or policy experts definitively proclaim what we ought to do. These comments often prompt fierce backlash from folks equally confident that those interventions hurt. The truth is that there is massive uncertainty, and being honest about that might make for more productive conversations and compromise.

Lastly, policymakers must be upfront with the public as to the goals of intervention and under what circumstances the restrictions can be relaxed. Specific benchmarks for when and how policies are deployed and when they can be eased must be posted for public view in advance of deployment.

Finally, we must not confuse matters of science with matters of morality. Most people want to minimize suffering and death, and disagreements are about how to do so, not the goal. Whether and to what degree restrictions work and under what circumstances is a set of scientific questions. Let me be the first to admit that I simply do not know the answers. Moreover, I believe the real answers will take years to emerge. As is often the case in life, distance brings wisdom.

Vinay Prasad, MD, MPH, is a hematologist-oncologist and associate professor of medicine at the University of California San Francisco, and author of Malignant: How Bad Policy and Bad Evidence Harm People With Cancer. The views expressed above are his own and not his institution’s.

Source: MedicalNewsToday.com