Why Didn't Artificial Intelligence Save Us From Covid-19?

The key to good AI is solid data, and that’s been tough to come by in a global health crisis.
A doctor in a hazmat suit standing at a sink. Photograph: Chris McGrath/Getty Images

In late January, more than a week before Covid-19 had been given that name, hospitals in Wuhan, China, began testing a new method to screen for the disease, using artificial intelligence. The plan involved chest CTs—three-dimensional scans of lungs displayed in finely detailed slices. By studying thousands of such images, an algorithm would learn to decipher whether a given patient's pneumonia appeared to stem from Covid-19 or something more routine, like influenza.
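
At its core, that screening approach is supervised image classification: feed a network labeled scans, and it learns to separate one pneumonia from another. As a rough illustration only, not the Wuhan hospitals' actual system, here is a toy version in Python with PyTorch, with invented layer sizes and random stand-in data:

```python
# Illustrative sketch only: a tiny convolutional classifier that labels a
# CT slice as Covid-19 pneumonia or other pneumonia. Real systems were far
# larger and trained on thousands of labeled scans.
import torch
import torch.nn as nn

class SliceClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 1-channel CT slice
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                      # global average pool
        )
        self.classify = nn.Linear(32, 2)  # 0 = other pneumonia, 1 = Covid-19

    def forward(self, x):
        return self.classify(self.features(x).flatten(1))

model = SliceClassifier()
slices = torch.randn(8, 1, 256, 256)  # a batch of 8 fake CT slices
labels = torch.randint(0, 2, (8,))    # fake diagnoses
loss = nn.CrossEntropyLoss()(model(slices), labels)
loss.backward()                       # gradients for one training step
```

The hard part is everything around those few dozen lines: amassing thousands of reliably labeled scans, then validating the model across hospitals and patient populations, precisely the work a fast-moving pandemic leaves no time for.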

In the US, as the virus spread in February, the idea appeared to hold promise: With conventional tests in short supply, here was a way to get more people screened, fast. Health professionals, however, weren't so sure. Although various diagnostic algorithms have won approval from the US Food and Drug Administration—for wrist fractures, eye diseases, breast cancer—they generally spend months or years in development. They're deployed in different hospitals filled with different kinds of patients, interrogated for flaws and biases, pruned and tested again and again.

Was there enough data on the new virus to truly discern one pneumonia from another? What about mild cases, where the damage may be less clear? The pandemic wasn't waiting for answers, but medicine would have to.

In late March, the United Nations and the World Health Organization issued a report examining the lung CT tool and a range of other AI applications in the fight against Covid-19. The politely bureaucratic assessment was that few projects had achieved “operational maturity.”

The limitations were older than the crisis, but aggravated by it. Reliable AI depends on our human ability to collect data and make sense of it. The pandemic has been a case study in why that's hard to do mid-crisis. Consider the shifting advice on mask wearing and on taking ibuprofen, the doctors wrestling with who should get a ventilator and when. Our daily movements are dictated by uncertain projections of who will get infected or die, and how many more will die if we fail to self-isolate.

As we sort out that evidence, AI lags a step behind us. Yet we still imagine that it possesses more foresight than we do.

Take drug development. One of the flashiest AI experiments comes from Google-affiliated DeepMind. The company's AlphaFold system is a champion at the art of protein modeling—predicting the shape of tiny structures that make up the virus. In the lab, divining those structures can be a months-long process; DeepMind, when it released schematics for six viral proteins in March, had done it in days. The models were approximations, the team cautioned, churned out by an experimental system. But the news left an impression: AI had joined the vaccine race.
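
For a sense of what that modeling involves computationally: the first AlphaFold worked in part by predicting the distances between every pair of a protein's amino-acid residues, a map that constrains the molecule's folded shape. The toy sketch below, with invented sizes and untrained weights, shows only the shape of that problem, not DeepMind's actual method:

```python
# Illustrative sketch only: a network that maps a protein sequence to a
# matrix of predicted pairwise residue distances, the rough idea behind
# distance-based structure prediction. All sizes here are toy values.
import torch
import torch.nn as nn

AMINO_ACIDS = 20  # the standard amino-acid alphabet
EMBED = 32

class DistancePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(AMINO_ACIDS, EMBED)
        self.pair_mlp = nn.Sequential(
            nn.Linear(2 * EMBED, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
            nn.Softplus(),  # distances must be nonnegative
        )

    def forward(self, seq):                      # seq: (L,) residue ids
        e = self.embed(seq)                      # (L, EMBED)
        L = e.shape[0]
        pairs = torch.cat(
            [e.unsqueeze(1).expand(L, L, -1),    # features of residue i
             e.unsqueeze(0).expand(L, L, -1)],   # features of residue j
            dim=-1,
        )
        return self.pair_mlp(pairs).squeeze(-1)  # (L, L) distance matrix

seq = torch.randint(0, AMINO_ACIDS, (50,))  # a fake 50-residue protein
print(DistancePredictor()(seq).shape)       # torch.Size([50, 50])
```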

In the vaccine community, however, the effort elicited a shrug.

“I can't see much of a role for AI right now,” says Julia Schaletzky, a veteran drug discovery researcher and head of UC Berkeley's Center for Emerging and Neglected Diseases. Plenty of well-defined protein targets have been confirmed in labs without the help of AI. It would be risky to spend precious time and grants starting from scratch, using the products of an experimental system. Technological progress is good, Schaletzky says, but it's often pushed at the expense of building on what's known and promising.

She says there's potential in using AI to help find treatments. AI algorithms can complement other data-mining techniques to help us sift through reams of information we already have—to spot encouraging threads of research, for example, or older treatments that hold promise. One drug identified this way, baricitinib, is now going to clinical trials. Another hope is that AI could yield insights into how Covid-19 attacks the body. An algorithm could mine lots of patient records and determine who is more at risk of dying and who is more likely to survive, turning anecdotes whispered between doctors into treatment plans.
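
As a purely hypothetical illustration of that kind of record mining: once patient data is structured, fitting a risk model takes only a few lines of Python with scikit-learn. Every feature, record, and outcome below is invented:

```python
# Illustrative sketch only: fitting a simple mortality-risk model to
# structured patient records. The data is random noise; a real study would
# need thousands of genuine records and careful clinical validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.normal(60, 15, n),   # age in years
    rng.integers(1, 14, n),  # days since symptom onset
    rng.normal(94, 4, n),    # blood-oxygen saturation (percent)
])
y = rng.integers(0, 2, n)    # outcome: 0 = survived, 1 = died (random here)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Estimated risk for one new patient: age 70, day 7, oxygen saturation 91
print(model.predict_proba([[70, 7, 91]])[0, 1])
```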

But again, it's all a matter of data: what we've already gathered, and whether we've organized it in a way that's useful to machines. Our health care system doesn't easily give up the information needed to train such systems; privacy regulations and balkanized data silos will stop you even before the antiquated, error-riddled databases do.

It's possible this crisis will change that. Maybe it will push us to rethink how data is stored and shared. Maybe we'll keep studying this virus even after the chaos dissipates and the attention wanes, giving us solid data—and better AI—when the next pandemic arrives. For now, though, we can't be surprised that AI hasn't saved us from this one.


This article appears in the June issue.


