‘An AI Fukushima is inevitable’: scientists discuss technology’s immense potential and dangers | Science
When better to hold a conference on artificial intelligence and the countless ways it is advancing science than in those brief days between the first Nobel prizes being awarded in the field and the winners heading to Stockholm for the lavish white tie ceremony?
It was fortuitous timing for Google DeepMind and the Royal Society, which this week convened the AI for Science Forum in London. Last month, Google DeepMind bagged the Nobel prize in chemistry a day after AI took the physics prize. The mood was celebratory.
Scientists have worked with AI for years, but the latest generation of algorithms has brought us to the brink of transformation, Demis Hassabis, the chief executive officer of Google DeepMind, told the meeting. “If we get it right, it should be an incredible new era of discovery and a new golden age, maybe even a kind of new renaissance,” he said.
Plenty could dash the dream. AI is “not a magic bullet,” Hassabis said. To make a breakthrough, researchers must identify the right problems, collect the right data, build the right algorithms and apply them the right way.
Then there are the pitfalls. What if AI provokes a backlash, worsens inequality, creates a financial crisis, triggers a catastrophic data breach, pushes ecosystems to the brink through its extraordinary energy demands? What if it gets into the wrong hands and unleashes AI-designed bioweapons?
Siddhartha Mukherjee, a cancer researcher at Columbia University in New York and author of the Pulitzer prize-winning The Emperor of All Maladies, suspects these will be hard to navigate. “I think it’s almost inevitable that, at least in my lifetime, there will be some version of an AI Fukushima,” he said, referring to the nuclear accident caused by the 2011 Japanese tsunami.
Many AI researchers are optimistic. In Nairobi, nurses are trialling AI-assisted ultrasound scans for pregnant women, bypassing the need for years of training. Materiom, a London company, uses AI to formulate 100% bio-based materials, sidestepping petrochemicals. AI has transformed medical imaging, climate models and weather forecasts and is learning how to contain plasmas for nuclear fusion. A virtual cell is on the horizon, a unit of life in silicon.
Hassabis and his colleague John Jumper won their Nobel for AlphaFold, a programme that predicts protein structures and interactions. It is used across biomedical science, in particular for drug design. Now, researchers at Isomorphic, a Google DeepMind spinout, are beefing up the algorithm and combining it with others to accelerate drug development. “We hope that one day, in the near future actually, we will reduce the time from years, maybe even decades to design a drug, down to months, or perhaps even weeks, and that would revolutionise the drug discovery process,” Hassabis said.
The Swiss pharmaceutical company Novartis has gone further. Beyond designing new drugs, AI speeds recruitment to clinical trials, reducing a potentially years-long process to months. Fiona Marshall, the company’s president of biomedical research, said another tool helps with regulators’ queries. “You can find out – have those questions been asked before – and then predict what’s the best answer to give that’s likely to give you a positive approval for your drug,” she said.
Jennifer Doudna, who shared a Nobel prize for the gene editing tool Crispr, said AI would play “a big role” in making therapies more affordable. Regulators approved the first Crispr treatment last year, but at $2m (£1.6m) for each patient, scores of patients will not benefit. Doudna, who founded the Innovative Genomics Institute in Berkeley, California, said further AI-guided work at her lab aims to create a methane-free cow by editing the microbes in the animal’s gut.
A huge challenge for researchers is the black box problem: many AIs can reach decisions but not explain them, making the systems hard to trust. But that may be about to change, Hassabis said, through the equivalent of brain scans for AIs. “I think in the next five years we’ll be out of this era that we’re currently in of black boxes.”
The climate crisis could prove AI’s greatest challenge. While Google publicises AI-driven advances in flooding, wildfire and heatwave forecasts, like many big tech companies, it uses more energy than many countries. Today’s large models are a major culprit. It can take 10 gigawatt-hours of energy to train a single large language model like OpenAI’s ChatGPT, enough to supply 1,000 US homes for a year.
“My view is that the benefits of those systems will far outweigh the energy usage,” Hassabis told the meeting, citing hopes that AI will help to create new batteries, room temperature superconductors and possibly even nuclear fusion. “I think one of these things is likely to pay off in the next decade, and that will completely, materially change the climate situation.”
He sees positives in Google’s energy demand, too. The company is committed to green energy, he said, so the demand should drive investment into renewables and drive down costs.
Not everyone was convinced. Asmeret Asefaw Berhe, a former director of the US Department of Energy’s Office of Science, said advances in AI could drive suffering, adding that nothing raised the concern more than energy demand. She called for ambitious sustainability goals. “AI companies that are involved in this space are investing a lot in renewable energy and hopefully that will spur a faster transition away from fossil fuels. But is that enough?” she asked. “It actually has to lead to transformative change.”