The Rise of the AI Doomsday Cult: Inside Rishi Sunak's Quest to Save Humanity

This morning, a remark by Dr. Yann LeCun caught my eye. He was poking fun at the UK Prime Minister's AI safety efforts described in an article in the Telegraph. Here is Dr. LeCun's tweet:

So I decided to research this a bit, and here is what I found.

UK Prime Minister Rishi Sunak has positioned himself as the savior of humanity from the existential threat of artificial intelligence, but his embrace of AI alarmists raises serious questions. In recent months, apocalyptic rhetoric about AI has reached a fever pitch in the halls of 10 Downing Street. Sunak is assembling a team of technophobic advisors and granting them unprecedented access to shape UK policy. He plans to make AI safety his "climate change" legacy moment.

At the center of this network of catastrophists is the mysterious Frontier AI Taskforce, led by tech investor Ian Hogarth. Hogarth has assembled a cadre of researchers affiliated with the "effective altruism" movement, which views advanced AI as an extinction-level threat requiring drastic action. Three of the six organizations advising Hogarth's task force received grants from the now-bankrupt FTX cryptocurrency exchange, founded by alleged fraudster Sam Bankman-Fried. Effective altruism has drawn criticism for its close ties to Bankman-Fried, but Downing Street seems unconcerned by these alarming connections.

Hogarth recently told Parliament that the task force deals with "fundamental national security matters." He warns that advanced AI could empower bad actors to orchestrate cyberattacks, manipulate biology, and pursue other nefarious ends. While these risks shouldn't be dismissed, such hyperbolic rhetoric hardly sounds like level-headed policymaking.

Sunak's AI summit at Bletchley Park this November is set to focus almost exclusively on doomsday scenarios and AI risk mitigation. Matt Clifford, the entrepreneur who chairs the government's Aria research agency, leads preparations for the summit alongside senior diplomat Jonathan Black. This "AI sherpa" duo recently traveled to Beijing to drum up support for Sunak's vision of aggressive global AI regulation.

Sunak has held closed-door meetings with leaders of prominent AI labs, including Demis Hassabis of Google DeepMind, Sam Altman of OpenAI, and Dario Amodei of Anthropic. These companies stand to benefit tremendously if the UK imposes stringent restrictions on who can build advanced AI models. While Sunak portrays this as reining in Big Tech, it may instead entrench their dominance.

Dr. LeCun's tweet diagnosed Sunak with an "Existential Fatalistic Risk from AI Delusion." His point is that Sunak's technophobic advisors are stoking irrational fears that could inhibit innovation and economic progress. The PM's climate change ambitions did not require dismantling the auto industry, so why take a sledgehammer to AI research in the name of safety?

Proponents counter that Sunak is merely trying to get ahead of the curve on transformative technology. They believe the UK can lead the world in developing a thoughtful governance framework before mass adoption. However, this preventive approach risks severely restricting AI applications that could benefit humanity. For example, advanced natural language AI could expand educational access worldwide. Algorithms can personalize instruction and provide customized feedback beyond the reach of human teachers. However, overzealous regulation could hamper these technologies before they realize their potential.

Powerful AI also holds tremendous promise for scientific research. Systems like DeepMind's AlphaFold have significantly accelerated protein structure prediction, which could enable more rapid drug discovery for diseases afflicting millions globally. But if research is conducted under a cloud of existential dread, progress may be much slower. And contrary to the alarmist view, advanced AI could help address global catastrophic risks. AI safety techniques like robust alignment aim to ensure that advanced systems behave as intended. Such research is vital, as AI will likely be essential in tackling complex challenges like climate change.

Rather than entrusting global AI policy to a small group of catastrophists, UK leadership should incorporate diverse perspectives. It must balance the risks against the enormous opportunities to improve human life, and it should evaluate scenarios rigorously rather than reacting reflexively to existential angst.

Sunak faces growing calls to broaden his circle of AI advisors. Over 100 experts wrote an open letter urging him to consult the UK AI Council before finalizing any national policies. This diverse group would provide a valuable counterbalance to the doom-and-gloom task force. The British public deserves policies grounded in evidence, not quasi-religious prophecies of imminent societal collapse. AI has challenges to overcome but also vast potential. With wise governance, the UK can steer a prudent course that allows humanity to thrive with increasingly intelligent machines.

Rather than spreading hysteria, Sunak should provide the steady leadership required to meet this historic opportunity. The overriding goal should be maximizing prosperity for current and future generations. We need clear-sightedness, not eschatology, from 10 Downing Street.

Sunak faces a choice between pragmatic statesmanship and becoming a high priest of the AI doomsday cult. Let us hope wisdom prevails over catastrophic thinking in shaping one of the most consequential technologies ever created. The future depends on it.