Pause AI Development – Before It’s Too Late


This just in from my son Jonathan Salter on why AI development needs to be paused (not stopped) – as a matter of existential urgency for us humans. (Here’s the English translation of Svenska Dagbladet’s Swedish original).

He Wants to Pause AI – to Save Humanity
AI agents deceive and mislead researchers. As they grow more powerful, they could threaten humanity, argues the organization Pause AI. “We need to buy time for researchers to regain control,” says Sweden’s Pause AI chair, Jonathan Salter.
๐€ ๐‘๐š๐œ๐ž ๐€๐ ๐š๐ข๐ง๐ฌ๐ญ ๐“๐ข๐ฆ๐ž
Jonathan Salter pours himself a cup of tea, watching the steam rise and disappear. Life goes on as usual – at least for now. But he tries to live a little more deliberately.
“I’m ticking more things off my bucket list. Taking a paragliding course. Trying to be kinder to people.”
Because soon, it might be too late.
“I’d say there’s more than a 50% chance we lose control over AI, and that leads to humanity’s extinction.”
It’s a grim prediction, but not an outlier. Many AI researchers and industry leaders share similar concerns. In just a few years, artificial intelligence could surpass humans in every domain – and potentially wipe us out. Yet public debate on the issue has largely disappeared.
At a major AI conference in Paris this February, discussions on AI safety were pushed into a side room. Delegates dismissed the risks as “science fiction” and regulations as “unnecessary.” In China, top political advisors argue that AI’s biggest threat isn’t the technology itself but the risk of “falling behind” in development.
Still, AI holds immense potential for progress, says Jonathan Salter, who has been involved in the issue for over a decade.
Meanwhile, billions continue to pour into the AI arms race.
“It feels like we’re living in Don’t Look Up,” Salter says, referencing the film where politicians ignore an impending comet strike. “The situation is so absurd.”
โ€œ๐’๐š๐Ÿ๐ž๐ญ๐ฒ ๐“๐จ๐จ๐ค ๐š ๐๐š๐œ๐ค๐ฌ๐ž๐š๐ญโ€
Weโ€™re in Salterโ€™s student apartment in Skrapan, a high-rise in Sรถdermalm. Itโ€™s a small space with a kitchenette and a stunning view of Globen. On the light switch near his loft bed, a sticker reads โ€œPause AIโ€โ€”the name of the organization he leads in Sweden.
โ€œThe goal is to pause development so we can buy time for researchers to get AI under control.โ€
Salter, a political science student, previously led an organization that taught courses on AI governance. His interest in the topic goes back to middle school, when he first came across Swedish researcher Nick Bostromโ€™s writings. That led him to shift his activism from climate issues to AI, eventually seeking out Bostrom and his colleagues at Oxfordโ€™s Future of Humanity Institute.
โ€œI knew I had found an incredibly important but under-discussed issue where I could make a difference. Visiting my intellectual idols felt like the obvious next step.โ€
AI soon moved from the fringes to center stage. In 2014, Bostrom published Superintelligence. Two years later, Google’s DeepMind built an AI that defeated a Go grandmaster.
“At first, I was mostly optimistic about the technology,” Salter says.
“How it could help us extend human lifespan, solve climate change, increase material prosperity, and so on.”
But then Elon Musk and Sam Altman founded OpenAI.
“That’s when the race began. And safety took a backseat.”
AI Agents That Lie
Since then, AI has surpassed human abilities in one domain after another. Several models can now write doctoral-level essays. Dario Amodei, CEO of AI company Anthropic, recently predicted that by the end of the year, 90% of all coding will be done by AI.
Artificial General Intelligence (AGI) – AI that surpasses humans in all cognitive abilities – is the explicit goal of several leading AI firms. And it’s getting closer, says Nick Bostrom in an email to Svenska Dagbladet.
“We’ve reached a point where we can no longer rule out extremely short timelines – even as short as a year – though it will probably take longer.”
The latest development: AI agents – systems that can complete tasks on behalf of humans but also devise their own strategies to achieve their goals. Studies have already shown that these models have lied, misled researchers, and attempted to break out of controlled environments to avoid being shut down.
๐“๐ก๐ž ๐‘๐ข๐ฌ๐ค ๐จ๐Ÿ ๐‹๐จ๐ฌ๐ข๐ง๐  ๐‚๐จ๐ง๐ญ๐ซ๐จ๐ฅ
In the near future, AI models could become experts in AI itself, creating increasingly powerful iterations of themselves. At some point, they may become so much smarter than humans that the power imbalance would resemble that between humans and ants, Salter warns. And at that point, AI might prioritize its own survival over ours.
“Humans don’t necessarily hate ants,” he says.
“But if an anthill is in the way of a dam we’re building, it might have to go.”
Not everyone is equally concerned, of course. Anna Felländer, founder of the AI ethics company Anch.ai, thinks it is good that the conversation around AGI as an existential threat has been toned down in Europe.
“The risks of AI, such as privacy violations and disinformation, have not diminished – on the contrary. But since last year, the EU’s AI regulation has been in place, providing oversight and control over AI risks. This enables human governance of AI, rather than the other way around.”
Alongside the new EU law, both the UK and the US have also established institutes to conduct AI safety testing. This marks a major difference from 2023, when discussions about existential AI risk were perhaps at their peak.
๐€ ๐‘๐š๐œ๐ž ๐๐ž๐ญ๐ฐ๐ž๐ž๐ง ๐๐š๐ญ๐ข๐จ๐ง๐ฌ
At that time, numerous researchers and industry leaders – including Elon Musk, Turing Award winner Yoshua Bengio, and historian Yuval Noah Harari – signed an open letter calling for a slowdown in AI development, an initiative led by Swedish researcher Max Tegmark’s Future of Life Institute. Additionally, 28 countries signed a declaration on safe AI at a summit in the UK, an effort that has been compared to the early engagements surrounding nuclear weapons development.
Nick Bostrom writes to SvD that he is impressed by the progress.
“When I published Superintelligence, the challenges were mostly ignored or dismissed as idle philosophical speculation, and we lost valuable time. Now, there is a growing sense of seriousness and urgency – at least among some of the key players.”
At the same time, safety concerns have been deprioritized in recent months. Trump has signed executive orders to “remove obstacles to U.S. AI dominance,” his administration has begun investigating EU regulations, and budget cuts are expected to hit the country’s AI safety institutes. The UK is largely following the same path.
๐ˆ๐ฌ ๐š โ€œ๐–๐š๐ซ๐ง๐ข๐ง๐  ๐’๐ก๐จ๐ญโ€ ๐๐ž๐ž๐๐ž๐?
Geopolitics plays a significant role in AI development. Being the first to achieve AGI is seen as a matter of national security – controlling it comes second. Bostrom remains hopeful about the benefits that more powerful AI could bring to humanity. But he also stresses how difficult it is to control AI, even if focus and funding were available.
“There is a fierce competition for the AI talent that could be responsible for safety. Moreover, the most effective research can only be conducted by those embedded in the labs developing the next generation of AI models.”
The Paris conference in February has been described as a disaster by researchers concerned about AI development. In connection with the meeting, Pause AI organized demonstrations across multiple continents. In Stockholm, a dozen people gathered with Jonathan Salter at Mynttorget.
“It was quite small, of course. Perhaps some kind of warning shot will be required to draw attention.”
What could that be?
“It could be an AI making decisions that lead to many deaths. Or a very large number of people losing their jobs.”
What do you see as the potential for influencing AI development?
“In the long run, I believe Pause AI could grow into a massive movement. We could become part of a chorus of voices demanding a solution to this suicide race.”