This just in from my son Jonathan Salter on why AI development needs to be paused (not stopped) – as a matter of existential urgency for us humans. (Here’s the English translation of Svenska Dagbladet’s Swedish original).
He Wants to Pause AI – to Save Humanity
AI agents deceive and mislead researchers. As they grow more powerful, they could threaten humanity, argues the organization Pause AI. “We need to buy time for researchers to regain control,” says Sweden’s Pause AI chair, Jonathan Salter.
A Race Against Time
Jonathan Salter pours himself a cup of tea, watching the steam rise and disappear. Life goes on as usual – at least for now. But he tries to live a little more deliberately.
“I’m ticking more things off my bucket list. Taking a paragliding course. Trying to be kinder to people.”
Because soon, it might be too late.
“I’d say there’s more than a 50% chance we lose control over AI, and that leads to humanity’s extinction.”
It’s a grim prediction, but not an outlier. Many AI researchers and industry leaders share similar concerns. In just a few years, artificial intelligence could surpass humans in every domain – and potentially wipe us out. Yet public debate on the issue has largely disappeared.
At a major AI conference in Paris this February, discussions on AI safety were pushed into a side room. Delegates dismissed the risks as “science fiction” and regulations as “unnecessary.” In China, top political advisors argue that AI’s biggest threat isn’t the technology itself but the risk of “falling behind” in development.
Still, AI holds immense potential for progress, says Jonathan Salter, who has been involved in the issue for over a decade.
Meanwhile, billions continue to pour into the AI arms race.
“It feels like we’re living in Don’t Look Up,” Salter says, referencing the film where politicians ignore an impending comet strike. “The situation is so absurd.”
“Safety Took a Backseat”
We’re in Salter’s student apartment in Skrapan, a high-rise in Södermalm. It’s a small space with a kitchenette and a stunning view of Globen. On the light switch near his loft bed, a sticker reads “Pause AI” – the name of the organization he leads in Sweden.
“The goal is to pause development so we can buy time for researchers to get AI under control.”
Salter, a political science student, previously led an organization that taught courses on AI governance. His interest in the topic goes back to middle school, when he first came across Swedish researcher Nick Bostrom’s writings. That led him to shift his activism from climate issues to AI, eventually seeking out Bostrom and his colleagues at Oxford’s Future of Humanity Institute.
“I knew I had found an incredibly important but under-discussed issue where I could make a difference. Visiting my intellectual idols felt like the obvious next step.”
AI soon moved from the fringes to center stage. In 2014, Bostrom published Superintelligence. Two years later, Google’s DeepMind built an AI that defeated a Go grandmaster.
“At first, I was mostly optimistic about the technology,” Salter says.
“How it could help us extend human lifespan, solve climate change, increase material prosperity, and so on.”
But then Elon Musk and Sam Altman founded OpenAI.
“That’s when the race began. And safety took a backseat.”
AI Agents That Lie
Since then, AI has surpassed human abilities in one domain after another. Several models can now write doctoral-level essays. Dario Amodei, CEO of AI company Anthropic, recently predicted that by the end of the year, 90% of all coding will be done by AI.
Artificial General Intelligence (AGI) – AI that surpasses humans in all cognitive abilities – is the explicit goal of several leading AI firms. And it’s getting closer, says Nick Bostrom in an email to Svenska Dagbladet.
“We’ve reached a point where we can no longer rule out extremely short timelines – even as short as a year – though it will probably take longer.”
The latest development: AI agents – systems that can complete tasks on behalf of humans but also devise their own strategies to achieve their goals. Studies have already shown that these models have lied, misled researchers, and attempted to break out of controlled environments to avoid being shut down.
The Risk of Losing Control
In the near future, AI models could become experts in AI itself, creating increasingly powerful iterations of themselves. At some point, they may become so much smarter than humans that the power imbalance would resemble that between humans and ants, Salter warns. And at that point, AI might prioritize its own survival over ours.
“Humans don’t necessarily hate ants,” he says.
“But if an anthill is in the way of a dam we’re building, it might have to go.”
Not everyone is equally concerned, of course. Anna Felländer, founder of the AI ethics company Anch.ai, thinks it is good that the conversation around AGI as an existential threat has been toned down in Europe.
“The risks of AI, such as privacy violations and disinformation, have not diminished – on the contrary. But since last year, the EU’s AI regulation has been in place, providing oversight and control over AI risks. This enables human governance of AI, rather than the other way around.”
Alongside the new EU law, both the UK and the US have also established institutes to conduct AI safety testing. This marks a major difference from 2023, when discussions about existential AI risk were perhaps at their peak.
A Race Between Nations
At that time, numerous researchers and industry leaders – including Elon Musk, Turing Award winner Yoshua Bengio, and historian Yuval Noah Harari – signed an open letter calling for a slowdown in AI development, an initiative led by Swedish researcher Max Tegmark’s Future of Life Institute. Additionally, 28 countries signed a declaration on safe AI at a summit in the UK, an effort that has been compared to the early diplomacy surrounding nuclear weapons development.
Nick Bostrom writes to SvD that he is impressed by the progress.
“When I published Superintelligence, the challenges were mostly ignored or dismissed as idle philosophical speculation, and we lost valuable time. Now, there is a growing sense of seriousness and urgency – at least among some of the key players.”
At the same time, safety concerns have been deprioritized in recent months. Trump has signed executive orders to “remove obstacles to U.S. AI dominance,” his administration has begun investigating EU regulations, and budget cuts are expected to hit the country’s AI safety institutes. The UK is largely following the same path.
Is a “Warning Shot” Needed?
Geopolitics plays a significant role in AI development. Being the first to achieve AGI is seen as a matter of national security – controlling it comes second. Bostrom remains hopeful about the benefits that more powerful AI could bring to humanity. But he also stresses how difficult it is to control AI, even if focus and funding were available.
“There is fierce competition for the AI talent that could be responsible for safety. Moreover, the most effective research can only be conducted by those embedded in the labs developing the next generation of AI models.”
The Paris conference in February has been described as a disaster by researchers concerned about AI development. In connection with the meeting, Pause AI organized demonstrations across multiple continents. In Stockholm, a dozen people gathered with Jonathan Salter at Mynttorget.
“It was quite small, of course. Perhaps some kind of warning shot will be required to draw attention.”
What could that be?
“It could be an AI making decisions that lead to many deaths. Or a very large number of people losing their jobs.”
What do you see as the potential for influencing AI development?
“In the long run, I believe Pause AI could grow into a massive movement. We could become part of a chorus of voices demanding a solution to this suicide race.”