‘Most dangerous technology ever’: Protesters urge AI pause

Google this week quietly revised its artificial intelligence ethics, no longer ruling out military or surveillance use.

By David Swan

February 7, 2025 — 2.00pm

Protesters will take to the streets this weekend to campaign against harmful uses of AI and urge the Australian government to help freeze advancements in what is being described as potentially “the most dangerous technology ever created”.

A global protest movement dubbed PauseAI is descending on cities including Melbourne ahead of next week’s Artificial Intelligence Action Summit, to be held in Paris. The protesters say the summit lacks any focus on AI safety.

China and the US are each racing ahead with AI development. Millions of Australians are using Chinese AI app DeepSeek, and the US has announced a $US500 billion joint venture, dubbed Stargate, to accelerate its own AI efforts.

DeepSeek, a Chinese-owned open-source artificial intelligence platform, was developed with far less investment, time, and infrastructure than its US big-tech competitors. Credit: Bloomberg

PauseAI founder Joep Meindertsma, who started the political movement in 2023, said companies like OpenAI and DeepSeek were not taking enough precautions to make sure their AI models were safe to be released into the world. The protesters are demanding the creation of an international AI Pause treaty, which would halt the training of AI systems more powerful than GPT-4 until they can be built safely and democratically.

“It’s not a secret any more that AI could be the most dangerous technology ever created,” Meindertsma told this masthead.

“We need our leaders to act on the small chance that things can go very wrong, very soon. Our psychology makes it difficult to believe and act on these dangers. Invisible dangers that are as abstract as this one don’t seem to alarm people as much as they should.”

The protest is a reflection of growing concerns that AI development is progressing faster than society can keep up, given experts still don’t understand the inner workings of AI systems like ChatGPT. Google this week also quietly revised its AI ethics, no longer ruling out military or surveillance use.

Meindertsma said the three most cited AI researchers, Geoffrey Hinton, Yoshua Bengio and Ilya Sutskever, had each now publicly said the technology could lead to human extinction.

Minister for Industry and Science Ed Husic in December announced a National AI Capability Plan, which is due by the end of 2025. Credit: Alex Ellinghausen

Rather than relying on individual nations such as Australia to put safety measures in place, Meindertsma said, action at the global summit was essential so that governments could make collective decisions and stop trying to race ahead of one another.

“If the organisers choose to stick their head in the sand and ignore the need for safety, we won’t get any meaningful international regulations,” he said.

Next week’s summit follows similar summits held at Bletchley Park, England, and in Seoul, South Korea, which delivered the Bletchley Declaration and the Seoul Declaration respectively. Federal Industry and Science Minister Ed Husic will not be travelling to Paris for the summit, though senior government officials will be in attendance to represent Australia.

Husic in December announced a National AI Capability Plan, which is due by the end of 2025. Last year, he also released proposed mandatory guardrails to shape the use of AI in high-risk settings, as well as the first version of a voluntary AI safety standard.

Saturday’s protest is being held at 2pm at Melbourne’s State Library. Organisers said a dozen supporters had registered, but they were hoping for more on the day.

PauseAI supporter Michael Huang said: “We want to push for the Australian government to get more involved in these international negotiations, and we want this to become a mainstream policy topic for public discussion, instead of companies just determining the future.

“General AI systems could be used to develop new drugs, and new bioweapons, and it’s important we set up global regulations and conduct more research to find out whether it’s possible to make the technology safe. And if it turns out not to be possible, then we need a global moratorium.”

Co-organiser Mark Brown said, at a minimum, the protest was designed to amplify the discussion around AI safety. “If you get a system that is smarter than your species, and you don’t have a plan, you’re going to have a problem,” he said.
