The UN Security Council held its first session on Tuesday on the threat of artificial intelligence to global peace and stability, and Secretary General António Guterres called for a global watchdog to oversee a new technology that has stirred at least as much fear as hope.
Mr. Guterres warned that AI could ease a path for criminals, terrorists and other actors who seek to cause “death and destruction, mass trauma, and profound psychological harm on an unimaginable scale.”
Last year’s launch of ChatGPT — which can create texts from prompts, imitate voices and generate images, photos and videos — raised alarms about disinformation and manipulation.
On Tuesday, diplomats and leading experts in the field of AI laid out for the Security Council the risks and threats — along with the scientific and social benefits — of the emerging technology. Much remains unknown about the technology despite its rapid development, they said.
“It’s like we’re building machines without understanding the science of combustion,” said Jack Clark, co-founder of Anthropic, an AI safety research company. Private companies, he said, should not be the sole creators and regulators of AI.
Mr. Guterres said a UN watchdog should act as a governing body to monitor and enforce AI regulations in much the same way other agencies handle aviation, climate and nuclear energy.
The proposed agency would be composed of experts in the field who would share their expertise with governments and administrative agencies that may lack the technical know-how to address AI threats.
The prospect of a legally binding resolution on AI governance remains remote, but most diplomats did endorse the idea of a global governance mechanism and a set of international rules.
“No country will be unaffected by AI, so we must involve and engage with the broadest coalition of international actors from all sectors,” said Britain’s foreign secretary, James Cleverly, who chaired the meeting as Britain held the rotating presidency of the Council this month.
Russia, departing from the Council’s majority view, expressed skepticism that enough is known about the dangers of AI to consider it a threat to global stability. And China’s ambassador to the United Nations, Zhang Jun, pushed back against the creation of a set of international laws, saying that international regulatory bodies should be flexible enough to allow countries to develop their own rules.
However, China’s ambassador said his country opposed the use of AI as a “means to create military hegemony or undermine a country’s sovereignty.”
The military use of autonomous weapons on the battlefield, or in another country for assassinations, such as the satellite-controlled AI robot that Israel sent into Iran to kill a top nuclear scientist, Mohsen Fakhrizadeh, was also raised.
Mr. Guterres said the United Nations should have a legally binding agreement by 2026 banning the use of AI in automated weapons of war.
Prof. Rebecca Willett, director of AI at the Data Science Institute at the University of Chicago, said in an interview that as technology evolves, it’s important not to lose sight of the people behind it.
AI systems are not completely autonomous, and the people who design them need to be held accountable, she said.
“This is one of the reasons the UN is looking at this,” Professor Willett said. “There really needs to be international repercussions so that a company based in one country does not destroy another country without violating international agreements. Real enforceable regulation can make things better and safer.”