Dario Amodei, chief executive of the high-profile AI start-up Anthropic, told Congress last year that new AI technology could soon help unskilled but malevolent people create large-scale biological attacks, such as the release of viruses or toxic substances that cause widespread disease and death.
Senators from both parties are alarmed, while AI researchers in industry and academia debate how serious the threat is.
Today, more than 90 biologists and other scientists who specialize in AI technologies used to design new proteins — the microscopic mechanisms that drive all creations in biology — signed an agreement that seeks to ensure their AI-aided research moves forward without exposing the world to serious harm.
The biologists, who include the Nobel laureate Frances Arnold and represent laboratories in the United States and other countries, also argue that the latest technologies will have far more benefits than downsides, including new vaccines and drugs.
“As scientists engaged in this work, we believe that the benefits of current AI technologies for protein design far outweigh the potential for harm, and we want to ensure that our research remains beneficial for all going forward,” the agreement reads.
The agreement does not seek to suppress the development or distribution of AI technologies. Instead, the biologists aim to regulate the use of the equipment needed to manufacture new genetic material.
It is this DNA manufacturing equipment that ultimately allows for the development of bioweapons, said David Baker, the director of the Institute for Protein Design at the University of Washington, who helped shepherd the agreement.
“Protein design is just the first step in making synthetic proteins,” he said in an interview. “You then have to actually synthesize the DNA and move the design from the computer into the real world — and that is the appropriate place to regulate.”
The agreement is one of many efforts to weigh the risks of AI against its possible benefits. As some experts warn that AI technologies could help spread disinformation, replace jobs at an unprecedented rate and perhaps even destroy humanity, tech companies, academic labs, regulators and lawmakers are struggling to understand these risks and find ways to address them.
Dr. Amodei’s company, Anthropic, builds large language models, or LLMs, the new type of technology that powers online chatbots. When he testified before Congress, he argued that the technology could soon help attackers develop new bioweapons.
But he acknowledged that this is not yet possible. Anthropic recently conducted a detailed study showing that if someone were trying to acquire or design biological weapons, LLMs were only marginally more useful than an ordinary internet search engine.
Dr. Amodei and others worry that as companies improve LLMs and combine them with other technologies, a serious threat will emerge. He told Congress that this was only two to three years away.
OpenAI, the maker of the online chatbot ChatGPT, ran a similar study that showed LLMs were not significantly more dangerous than search engines. Aleksander Mądry, a professor of computer science at the Massachusetts Institute of Technology and OpenAI’s head of preparedness, said he expects researchers to continue improving these systems, but that he has yet to see evidence that they can help create new bioweapons.
Today’s LLMs are created by analyzing massive amounts of digital text drawn from across the internet. This means they largely regurgitate or recombine what is already available online, including existing information on biological attacks. (The New York Times has sued OpenAI and its partner, Microsoft, accusing them of copyright infringement during this process.)
But in an effort to speed up the development of new drugs, vaccines and other useful biological materials, researchers are beginning to build similar AI systems that can generate new protein designs. Biologists say such technology could also help attackers design biological weapons, but they point out that actually making the weapons would require a multimillion-dollar laboratory, including DNA manufacturing equipment.
“There are some risks that don’t require millions of dollars in infrastructure, but those risks are long-standing and not related to AI,” said Andrew White, a co-founder of the nonprofit Future House and one of the biologists who signed the agreement.
The biologists called for the development of security measures that would prevent DNA manufacturing equipment from being used with harmful materials – though it is unclear how those measures would work. They also called for safety and security reviews of new AI models before they are released.
They do not argue that the technologies should be bottled up.
“These technologies should not be held by just a small number of people or organizations,” said Rama Ranganathan, a professor of biochemistry and molecular biology at the University of Chicago, who also signed the agreement. “The scientific community should be free to explore them and contribute to them.”