
Dario Amodei, chief executive of the high-profile A.I. start-up Anthropic, warned Congress that new A.I. technology could soon help unskilled individuals carry out large-scale biological attacks, such as releasing viruses or toxic substances that cause widespread disease and death.
Senators from both parties were alarmed, while A.I. researchers debated how serious the threat really was.
Now a group of more than 90 biologists and other scientists who specialize in A.I. for protein design has signed an agreement pledging that their research will proceed without putting the world at risk.
These biologists, who include the Nobel laureate Frances Arnold, argue that the benefits of these technologies, including advances in vaccines and medicines, far outweigh the potential for harm.
The agreement states, “As scientists in this field, we believe that the benefits of current A.I. technologies for protein design far surpass the risks, and we want to ensure that our research continues to be beneficial for everyone in the future.”
The biologists do not seek to restrict the development of A.I. technology itself; instead, they aim to regulate the equipment needed to manufacture new genetic material.
Dr. David Baker of the University of Washington explained that DNA manufacturing equipment is essential to creating bioweapons, which is why its use must be regulated.
He noted, “Protein design is just the start; synthesizing DNA and transitioning designs from computers to reality is where regulation should occur.”
The agreement is one of many efforts to weigh the risks of A.I. against its potential benefits. Amid concerns that A.I. could spread misinformation, rapidly replace jobs, or even pose a threat to humanity, governments, companies, and researchers are working to understand and address these risks.
Anthropic builds large language models, and it was this kind of technology that Mr. Amodei told Congress could aid the development of bioweapons.
He acknowledged, however, that this is not yet feasible with today's technology.
While concerns persist that more advanced future systems could enable serious threats, current large language models, such as those behind OpenAI's ChatGPT, are not significantly more dangerous than search engines.
Researchers are also exploring A.I. systems for protein design that could speed the creation of new medicines and vaccines; in theory, the same systems could help attackers design bioweapons, although actually producing them would still require sophisticated equipment and infrastructure.
The biologists call for security measures to prevent the misuse of DNA manufacturing equipment, and for safety reviews of new A.I. models before they are released.
They also advocate keeping these technologies open to exploration and contribution by the broader scientific community, rather than confining them to a handful of people or organizations.