Policy Intelligence

AI CBRN Risks: Governance Lessons from the Most Dangerous Misuses of AI

Lessons learned from managing CBRN AI risks can offer a blueprint to help advance AI governance in other high-risk domains.

August 30, 2024
Author(s)
Raina Talwar Bhatia
Contributor(s)
Evi Fuelle
Lucía Gamboa

In an interconnected world experiencing rapid technological progress, inadequate control over Chemical, Biological, Radiological, and Nuclear (CBRN) materials poses a significant risk to humans, animals, and the environment, and the rapid development of artificial intelligence (AI) adds new dimensions and possibilities to this risk landscape. As demonstrated by COVID-19, CBRN threats, whether due to natural occurrence, accidental release, or deliberate misuse, have the potential to pose systemic risk to society, making the challenge of managing these risks ever more pressing.

Historically, CBRN materials have been contained and regulated by governments and through international agreements and treaties. However, AI has the potential to amplify the destructive capability of CBRN threats by democratizing access to threat-development capabilities for both well- and ill-intentioned actors. Researchers have already demonstrated the potential dangers of AI misuse in this field, underscoring the critical need for strong governance and global cooperation to address these novel risks.

The table below shows examples for each primary type of CBRN threat, highlighting the diverse range of risks associated with these hazards:

Background: CBRN Risks Increase with AI

In 2022, a team of global scientists developed an AI tool for drug-discovery optimization, designed to find chemicals with the least toxicity and the fewest negative side effects for patients. However, after the researchers adjusted the parameters to steer the model toward greater toxicity, the same model generated known chemicals that have been used as chemical weapons, as well as new chemicals predicted to be more toxic than known chemical weapons.

In June 2023, as a class activity at MIT, students were instructed to ask a Large Language Model (LLM) to identify four viruses or bacteria with the potential to cause the next pandemic. The students were then able to extract further information from the LLM chatbot on how to develop these threats in a laboratory, source materials from suppliers, and other related details.

Most concerningly, as governments consider integrating autonomous AI systems into military systems, researchers from Georgia Tech, Stanford, and Northeastern University conducted an exercise in January 2024 examining how AI systems would react to military threats in fictional scenarios. The study placed five LLM-based agents in "war-gaming" scenarios and found that the models tended to escalate conflict and, in rare cases, advocated for the use of nuclear weapons.

As technological advances continue in both traditional machine learning and the development of LLMs, possible AI-related CBRN threats will only continue to grow. Artificial intelligence is a tremendously powerful dual-use technology, and collaboration between academia, policymakers, and industry in CBRN-related fields is therefore key to informing action and regulation moving forward.

Deep-Dive: U.S. Government Approach to CBRN Governance 

The October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI mandated the drafting of two reports: one from the Department of Homeland Security (DHS), published on April 26, 2024, focused on understanding and mitigating AI misuse in the development or deployment of CBRN threats, and a second from the National Telecommunications and Information Administration (NTIA), published on July 30, 2024. The NTIA report addressed the U.S. government's approach to managing dual-use foundation models: AI models that, due to their broad capabilities, could be modified to perform tasks that pose serious security risks, including lowering the barriers for non-experts to design, synthesize, or deploy CBRN weapons.

The DHS report evaluates the risk of AI being misused to assist in the development or use of CBRN threats, with a particular focus on biological weapons. While high-level, it includes policy recommendations that can advance governance of high-risk AI overall, notably the need to:

  • standardize practices and develop model guidelines to encourage consistent red-teaming exercises and third-party evaluations;
  • develop granular release practices for the source code and model weights of biological and/or chemical foundation models and general-purpose biological or chemical design tools;
  • establish "safe harbor" reporting processes for vulnerabilities and develop criteria to protect sensitive data in public databases;
  • leverage mechanisms to promote information sharing and establish best practices and risk mitigations for CBRN-specific dual-use AI developments that could pose a national or economic security risk.

The July 2024 NTIA report complements the DHS findings, focusing on the risks posed by the accessibility of AI model weights. It suggests that the U.S. government should consider restricting access to certain dual-use foundation models that could be exploited for harmful purposes. The NTIA also emphasizes the need for continuous evaluation and research into AI model risks and encourages international cooperation to promote transparency and accountability in AI development, particularly regarding CBRN risks.

Potential Future Actions

The DHS and NTIA reports are important steps toward understanding and mitigating the risks associated with AI and CBRN threats. Building on existing federal laws and regulations with the recommendations made in these reports is a practical approach to regulating AI. However, we need to be mindful of the limitations of traditional interagency coordination efforts in rapidly advancing scientific fields like biotechnology. For example, regulations governing what can be researched in a life-science laboratory typically reference specific bacteria or viruses by name; they should also account for the symptoms and effects those agents produce. In a world of quickly evolving, accessible biotechnologies like CRISPR (a technology that scientists use to selectively modify the DNA of living organisms) and other advances in synthetic biology, existing laws and regulations similarly need to evolve to maintain security and reduce risks.

While most enterprises may not have to worry about mitigating CBRN risks when developing or procuring AI, we can learn from the work already underway to prevent the widespread misuse of AI in the development or production of CBRN threats. The strategies to mitigate these dangers, such as interagency cooperation, standardized red-teaming practices, and careful consideration of AI model accessibility, can and should inform governance approaches for other high-risk AI applications.

Conclusion

The need for developing detailed safeguards in AI governance, especially in existential areas such as CBRN, is similar to the need for strong locks on doors. While red-teaming exercises and existing regulations are useful in providing some security, they are akin to a sturdy door that will deter casual threats, but not determined intruders. AI developers and deployers, both big and small, need to develop the strong, reinforced ‘locks’ that make their products safe and limit their ability to cause harm in CBRN or other areas.

Some of the most significant innovations in AI come from sectors where physical safety and security are paramount, such as defense and healthcare. By creating and applying rigorous standards for critical-risk AI systems, governments around the world can ensure that AI enhances society without compromising security.

The limitations of existing systems are not unique to CBRN; they are inherent to all high-risk AI technologies. The lessons learned in managing these specific risks can offer a blueprint for addressing similar concerns in other domains, such as cybersecurity applications of AI, the generation of harmful content like disinformation or deepfakes, and AI used for observation in fields such as medicine or transport.

As governments work to determine future strategies to address the intersection of AI and CBRN threats, it is critical that they recognize that regulation requires not just risk identification, but also proactive and adaptive governance that evolves with technological advancements.
