In January 2026, Las Vegas police and the FBI raided a suburban home and found multiple refrigerators, freezers, laboratory equipment, and “numerous bottles… containing unknown liquid substances.” The illegal biolab was connected to a similar operation discovered in Reedley, California roughly three years earlier. The Reedley facility contained thousands of vials of biological material, including potential pathogens such as HIV, malaria, tuberculosis, COVID-19, and Ebola; both labs were run by the same individual. Although authorities had known about potential Las Vegas connections since 2023, it took a separate tip three years later to uncover the second operation.

Biorisk is often discussed in hypothetical terms, with historical case studies pointing mostly towards failures and bottlenecks. But the Las Vegas biolab raid and its connection to the 2022 Reedley discovery show that dangerous biological operations were not merely possible: they were already happening under the radar across multiple states.

Historically, technical knowledge has been the limiting factor. When Aum Shinrikyo tried to weaponize anthrax in 1993, they had motivation and funding but lacked expertise: they used the wrong bacterial strain, achieved poor concentrations, and suffered equipment failures. Knowledge gaps killed their program. AI is quickly eroding that constraint. Publicly available models like o3 already outperform 94% of virology experts on laboratory protocol questions, even on questions directly relevant to the experts’ specialties. That benchmark performance hasn’t yet translated cleanly into end-to-end weapons development guidance, but the trajectory is clear: models are advancing toward the ability to provide expert-level virological guidance on demand, anonymously, and at scale. Anthropic’s own internal bioweapons acquisition uplift trials found that Claude Opus 4 enhanced human performance by 2.53x on relevant tasks, enough to trigger activation of AI Safety Level 3.

For closed-weight models, this risk is at least partially manageable. Input-output classifiers, safety fine-tuning, and know-your-customer requirements can intercept most non-sophisticated actors before they get actionable guidance. These mitigations are imperfect– models remain vulnerable to jailbreaks, and red-teaming studies have repeatedly demonstrated that they can be coaxed into providing dangerous guidance– but they represent a meaningful layer of friction. Open-weight models have no such layer. Once released, they can be downloaded, stripped of safety fine-tuning, and run locally with no oversight, no logging, and no recourse.
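To make the contrast concrete, here is a minimal sketch of what that friction layer looks like for an API-served model. The flagged terms, threshold, and refusal messages are invented placeholders rather than any lab's actual system; the point is only that both gates live on the provider's servers.

```python
# Minimal sketch of an input/output classification gate for an API-served
# (closed-weight) model. The keyword heuristic stands in for a trained classifier;
# terms, threshold, and refusal messages are illustrative placeholders only.

FLAGGED_TERMS = {"select agent", "aerosolize", "enhance transmissibility"}  # illustrative

def risk_score(text: str) -> float:
    """Stand-in for a trained harm classifier; returns a score in [0, 1]."""
    hits = sum(term in text.lower() for term in FLAGGED_TERMS)
    return min(1.0, hits / 2)

def serve_request(prompt: str, generate_fn, threshold: float = 0.5) -> str:
    # Gate the input before it ever reaches the model.
    if risk_score(prompt) >= threshold:
        return "[refused: input flagged by pre-generation classifier]"
    completion = generate_fn(prompt)
    # Gate the output before it is returned to the user.
    if risk_score(completion) >= threshold:
        return "[refused: output flagged by post-generation classifier]"
    return completion
```

Once weights are downloaded and run locally, neither checkpoint exists; that asymmetry is the core of the open-weight problem discussed below.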

The best open-weight models in the world are Chinese (here are our best guesses for why Chinese AI companies open-weight their models).

That leaves the question: if Chinese open-weight models help rogue actors build bioweapons, and we deem that an unacceptable risk, what could the US do? We think there are four broad classes of intervention:

Not all of the interventions proposed here should be implemented by default. The more adversarial recommendations should be (1) reserved for labs that release frontier open-weight models without adequate safeguards,1 and (2) conditional on dual-use evaluations by the U.S. government concluding that open-weight models pose unacceptable risk.

To that end, we also recommend that the U.S. government fund research into robust safeguards for open-weight models, so that the technical alternatives to restriction exist. If open-weight models can be made safe enough, the restrictive interventions described in this paper become unnecessary. The interventions should be understood as buying time for that safety research to succeed, rather than as advocating for a permanent end to open-weight models.

Author’s Note: Follow-up work on mitigating adverse effects
Many of the interventions proposed here impose costs on legitimate users. Startups building on Chinese open-weight models face disruption if those models can’t be served by cloud providers, and academic researchers could lose access to weights they use for interpretability work and safety research. Whether these costs actually materialize depends on two open questions: how quickly robust safeguards for open-weight models can be developed (see Section 2.1), and whether dual-use evaluations conclude that current open-weight models cross the threshold of unacceptable risk.

If safeguards advance fast enough, the restrictive interventions in this paper never need to be triggered. If they are triggered, they should be explicitly temporary: tied to capability thresholds and revoked once adequate safeguards exist. But in the interim, if restrictions are implemented, their costs should be actively mitigated. We intend to explore specific measures– including research exemptions for academic use and safe harbors for existing deployments during a defined transition period– in a subsequent publication.

1 Techniques like pre-training data filtering and alignment pre-training, while not foolproof, can meaningfully reduce the biosecurity risk of open-weight release. Labs that invest in these measures and demonstrate their effectiveness should not be treated the same as labs that release raw capabilities with no mitigations.

1.1 Communicate biosecurity risks through bilateral diplomacy

Background: Chinese policymakers are already aware of AI-enabled biorisk. Their AI Safety Governance Framework 2.0 explicitly identifies CBRN weapon misuse as a serious emerging threat, demonstrating that concerns about AI enabling bioweapons development have already reached senior advisors within the CCP. This creates an opening for direct engagement.

Moreover, China has not fully escaped international scrutiny over COVID-19’s origins; questions about the Wuhan Institute of Virology and biosafety protocols remain politically sensitive. A bioterrorism incident– even a failed attempt– traced to Chinese AI models would trigger immediate international backlash, renewed accusations of lax oversight, and validation of critics who claim Chinese technology creates global risks.

The CCP has legitimate interests in preventing this scenario. Framework 2.0’s inclusion of CBRN risks signals awareness; diplomatic engagement should emphasize that open-weighting makes those risks unmanageable. Unlike closed models, where usage can be monitored and suspicious patterns flagged, open weights eliminate any possibility of intervention between AI guidance and physical harm.

The Intervention: Through diplomatic channels, academic back-channels, and policy publications widely read by CCP officials, make the case directly to Chinese policymakers that open-weight release specifically is incompatible with their own biosecurity goals.

Recommendations

State Department (Bureau of Emerging Threats)

  • Propose a US-China AI Biosecurity Working Group under the existing bilateral AI dialogue, coordinated through the Bureau of Emerging Threats’ Office of Disruptive Technology, with a joint risk assessment deliverable within 6 months

Executive Branch

  • Direct the intelligence community, via an NSC demand signal, to prioritize collection on terrorist organizations’ potential AI adoption, with particular attention to designated foreign terrorist organizations’ use of Chinese open-weight models (see 1.2, 2.4)

Department of War (Defense Threat Reduction Agency– DTRA)

  • Commission a narrowly scoped DTRA assessment focused specifically on the biosecurity uplift delta between open-weight and closed-weight models, quantifying how much additional risk open-weight access creates compared to what closed API-served models enable
    • The assessment should use a bounded-capability adversary model, specifying constraints on compute, fine-tuning datasets, and assumed domain expertise for plausible threat actor classes (e.g., an individual with consumer hardware versus a state-sponsored program with datacenter access)
    • Key variables may include: the uplift from abliteration (removing safety fine-tuning, which requires minimal technical sophistication for open-weight models); the uplift from domain-specific fine-tuning on publicly available bioweapons-relevant literature; and the capability ceiling of the most powerful open-weight models currently available
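As a concrete illustration of how such a bounded-capability adversary model might be specified, here is a hypothetical sketch; the profile classes, fields, and example values are our assumptions for exposition, not DTRA methodology.

```python
from dataclasses import dataclass

@dataclass
class AdversaryProfile:
    """Hypothetical adversary specification for a bounded-capability assessment."""
    name: str
    compute: str                # e.g. "single consumer GPU" vs. "datacenter-scale cluster"
    fine_tuning_data: str       # bioweapons-relevant corpora the actor could plausibly assemble
    domain_expertise: str       # assumed prior knowledge, from none to dedicated expert teams
    can_strip_safeguards: bool  # whether removing safety fine-tuning from open weights is in scope

THREAT_ACTOR_CLASSES = [
    AdversaryProfile("lone actor", "single consumer GPU", "public literature only",
                     "no formal training", True),
    AdversaryProfile("organized non-state group", "small rented cluster",
                     "curated public corpora", "one or two trained members", True),
    AdversaryProfile("state-sponsored program", "datacenter access",
                     "broad public and proprietary corpora", "dedicated expert teams", True),
]

# For each profile, the assessment would estimate the uplift delta between
# open-weight access (with safeguards stripped) and monitored closed API access.
```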

Academic & Track II

  • Engage the China Academy of Information and Communications Technology (CAICT), the MIIT-affiliated research institute that developed China’s first industry standards for evaluating generative AI products
    • CAICT’s assessment criteria feed into the CAC’s mandatory model registration process, making it the most direct channel for integrating biosecurity considerations into China’s existing pre-release evaluation framework
  • Introduce open-weight biosecurity risk as a formal agenda item within ITU and ISO standardization processes, where Chinese regulators already hold significant seats
  • Fund a joint NAS–Chinese Academy of Sciences research project on AI-enabled biosecurity risks from open-weight models, with research funding through NIH
  • Commission analysis (through institutions like CSET, RAND, or the Brookings-Tsinghua Center) in Chinese-policymaker-facing outlets (China Quarterly, The Diplomat), co-authored with Chinese-affiliated researchers where possible

1.2 Publicize misuse to generate international pressure

Background: Following the assassination of Iran’s Supreme Leader Ayatollah Khamenei in February 2026, terrorism experts warn of increased risks from both state-sponsored Iranian networks and radicalized individuals, with concerns that Shia extremists could carry out attacks in support of Iran. If such actors were discovered using open-weight models to plan or execute attacks, this would provide concrete evidence that open-weight models create immediate security threats.

The Intervention: Investigative journalism or intelligence reports demonstrating that open-weight models are actively being used by terrorist organizations could generate sufficient international backlash to pressure the CCP into committing to stop open-weighting.

Recommendations

Intelligence Community & NSC

  • Prioritize collection on terrorist organizations’ AI adoption with particular attention to use of open-weight models by designated foreign terrorist organizations (see 1.1, 2.4)
  • Establish a pre-coordinated rapid declassification pipeline for AI-misuse intelligence, modeled on the Biden administration’s preemptive intelligence releases before the Ukraine invasion

Investigative & OSINT Infrastructure

  • Ensure declassified intelligence reaches credible investigative outlets through established channels
  • Commission an external research organization (Middlebury Institute Center on Terrorism, Extremism, and Counterterrorism, or the Global Network on Extremism and Technology) to continuously monitor extremist forums and dark web spaces for evidence of open-weight model use

2.1 Fund research into robust safeguards for open-weight models

Background: If open-weight models can be made safe enough that release no longer creates unacceptable biosecurity risk, then restrictions on their release become unnecessary. This is the ideal situation given the drawbacks associated with stopping open-weight releases (detailed in this post).

Techniques like pre-training data filtering and alignment pre-training show promise, but are under-researched; there is currently no open-weight safeguard that is both foolproof and practically deployable by labs.
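As a rough illustration of what pre-training data filtering involves, here is a minimal sketch. The indicator list and string-matching rule are placeholders; real filters rely on trained classifiers and expert-curated term and sequence databases rather than a handful of keywords.

```python
from typing import Iterable, Iterator

# Illustrative placeholder list; a real filter would use expert-curated databases
# and trained classifiers rather than literal string matching.
SENSITIVE_INDICATORS = {"variola major", "botulinum toxin synthesis", "gain-of-function protocol"}

def is_biosecurity_sensitive(document: str) -> bool:
    text = document.lower()
    return any(indicator in text for indicator in SENSITIVE_INDICATORS)

def filter_pretraining_corpus(documents: Iterable[str]) -> Iterator[str]:
    """Yield only documents that pass the biosecurity filter before training."""
    for doc in documents:
        if not is_biosecurity_sensitive(doc):
            yield doc
```

The appeal of this approach is that content the model never saw cannot be recovered simply by stripping safety fine-tuning after release, which is why it matters specifically for open weights.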

The paper “Open Technical Problems in Open-Weight AI Model Risk Management” provides the most comprehensive current map of the research frontier and identifies 16 open problems that need further research.

The Intervention: Direct substantial, sustained funding toward the specific problem of making open-weight release safe. Currently, most AI safety research funding flows toward closed-model alignment work (making frontier lab models less susceptible to misuse through their APIs, e.g. input-output classifiers). This work is valuable, but largely irrelevant to the open-weight biosecurity problem; a model that is well-aligned at the API level provides no protection once its weights are downloaded and its safety fine-tuning is stripped.

The research agenda required to make open-weight release safe is distinct from the agenda currently receiving the most attention and funding, and that gap is not self-correcting, because closed labs have limited commercial incentive to invest in safety techniques that primarily benefit their open-weight competitors.

Ideal Implementing Party: U.S. government funding agencies (NSF, DARPA) for foundational research, complemented by private philanthropic foundations (Coefficient Giving, Longview) for grants that can be deployed more quickly. Both should fund research that is focused on safeguards that are robust to post-release modification and published openly, so that the results are available to labs worldwide– including Chinese labs that might adopt them voluntarily.


2.2 Implement conditional bans on Chinese open-weight models

Restrictions apply only above defined capability thresholds, as assessed by designated U.S. agencies using standardized dual-use capability evaluations.

A note on scope: This intervention does not directly prevent the paper’s core threat scenario; a terrorist downloading weights to run locally will not be deterred by a commercial deployment ban. The mechanism is indirect: Chinese labs often open-source in part because ecosystem adoption in Western markets generates developer loyalty, tooling investment, and cloud revenue. If that adoption is banned in the U.S. and allied countries, the strategic payoff of open-sourcing shrinks, which changes the calculus for future releases. This is a pressure measure aimed at the lab’s incentive structure, rather than a security measure aimed at the end user. For interventions targeting the end user directly, see Section 2.3 and all of Section 4.

Background: Several major bills already propose restricting Chinese AI models in the U.S. (e.g. the No DeepSeek on Government Devices Act, the No Adversarial AI Act, and the Decoupling America’s Artificial Intelligence Capabilities from China Act). However, none distinguish between open-weight and closed-weight releases, which is the distinction most relevant for assessing biosecurity risk.

The Intervention: The U.S. and allied governments2 could restrict commercial use and deployment of Chinese open-weight models on national security grounds, while explicitly exempting closed Chinese models that meet defined safety criteria.3 This targets the commercial ecosystem, not individuals. Weights already released remain downloadable.

Recommendations

Department of Commerce (BIS)

  • Prohibit U.S. cloud providers (Amazon Web Services, Microsoft Azure, Google Cloud) from hosting, serving, or otherwise enabling access to designated models through managed AI services using IEEPA transaction-blocking authority
  • Establish a “safe harbor” framework for closed Chinese models that meet defined safety criteria (e.g. monitored API access, CBRN query refusal) to be explicitly exempt from distribution restrictions
    • Chinese labs that keep weights closed and demonstrate adherence to these criteria retain access to U.S. companies

State Department

  • Encourage allied adoption of similar restrictions through the Group of Seven and AUKUS frameworks

2 Note: Non-U.S. governments have independent data sovereignty incentives to prefer open-weight models run on domestic infrastructure. For this reason, allied coordination should focus on shared safety certification standards, allowing allied governments to adopt frameworks consistent with their own legal and sovereignty contexts rather than pushing for equivalent restrictions to what the U.S. implements.

3 This exemption addresses the biosecurity risk specifically; broader national security concerns around data security and espionage in closed Chinese AI services remain, and should be managed through existing cybersecurity frameworks/network security policies.


2.3 Reduce individual adoption of Chinese AI through public risk communication

Background: Section 2.2 focuses on commercial use. However, the individual developer who downloads Chinese open-weight models to run locally is beyond the reach of cloud-provider restrictions, and attempting to ban individual access outright is largely unenforceable.

The most promising lever here is likely to be trust. Chinese technology companies have a well-documented history of security negligence and undisclosed data sharing with state-linked entities– patterns that extend directly to Chinese AI products. DeepSeek’s own track record illustrates this: security researchers at Wiz discovered a completely unauthenticated database exposing over a million lines of chat history and API keys; SecurityScorecard’s STRIKE team identified undisclosed data transmissions to Chinese state-linked entities along with ByteDance-owned libraries embedded in the codebase; and an independent audit by NowSecure found that the iOS app transmitted user data without proper encryption, using deprecated cryptography and hardcoded keys.

These security and data privacy failures from the highest-profile Chinese AI company in the world are consistent with a broader pattern that includes TikTok’s data practices, the FCC’s ban on Huawei and ZTE equipment as a national security risk, and the FCC’s investigation of China Mobile’s U.S. operations over concerns regarding government control and data access.

Running models locally avoids data exfiltration risks. However, when a developer or enterprise adopts a Chinese open-weight model, they’re making a long-term dependency decision. The security culture visible in a lab’s cloud products is the best available signal for how that organization handles the aspects of model development that users can’t observe, including training data provenance, pre-release evaluation rigor, and responsiveness to discovered vulnerabilities. Labs that fail at the basics of API security are not labs you want as a foundational dependency.

A coordinated public communications strategy that makes these risks clear to individual developers and researchers, situated within the broader context of Chinese tech trustworthiness, could reduce adoption without requiring enforcement.

Recommendations

CISA & NSC

  • Issue public advisories– modeled on existing CISA cybersecurity advisories– warning individual users about documented security risks of Chinese AI products
  • Brief enterprise CISOs and CTOs through existing government-industry channels (CISA’s Joint Cyber Defense Collaborative, sector-specific ISACs) on risks of Chinese AI adoption

NIST

  • Publish a framework for evaluating AI model trustworthiness, covering training data transparency, corporate governance structure, data jurisdiction, regulatory environment, security track record, and similar criteria (an illustrative sketch follows)
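As a purely illustrative example, the sketch below shows the kind of weighted rubric such a framework might define; the criteria names and weights are our assumptions, not NIST guidance.

```python
# Hypothetical trustworthiness rubric; criteria and weights are illustrative only.
TRUST_CRITERIA = {
    "training_data_transparency": 0.20,
    "corporate_governance": 0.15,
    "data_jurisdiction": 0.20,
    "regulatory_environment": 0.15,
    "security_track_record": 0.30,
}

def trust_score(ratings: dict[str, float]) -> float:
    """Weighted average of per-criterion ratings, each rated on a 0-1 scale."""
    return sum(weight * ratings.get(criterion, 0.0) for criterion, weight in TRUST_CRITERIA.items())
```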

State Department

  • Coordinate allied messaging through Five Eyes and the G7, ensuring equivalent public advisories from the UK, EU, Japan, and Australia

Other

  • Support independent security researchers publishing technical audits of Chinese AI products
    • This will generate organic media coverage and developer community discussion
  • Encourage industry-led voluntary commitments from U.S. tech companies to disclose Chinese AI dependencies in their supply chains, similar to software bill of materials (SBOM) requirements

2.4 Sanction individuals and designate companies under counterterrorism authorities

Background: Executive Order 13224, originally signed after 9/11, blocks assets of entities that “commit, threaten to commit, or support terrorism.” The authorities under EO 13224 have been applied expansively in recent years, including to non-state actors such as transnational criminal organizations.

The Intervention: If open-weight models are demonstrably used in a terrorist attack or weapons development, the U.S. could consider designating responsible entities under Executive Order 13224 where there is a substantial nexus between the model release and a terrorist act.

Individual sanctions would freeze personal assets, bar travel, and make any financial institution that does business with the designated person a target for secondary sanctions. For a lab CEO or CTO, this means: you personally cannot hold a U.S. bank account, use a Visa card, fly through an allied country, or transact with any entity that wants to keep its access to the U.S. financial system.

The threshold for designation is high, and premature use could provoke serious diplomatic backlash. But the credible threat of designation may be enough to deter labs from releasing open-weight models.

Recommendations

Treasury (OFAC) & State Department (Bureau of Counterterrorism)

  • Designation requires a factual nexus between the entity and a terrorist act; the evidentiary case would likely involve intelligence agencies documenting the use of a specific open-weight model in a specific attack or credible plot

This pairs directly with the intelligence collection and declassification pipeline described in 1.2: that section builds the evidence base; this section describes how to act on it.

3.1 Promote open-weight safety practices through academic channels

Background: If labs are planning to open-source frontier models regardless, the next best outcome is that the models they release have robust safety measures baked in at the pre-training level.

Techniques like pre-training data filtering and alignment pre-training show promising early results for reducing the biosecurity risk of open-weight models, though none of the methods researched so far is fully robust (see Section 2.1). To date, no Chinese lab has adopted any of these techniques.

The Intervention: Use existing academic channels and conference networks to promote adoption of open-weight safety practices.

Researcher-to-researcher dialogue already happens organically; the major ML conferences ensure that the relevant people are in the same rooms every year. A targeted conversation backed by a compelling technical demonstration of what a safety-stripped open-weight model can produce, paired with evidence that pre-training safeguards can prevent misuse without degrading general capabilities, may be sufficient to shift a lab’s calculus.

Ideal Implementing Party: AI safety researchers and biosecurity experts with existing relationships to Chinese AI labs, operating through academic channels and conference networks rather than formal diplomatic channels.


3.2 Offer conditional compute access in exchange for closed weights

Background: In December 2025, the Trump administration announced it would allow Nvidia to export H200 chips to China, with formal regulations issued in January 2026 permitting sales to approved customers under security conditions. These decisions demonstrate the administration’s willingness to selectively adjust export controls on a case-by-case basis; a natural next step is to introduce explicit conditionality into licensing decisions, where labs that release frontier open-weight models face an increased likelihood of license denial or revocation.

The Intervention: The US could use export control licenses as direct leverage, conditioning access to advanced compute on model release practices, including favoring developers that keep frontier model weights closed. This alters the incentive structure around open-weight releases by linking them to advanced compute access (see #1 in this article).

Note: This intervention applies only if advanced chips continue to be exported to China; if such exports continue, the conditionality should be implemented, though we think it would be better not to export advanced chips at all.

Recommendations

Department of Commerce (BIS)

  • Amend existing export control licenses to state that a company’s decision to publicly release frontier model weights should be treated as a significant negative factor in licensing decisions, including potential denial or revocation of access to advanced chips
    • Note: Chinese labs could trivially game the system by maintaining separate entities (one closed to preserve chip access and one open-sourcing under a different name). BIS should define “affiliated entity” broadly in license terms– covering subsidiaries, shared investors, shared personnel, and shared compute infrastructure– to reduce the effectiveness of this workaround.
  • Finalize pending KYC rules for U.S. cloud providers serving foreign clients, and extend the conditionality framework above to cloud compute access so that entities affiliated with labs that release designated frontier models without adequate safeguards face denial or revocation of large-scale compute purchases from U.S. providers

State Department

  • Coordinate with allied semiconductor-adjacent governments (Netherlands/ASML, Japan/Tokyo Electron, South Korea/Samsung, etc) to encourage similar conditionality on their equipment exports

Highly capable open-weight models have already been downloaded millions of times and distributed globally; we cannot un-release them. Even if every Chinese lab stopped open-sourcing tomorrow, DeepSeek R1, Qwen, Kimi k2.5, and other capable models already out there can meaningfully uplift some stages of bioweapon development. Therefore, even as we work to prevent future releases through Levels 1-3, we must simultaneously harden the biological supply chain against models already in circulation.

Level 4 interventions complement rather than replace efforts to stop new releases. While Levels 1-3 target the source (preventing Chinese and Western labs from open-sourcing future models), Level 4 creates barriers at the point where AI guidance must translate into physical biological materials. These interventions work regardless of who releases what, since they target the supply chain itself.

Defense requires both strategies working in parallel: stopping new, more capable models from being released (which would further lower barriers and enable more sophisticated attacks) while simultaneously creating friction at every stage where existing AI capabilities must interface with the physical world.

4.1 Mandate AI-enabled DNA synthesis screening

Background: Currently, there is no universal legal requirement for gene synthesis providers to conduct background checks on clients or screen DNA sequences to ensure they’re not dangerous pathogens. While some labs screen orders on a voluntary basis as a condition of membership in organizations like the International Gene Synthesis Consortium, compliance remains optional and inconsistent. In 2006, an investigative journalist with the Guardian was able to mail-order a “modified sequence of smallpox DNA”; the order was not screened by the provider because it was less than 100 nucleotides long.

Even in the United States, no binding legal requirements exist for DNA synthesis screening. Federal regulations were proposed in 2024 through the Framework for Nucleic Acid Synthesis Screening, but an Executive Order in May 2025 paused implementation, and no replacement framework has been issued. Moreover, despite several congressional bills attempting to mandate screening, none have passed. This regulatory gap means anyone, including those with malicious intent, can order potentially dangerous genetic sequences with minimal oversight.

The Intervention: Existing screening approaches rely on sequence homology, which means matching orders against databases of known dangerous pathogens. An AI-enabled bioterrorist could circumvent this by designing functionally equivalent pathogens using synonymous codons, chimeric sequences, or entirely novel genetic constructs that retain lethality while evading database matches.

To address this issue, we need to implement advanced AI-powered screening that would analyze predicted protein function and evolutionary markers to flag potentially dangerous sequences.

Implementation requires two components: first, we must develop reliable AI screening systems capable of detecting novel pathogenic sequences that the world has never seen before; second, we must require all commercial DNA synthesis providers globally to implement this screening as a condition for legal operation.
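The sketch below illustrates, under heavy simplification, how the two components could fit together: an exact k-mer homology check against a sequences-of-concern database (reflecting existing practice), plus a placeholder hook for the AI-based functional analysis this section calls for. Database contents, k-mer length, and thresholds are illustrative assumptions, not a validated screening standard.

```python
def kmers(seq: str, k: int = 31) -> set[str]:
    """All length-k substrings of a DNA sequence, uppercased."""
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def homology_hit(order_seq: str, concern_db: list[str], k: int = 31, min_shared: int = 5) -> bool:
    """Flag orders that share many exact k-mers with any sequence of concern."""
    order_kmers = kmers(order_seq, k)
    return any(len(order_kmers & kmers(ref, k)) >= min_shared for ref in concern_db)

def functional_hit(order_seq: str) -> bool:
    """Placeholder for predicted-function screening (e.g. protein function models)
    meant to catch recoded or novel sequences that evade homology matching."""
    return False  # stand-in; a real implementation would call a validated model

def screen_order(order_seq: str, concern_db: list[str]) -> str:
    if homology_hit(order_seq, concern_db):
        return "flag: homology match to a sequence of concern"
    if functional_hit(order_seq):
        return "flag: predicted function of concern"
    return "clear"
```

The vulnerability described above is precisely the gap between these two checks: homology matching catches known sequences, while synonymous recoding, chimeric designs, and novel constructs require the functional layer.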

Recommendations

Department of Health and Human Services / OSTP
(immediate actions while waiting for legislation from Congress)

  • Finalize the paused Framework for Nucleic Acid Synthesis Screening with a phased screening mandate:
    • Phase 1 (immediate): require sequence homology screening against databases of known dangerous pathogens, using existing methods
    • Phase 2 (triggered once a validated AI screening tool is available– see Congress section below): require screening that integrates AI-powered functional analysis
      • Organizations like IBBIS and Fourth Eon Bio are already working on improved screening tools and international standards; Phase 2 should build on and accelerate this existing work
  • Tie federal research funding to compliance: any institution receiving NIH, NSF, or DARPA grants must procure synthetic nucleic acids exclusively from providers that meet the updated screening standards
    • This was already proposed in the 2024 framework and creates immediate market pressure without requiring new legislation

Congress

  • Pass S.3741 with these proposed amendments

International coordination for synthesis screening should be pursued through the multilateral channels described in Section 1, including the BWC and Australia Group frameworks.


4.2 Implement know-your-customer requirements for cloud laboratories and contract research organizations

Background: The standard objection to AI-driven bioweapons risk is that knowledge alone is not enough– you still need hands-on laboratory skills, the “tacit knowledge” that can only be acquired through years of physical practice (hence Active Site’s uplift study). Knowing how to culture a pathogen is different from being able to do it reliably. This barrier has historically been one of the strongest defenses against non-state bioweapons development.

Unfortunately, cloud laboratories threaten to significantly erode this barrier. Services like Emerald Cloud Lab allow anyone to design experiments in software and have them executed by robotic systems in a physical facility, remotely, without ever entering a lab. ECL requires no coding experience; internal estimates suggest relatively short onboarding periods for novice users. An AI system that can design a bioweapons protocol and a cloud lab that can execute it are, individually, semi-manageable risks; together, they could materially increase the risk of misuse.

Despite this, cloud labs currently operate with no standardized customer screening. Contract research organizations (CROs) create a similar vulnerability. CROs provide specialized research services and can help their clients with everything from compound synthesis to biological assays. A malicious actor could potentially decompose a bioweapons development project into seemingly innocuous components and outsource them to different CROs, each unaware of the larger program.

The Intervention: Require all cloud laboratory providers and contract research organizations to implement know-your-customer screening before granting access to experiment execution, as a condition of legal operation. Providers should be required to log all experimental workflows and flag protocols involving select agents or sequences of concern, with automated screening that mirrors (and integrates with) the DNA synthesis screening proposed in 4.1. In addition, establish the Cloud Lab Security Consortium proposed by RAND, modeled on the IGSC.
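A minimal sketch of how the KYC gate and protocol flagging could compose is below; the customer fields, select-agent terms, and escalation step are illustrative placeholders, not an existing provider's system or a regulatory specification.

```python
from dataclasses import dataclass, field

SELECT_AGENT_TERMS = {"bacillus anthracis", "variola", "botulinum"}  # illustrative subset

@dataclass
class Customer:
    name: str
    institution: str
    identity_verified: bool
    stated_purpose: str

@dataclass
class Protocol:
    description: str
    reagents: list[str] = field(default_factory=list)

def kyc_cleared(customer: Customer) -> bool:
    """Identity, affiliation, and stated purpose must all be on file before execution."""
    return customer.identity_verified and bool(customer.institution) and bool(customer.stated_purpose)

def protocol_flags(protocol: Protocol) -> list[str]:
    """Return any select-agent terms that appear in the protocol or its reagent list."""
    text = (protocol.description + " " + " ".join(protocol.reagents)).lower()
    return [term for term in SELECT_AGENT_TERMS if term in text]

def submit_experiment(customer: Customer, protocol: Protocol) -> str:
    if not kyc_cleared(customer):
        return "rejected: customer verification incomplete"
    flags = protocol_flags(protocol)
    if flags:
        # Log and escalate for human biosecurity review instead of auto-executing.
        return f"held for review: flagged terms {sorted(flags)}"
    return "accepted for execution"
```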

Recommendations

Department of Health and Human Services

  • Issue rulemaking requiring all U.S.-based cloud laboratory providers and CROs to verify identity, institutional affiliation, and stated research purpose of every user before granting access to experiment execution
    • Model this on existing Select Agent Program registration requirements
  • Require cloud lab providers and CROs to log all experimental workflows and flag protocols involving select agents, sequences of concern, or pathogen-adjacent procedures, with automated screening that integrates with the DNA synthesis screening framework proposed in 4.1

NIST (National Institute of Standards and Technology)

  • Develop KYC and biosecurity screening standards specifically for cloud laboratories and CROs, defining:
    • Minimum identity verification thresholds
    • Prohibited experiment categories
    • Escalation procedures when flagged protocols are detected

Federal Funding Agencies (National Institutes of Health, National Science Foundation, DARPA)

  • Condition federal research grants on use of cloud labs and CROs that implement baseline KYC screening, using interim agency guidance while NIST develops formal standards (the same procurement-based lever proposed for DNA synthesis screening in 4.1)
    • For CROs specifically, this condition extends to Chinese and Indian providers that cannot demonstrate equivalent customer verification standards, effectively creating a prohibited-suppliers list of non-compliant overseas providers
    • Creates immediate market pressure before binding regulation is finalized

Congress

  • Pass legislation mandating KYC screening for all U.S.-based cloud laboratory providers and CROs as a condition of legal operation, regardless of whether they serve federally funded researchers

International Coordination

  • Support the development of a Cloud Lab Security Consortium (proposed by RAND), modeled on the International Gene Synthesis Consortium
  • Work with allied governments to standardize CRO KYC requirements, reducing the regulatory arbitrage advantage of non-compliant overseas providers

(!) Note: The KYC framework described here for cloud labs should also extend to DNA synthesis providers, which currently face no standardized customer screening requirements either.