Eight specialized research laboratories advancing the frontiers of AI safety, governance, and security. Our mission: develop frameworks, tools, and knowledge to ensure transformative AI technologies serve humanity safely and securely.
Red teaming, adversarial attack simulation, robustness evaluation, and defense mechanisms for AI systems across multiple domains.
Current Projects
Adversarial example generation for defense AI
Certified robustness benchmarks
Transfer attack evaluation frameworks
Adversarial training at scale
Funding: DARPA AI Next, ONR, NSF CAREER
Byzantine Fault Tolerance Lab
BFT
Lab Director: Dr. Dmitri Volkov
14
Researchers
38
Publications
Distributed AI consensus mechanisms, Byzantine Council simulation with 33-agent deployments, multi-party consensus for aligned AI systems.
Current Projects
Byzantine consensus for LLM governance
33-agent Byzantine Council prototypes
Scalable distributed agreement protocols
Fault tolerance in multi-stakeholder systems
Funding: ARPA-H, Anthropic Safety, BMCC Cyber
Autonomous Systems Ethics Lab
ASEL
Lab Director: Dr. Priya Sharma
12
Researchers
31
Publications
Governance frameworks for autonomous vehicles, drones, robotics, and embodied AI systems. Human oversight, ethical decision-making architectures.
Current Projects
Autonomous vehicle governance frameworks
Drone regulation policy development
Robotics ethics in defense contexts
Meaningful human control architectures
Funding: NSF, NHTSA, DoD Research
Quantum-AI Security Lab
QASL
Lab Director: Dr. Kenji Yamamoto
15
Researchers
41
Publications
Post-quantum cryptography for AI systems, quantum-resistant protocols, collaboration with Orbit-Q on quantum defense solutions.
Current Projects
Post-quantum AI authentication
Quantum-resistant neural networks
Lattice-based cryptography for ML
Defense against quantum threats
Funding: NSF PQC, NIST, Orbit-Q Partnership
Natural Language Governance Lab
NLGL
Lab Director: Dr. Aisha Patel
18
Researchers
47
Publications
LLM safety, alignment research, prompt injection defense, jailbreak evaluation, and governance frameworks for large language models in critical applications.
Current Projects
Prompt injection detection and mitigation
Jailbreak evaluation benchmarks
Constitutional AI for governance
LLM alignment verification
Funding: Anthropic, OpenAI Safety, NIST AI RMF
Computer Vision Ethics Lab
CVEL
Lab Director: Dr. Marcus Liu
13
Researchers
35
Publications
Facial recognition governance, surveillance AI ethics, bias in vision systems, and policy frameworks for responsible computer vision deployment.
Current Projects
Facial recognition policy frameworks
Vision system bias evaluation
Surveillance ethics governance
Privacy-preserving computer vision
Funding: DARPA SemaFor, Privacy Tech Consortium
Federated Learning Privacy Lab
FLPL
Lab Director: Dr. Sarah Goldstein
14
Researchers
43
Publications
Distributed machine learning privacy, differential privacy techniques, secure multi-party computation, and privacy-preserving AI for defense applications.
Current Projects
Differential privacy implementation
Federated learning for defense
Privacy budget management
Secure computation frameworks
Funding: DARPA AI Next, NSF, ONR
Explainable AI Lab
XAIL
Lab Director: Dr. Robert Chen
17
Researchers
53
Publications
AI interpretability, transparency frameworks, decision auditing, and explainability techniques for high-stakes military and defense AI systems.
Current Projects
Neural network interpretability methods
Decision audit frameworks
Explainability for defense AI
Human-AI collaboration transparency
Funding: DARPA XAI, DoD Science, NSF
How We Work
Research Collaboration & Impact
Interdisciplinary Integration
Our labs collaborate across computer science, policy, economics, and defense expertise, producing comprehensive solutions that bridge research and real-world implementation.
Industry Partnerships
We partner with leading technology companies for real-world testing, implementation validation, and emerging technology assessment in production environments.
Government Engagement
Direct collaboration with defense ministries and security agencies ensures our research translates into actionable policy and strategic capability development.
Academic Network
Partnerships with leading universities enable knowledge sharing, talent recruitment, and access to cutting-edge research infrastructure globally.
Explore More
Related Resources
Publications
Explore our 340+ peer-reviewed publications from leading journals and conferences in AI safety and governance research.