Over 340 peer-reviewed papers on AI safety, governance, and defense security, published in leading journals and conferences, advancing understanding of responsible AI development and deployment at scale.
Adversarial Robustness in Military AI Systems: A Comprehensive Framework for Defense
Sokolov, E., Mitchell, J., Chen, M., Yamamoto, K.
IEEE Transactions on AI & Defense (Special Issue on AI Robustness)
This landmark study presents the first comprehensive framework for evaluating and improving adversarial robustness in military AI systems. Through analysis of 500+ attack scenarios and 15 defense mechanisms, we provide actionable guidelines for deployment in high-stakes defense contexts. Our certified robustness methods have been adopted by three allied defense ministries.
Byzantine Consensus · LLM Safety · Governance · Distributed Systems
2024
Autonomous Systems Ethics: Frameworks for Meaningful Human Control
Sharma, P., Liu, M., Chen, M., Patel, R.
Proceedings of ACM FAccT '24 (Fairness, Accountability, and Transparency)
Presents comprehensive ethical frameworks ensuring humans maintain meaningful control over autonomous systems in defense contexts. Empirical study across 8 military organizations shows framework adoption increases operational confidence by 78% while maintaining tactical flexibility.
Post-Quantum Cryptography for AI Systems: Lattice-Based Protocols
Yamamoto, K., Chen, M., Volkov, D., Sokolov, E.
Journal of Cryptographic Engineering
Develops quantum-resistant cryptographic protocols specifically designed for neural network authentication and secure aggregation in federated learning. Demonstrates computational overhead reduction of 40% compared to previous approaches while maintaining 256-bit quantum security.
LLM Jailbreak Evaluation: Benchmarks and Defense Mechanisms
Patel, A., Sokolov, E., Goldstein, S., Liu, M.
Proceedings of NeurIPS 2024 (Safety & Security Track)
Introduces the LLMDefense benchmark, containing 5,000 adversarial prompts targeting military AI systems. Evaluates 12 state-of-the-art defense mechanisms, revealing critical gaps in existing approaches. Proposes a new constitutional AI framework achieving a 96% defense rate against known attacks.
Facial Recognition Governance: Policy Frameworks for Democratic Societies
Liu, M., Sharma, P., Chen, M., Hoffmann, C.
International Journal of AI and Law
Comparative analysis of facial recognition governance across 15 nations. Identifies policy divergences and proposes harmonized framework balancing security, privacy, and civil liberties. Adopted by EU Commission for AI Act implementation.
Differential Privacy in Federated Learning: Practical Implementations
Goldstein, S., Volkov, D., Chen, M., Patel, A.
ICML 2023 (International Conference on Machine Learning)
Develops practical differential privacy mechanisms for federated learning in defense applications. Reduces privacy budget consumption by 50% through novel gradient clipping strategies. Demonstrates deployment across 200+ government agencies.
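The core mechanism behind differentially private federated learning — clip each participant's gradient to a fixed norm, average, and add calibrated Gaussian noise — can be sketched as follows. The function name `clip_and_noise`, its parameters, and the noise calibration are illustrative assumptions for a minimal sketch, not the paper's actual gradient-clipping strategy.

```python
import numpy as np

def clip_and_noise(gradients, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """DP-style aggregation sketch: clip each per-participant gradient to
    L2 norm <= clip_norm, average, then add Gaussian noise scaled to the
    clipping bound (illustrative calibration, not the paper's)."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in gradients:
        norm = np.linalg.norm(g)
        # Scale down only if the gradient exceeds the clipping bound.
        scale = min(1.0, clip_norm / (norm + 1e-12))
        clipped.append(g * scale)
    mean = np.mean(clipped, axis=0)
    # Noise std shrinks as more participants are averaged.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(gradients),
                       size=mean.shape)
    return mean + noise
```

Because clipping bounds each participant's contribution, the Gaussian noise added to the average yields a quantifiable privacy guarantee; tighter clipping or more noise spends less privacy budget at some cost in accuracy.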
Explainable AI for Military Decision Support: Interpretability Standards
Chen, R., Sokolov, E., Patel, A., Liu, M.
IEEE Transactions on Emerging Topics in Computing
Establishes interpretability standards for AI systems in military command and control. Proposes metrics for explanation quality and human understanding. Implemented in 12 NATO countries' defense systems.
AI Governance Models: Comparative Analysis Across Defense Nations
Al-Rashid, H., Mitchell, J., Hoffmann, C., Sokolov, E.
Proceedings of AIES 2023 (AI Ethics and Society)
Comprehensive comparison of AI governance approaches across 20 leading defense nations. Identifies convergence on core safety principles while respecting regulatory diversity. Proposes international AI governance harmonization pathway.
Certified Adversarial Robustness: Provable Defenses for Neural Networks
Sokolov, E., Chen, M., Volkov, D., Liu, M.
Journal of Machine Learning Research
Develops certified defense mechanisms providing formal guarantees on adversarial robustness. Achieves the first practically deployable certified defense on ImageNet-scale datasets with only a 5% accuracy drop. Adopted for critical infrastructure AI systems.
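One standard way to obtain such formal guarantees is randomized smoothing: classify many Gaussian-perturbed copies of the input, and if the top class wins by a large enough margin, certify an L2 radius within which the prediction cannot change. This sketch illustrates that general technique only; the source does not state that it is the paper's method, and `smoothed_predict` and its parameters are hypothetical.

```python
import numpy as np
from statistics import NormalDist

def smoothed_predict(classify, x, sigma=0.25, n_samples=1000, rng=None):
    """Randomized-smoothing sketch: Monte-Carlo estimate of the smoothed
    classifier's prediction plus a simplified certified L2 radius
    R = sigma * Phi^{-1}(p_A). A rigorous version would replace the raw
    frequency p_A with a binomial confidence lower bound."""
    rng = rng or np.random.default_rng(0)
    counts = {}
    for _ in range(n_samples):
        label = classify(x + rng.normal(0.0, sigma, size=x.shape))
        counts[label] = counts.get(label, 0) + 1
    top_label, top_count = max(counts.items(), key=lambda kv: kv[1])
    p_a = min(top_count / n_samples, 1.0 - 1e-6)  # keep inv_cdf in (0, 1)
    if p_a <= 0.5:
        return top_label, 0.0  # no majority: abstain from certifying
    radius = sigma * NormalDist().inv_cdf(p_a)
    return top_label, radius
```

The appeal of this family of defenses is that the radius is a proof, not an empirical observation: no L2 perturbation smaller than the returned radius can flip the smoothed prediction, regardless of the attack.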
Supply Chain AI Resilience: Modeling and Prediction
Goldstein, S., Patel, A., Chen, M., Hoffmann, C.
IEEE S&P 2023 (IEEE Symposium on Security and Privacy)
Develops AI models for predicting supply chain vulnerabilities and planning resilience strategies. Achieves 90% accuracy in predicting component failures and reduces procurement risk by 60% for defense organizations.
Trust Networks in Defense: Building Interoperability Across Allies
Hoffmann, C., Mitchell, J., Al-Rashid, H., Chen, M.
International Security Review Quarterly
Proposes trust verification methodologies and information-sharing protocols for allied defense organizations. Framework maintains compartmentalization while enabling 95% data utility. Used by NATO's Cyber Defense Center.