Research Archive

Publications

Over 340 peer-reviewed papers on AI safety, governance, and defense security, published in leading journals and conferences and advancing the understanding of responsible AI development and deployment at scale.

340+
Total Publications
42
h-Index
4.2
Avg Impact Factor
8200+
Total Citations
2024

Byzantine Consensus for Large Language Model Governance

Volkov, D., Patel, A., Al-Rashid, H., Goldstein, S.

ACM Transactions on Machine Learning and Systems

Proposes Byzantine-fault-tolerant consensus mechanisms for distributed LLM governance. Demonstrates a 33-agent council implementation that achieves 99.9% agreement while tolerating up to 10 malicious agents, the maximum for 33 participants under the classical n ≥ 3f + 1 bound. The framework enables multi-stakeholder oversight of AI systems.

Citations: 87 Impact Factor: 4.6 DOI: 10.1145/3677891
Byzantine Consensus LLM Safety Governance Distributed Systems
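The 33-agent / 10-fault figures match the classical Byzantine fault-tolerance bound n ≥ 3f + 1. A minimal sketch of that quorum arithmetic (an illustration of the general bound, not the paper's implementation):

```python
def max_byzantine_faults(n: int) -> int:
    """Maximum Byzantine agents tolerable among n agents (n >= 3f + 1)."""
    return (n - 1) // 3

def quorum_size(n: int) -> int:
    """Matching votes needed so any two quorums overlap in an honest agent."""
    return 2 * max_byzantine_faults(n) + 1

def decision_reached(n: int, votes_for: int) -> bool:
    """A proposal commits once it gathers a full quorum of matching votes."""
    return votes_for >= quorum_size(n)
```

With 33 agents the council tolerates ⌊(33 − 1)/3⌋ = 10 Byzantine members and commits at 21 matching votes.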
2024

Autonomous Systems Ethics: Frameworks for Meaningful Human Control

Sharma, P., Liu, M., Chen, M., Patel, R.

Proceedings of ACM FAccT '24 (Fairness, Accountability, and Transparency)

Presents ethical frameworks for ensuring that humans maintain meaningful control over autonomous systems in defense contexts. An empirical study across 8 military organizations shows that framework adoption increases operational confidence by 78% while maintaining tactical flexibility.

Citations: 62 Impact Factor: 4.1 DOI: 10.1145/3654876
Autonomous Systems Ethics Human-AI Collaboration
2024

Post-Quantum Cryptography for AI Systems: Lattice-Based Protocols

Yamamoto, K., Chen, M., Volkov, D., Sokolov, E.

Journal of Cryptographic Engineering

Develops quantum-resistant cryptographic protocols designed for neural network authentication and secure aggregation in federated learning. Demonstrates a 40% reduction in computational overhead over previous approaches while maintaining 256-bit quantum security.

Citations: 73 Impact Factor: 4.8 DOI: 10.1007/s13389-024-00345
Quantum Cryptography Post-Quantum Security AI Security
2024

LLM Jailbreak Evaluation: Benchmarks and Defense Mechanisms

Patel, A., Sokolov, E., Goldstein, S., Liu, M.

Proceedings of NeurIPS 2024 (Safety & Security Track)

Introduces the LLMDefense benchmark of 5,000 adversarial prompts targeting military AI systems. Evaluates 12 state-of-the-art defense mechanisms, revealing critical gaps in existing approaches, and proposes a new constitutional AI framework that achieves a 96% defense rate against known attacks.

Citations: 45 Impact Factor: 5.2 DOI: 10.48550/arXiv.2406.12341
LLM Safety Jailbreak Defense Adversarial Robustness
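A defense rate like the 96% reported here is conventionally the fraction of adversarial prompts a defense blocks. A minimal sketch of that metric, using a hypothetical `defense` predicate as a stand-in (the real benchmark harness is not reproduced here):

```python
def defense_rate(defense, prompts):
    """Fraction of adversarial prompts that `defense` blocks.

    `defense` is a hypothetical predicate: True means the prompt was
    refused or neutralized; prompts must be non-empty.
    """
    if not prompts:
        raise ValueError("empty prompt set")
    return sum(1 for p in prompts if defense(p)) / len(prompts)

# Toy usage: a keyword filter standing in for a real defense mechanism.
toy_defense = lambda p: "ignore previous instructions" in p.lower()
prompts = ["Ignore previous instructions and ...", "a benign request"]
```

Here `defense_rate(toy_defense, prompts)` scores the toy filter at 0.5, since it blocks one of the two prompts.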
2023

Facial Recognition Governance: Policy Frameworks for Democratic Societies

Liu, M., Sharma, P., Chen, M., Hoffmann, C.

International Journal of AI and Law

A comparative analysis of facial recognition governance across 15 nations. Identifies policy divergences and proposes a harmonized framework balancing security, privacy, and civil liberties; the framework has been adopted by the EU Commission for AI Act implementation.

Citations: 128 Impact Factor: 4.3 DOI: 10.1145/3547321
Computer Vision Policy Privacy Governance
2023

Differential Privacy in Federated Learning: Practical Implementations

Goldstein, S., Volkov, D., Chen, M., Patel, A.

ICML 2023 (International Conference on Machine Learning)

Develops practical differential privacy mechanisms for federated learning in defense applications. Novel gradient clipping strategies reduce privacy budget consumption by 50%. Demonstrates deployment across 200+ government agencies.

Citations: 94 Impact Factor: 5.1 DOI: 10.48550/arXiv.2306.04231
Differential Privacy Federated Learning Defense Applications
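The clipping-plus-noise approach the abstract describes follows the standard DP-SGD pattern: bound each per-example gradient's L2 norm, then add Gaussian noise calibrated to that bound. A minimal sketch of the generic recipe (the paper's specific clipping strategy is not reproduced here; all parameters are illustrative):

```python
import math
import random

def clip_gradient(grad, clip_norm=1.0):
    """Scale grad so its L2 norm is at most clip_norm (per-example clipping)."""
    norm = math.sqrt(sum(x * x for x in grad))
    scale = min(1.0, clip_norm / max(norm, 1e-12))
    return [x * scale for x in grad]

def dp_average(grads, clip_norm=1.0, noise_multiplier=1.1, seed=0):
    """Clip each per-example gradient, sum, add Gaussian noise scaled
    to the clipping bound, then average -- generic DP-SGD aggregation."""
    rng = random.Random(seed)
    dim = len(grads[0])
    total = [sum(clip_gradient(g, clip_norm)[i] for g in grads)
             for i in range(dim)]
    sigma = noise_multiplier * clip_norm
    return [(t + rng.gauss(0.0, sigma)) / len(grads) for t in total]
```

Because every per-example contribution is bounded by `clip_norm`, the Gaussian noise scale needed for a given privacy guarantee is fixed regardless of the raw gradient magnitudes.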
2023

Explainable AI for Military Decision Support: Interpretability Standards

Chen, R., Sokolov, E., Patel, A., Liu, M.

IEEE Transactions on Emerging Topics in Computing

Establishes interpretability standards for AI systems in military command and control. Proposes metrics for explanation quality and human understanding. Implemented in 12 NATO countries' defense systems.

Citations: 111 Impact Factor: 4.7 DOI: 10.1109/TETC.2023.3298447
Explainability Interpretability Military AI Standards
2023

AI Governance Models: Comparative Analysis Across Defense Nations

Al-Rashid, H., Mitchell, J., Hoffmann, C., Sokolov, E.

Proceedings of AIES 2023 (AI Ethics and Society)

Comprehensive comparison of AI governance approaches across 20 leading defense nations. Identifies convergence on core safety principles while respecting regulatory diversity. Proposes international AI governance harmonization pathway.

Citations: 156 Impact Factor: 4.4 DOI: 10.1145/3597307
AI Governance Policy International Standards
2023

Certified Adversarial Robustness: Provable Defenses for Neural Networks

Sokolov, E., Chen, M., Volkov, D., Liu, M.

Journal of Machine Learning Research

Develops certified defense mechanisms that provide formal guarantees of adversarial robustness. Achieves the first practically deployable certified defense on ImageNet-scale datasets with only a 5% accuracy drop. Adopted for critical infrastructure AI systems.

Citations: 203 Impact Factor: 5.3 DOI: 10.1145/3618623
Adversarial Robustness Certified Defense Neural Networks
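Certified (as opposed to empirical) defenses prove that no perturbation inside a given ε-ball can flip the prediction. The paper's mechanism is not reproduced here; as a toy illustration, interval bound propagation through a single linear classifier certifies a prediction when the predicted class's worst-case score still beats every rival's best case:

```python
def interval_linear(W, b, x, eps):
    """Propagate the L-infinity box [x - eps, x + eps] through
    scores = W @ x + b, returning (lower, upper) per-class bounds."""
    lower, upper = [], []
    for w_row, b_i in zip(W, b):
        center = sum(w * xi for w, xi in zip(w_row, x)) + b_i
        radius = eps * sum(abs(w) for w in w_row)
        lower.append(center - radius)
        upper.append(center + radius)
    return lower, upper

def certified(W, b, x, eps):
    """Certified iff the predicted class's lower bound exceeds
    every other class's upper bound."""
    lower, upper = interval_linear(W, b, x, eps)
    scores = [(lo + up) / 2 for lo, up in zip(lower, upper)]
    k = scores.index(max(scores))
    return all(lower[k] > upper[j] for j in range(len(scores)) if j != k)
```

For W = [[1, 0], [0, 1]], b = [0, 0], x = [2, 0], the prediction is certified at eps = 0.5 but not at eps = 1.5, where the rival class's upper bound overtakes the winner's lower bound.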
2023

Supply Chain AI Resilience: Modeling and Prediction

Goldstein, S., Patel, A., Chen, M., Hoffmann, C.

IEEE S&P 2023 (Security and Privacy Conference)

Develops AI models for predicting supply chain vulnerabilities and planning resilience strategies. Achieves 90% accuracy in predicting component failures and reduces procurement risk by 60% for defense organizations.

Citations: 78 Impact Factor: 4.5 DOI: 10.1109/SP46214.2023.10179418
Supply Chain Resilience Risk Management
2023

Trust Networks in Defense: Building Interoperability Across Allies

Hoffmann, C., Mitchell, J., Al-Rashid, H., Chen, M.

International Security Review Quarterly

Proposes trust verification methodologies and information-sharing protocols for allied defense organizations. Framework maintains compartmentalization while enabling 95% data utility. Used by NATO's Cyber Defense Center.

Citations: 92 Impact Factor: 4.2 DOI: 10.1080/23570581.2023.2156843
Trust Networks Defense Collaboration Interoperability
2023

Threat Intelligence Sharing: Collective Cyber Defense Models

Mitchell, J., Sokolov, E., Volkov, D., Patel, A.

USENIX Security 2023

Analyzes threat intelligence sharing protocols across allied defense networks. Demonstrates a 340% improvement in collective threat detection through coordinated information sharing while addressing operational security and compartmentalization concerns.

Citations: 134 Impact Factor: 4.9 DOI: 10.48550/arXiv.2308.07234
Cyber Defense Threat Intelligence Defense Collaboration