Research on AI Security Strategies and Practical Approaches for Risk Management
DOI: https://doi.org/10.71222/17gqja14

Keywords: AI security, risk management, adversarial machine learning, AI governance, data poisoning, trustworthy AI

Abstract
The rapid integration of Artificial Intelligence (AI) into critical infrastructures has exposed significant security vulnerabilities that traditional cybersecurity paradigms fail to address. Unlike deterministic IT systems, AI models are stochastic and data-dependent, making them susceptible to unique threats such as data poisoning, adversarial evasion, and model inversion. This dissertation investigates the comprehensive landscape of AI security, aiming to bridge the gap between isolated technical defenses and holistic organizational risk management. Through a systematic review and taxonomic analysis, this research categorizes AI risks across three distinct layers: data integrity, model robustness, and system deployment. The study evaluates current defense strategies, demonstrating that technical measures like adversarial training and differential privacy are necessary but insufficient when applied in isolation. Consequently, a practical AI Risk Management Framework is proposed, structured around a continuous lifecycle of mapping, measuring, managing, and monitoring risks. The findings suggest that effective AI security requires a "Defense-in-Depth" strategy that integrates robust MLOps infrastructure with rigorous governance policies. The dissertation concludes that shifting from static security controls to dynamic, lifecycle-based assurance is essential for deploying trustworthy AI systems in high-stakes environments.
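One of the technical defenses named above, adversarial training, can be illustrated with a minimal sketch. The example below is not from the dissertation: it uses a toy logistic-regression model and the Fast Gradient Sign Method (FGSM) to generate evasion examples, then trains on a mix of clean and perturbed inputs; all function names (`fgsm_perturb`, `loss_grad_x`) are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_x(w, b, x, y):
    """Gradient of the binary cross-entropy loss w.r.t. the input x."""
    p = sigmoid(np.dot(w, x) + b)
    return (p - y) * w

def fgsm_perturb(w, b, x, y, eps=0.1):
    """FGSM evasion: nudge x in the direction that increases the loss."""
    return x + eps * np.sign(loss_grad_x(w, b, x, y))

# Toy data: two features, label determined by the sign of the first.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(float)

# Adversarial training: each SGD step sees the clean input and its
# FGSM-perturbed counterpart, hardening the decision boundary.
w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(300):
    for xi, yi in zip(X, y):
        for x_in in (xi, fgsm_perturb(w, b, xi, yi)):
            p = sigmoid(np.dot(w, x_in) + b)
            w -= lr * (p - yi) * x_in
            b -= lr * (p - yi)

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"clean accuracy after adversarial training: {acc:.2f}")
```

As the abstract argues, such a model-level defense addresses only the robustness layer; it does nothing about poisoned training data or insecure deployment, which is why the dissertation pairs it with lifecycle governance.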
License
Copyright (c) 2025 Chong Lam Cheong (Author)

This work is licensed under a Creative Commons Attribution 4.0 International License.