
A Four-Layer Security Governance Framework for LLM-Based AI Agents


DOI: 10.23977/jaip.2025.080406

Author(s)

Yiang Gao 1, Shanshan Wu 2

Affiliation(s)

1 China Telecom Research Institute, Shanghai, 201315, China
2 China Telecom, Beijing, 100033, China

Corresponding Author

Yiang Gao

ABSTRACT

As artificial intelligence advances from "dialogue intelligence" to "decision intelligence," AI agents built on Large Language Models (LLMs) are becoming a crucial force driving transformation across industries. However, their autonomous capabilities in perception, decision-making, memory, and execution introduce systemic security risks that extend well beyond traditional LLM vulnerabilities. This paper presents a four-layer security governance framework covering the full Perception–Decision–Memory–Execution lifecycle to mitigate risks such as multi-source perception failures, decision hallucination, memory poisoning, and malicious execution. By systematically mapping each lifecycle phase to concrete security requirements and controls, the framework offers theoretically grounded and practically applicable guidance for the trustworthy and secure development of AI agents.
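
To make the lifecycle-to-control mapping concrete, the sketch below shows one way the four layers could be wired into a minimal agent loop. It is an illustration only, assuming a simple single-tool agent: every name in it (perceive, decide, execute, Memory, AgentAction, ALLOWED_TOOLS) is hypothetical and not an API from the paper.

```python
# A minimal sketch of the four-layer governance idea, assuming a simple
# agent loop. All names here are illustrative, not the authors' API.

from dataclasses import dataclass, field

@dataclass
class AgentAction:
    tool: str
    args: dict

ALLOWED_TOOLS = {"search", "calculator"}            # execution-layer allow-list
INJECTION_MARKERS = ("ignore previous", "system:")  # naive perception-layer screen

def perceive(raw_input: str) -> str:
    """Perception layer: screen multi-source input for obvious injection."""
    lowered = raw_input.lower()
    if any(marker in lowered for marker in INJECTION_MARKERS):
        raise ValueError("blocked: suspected prompt injection")
    return raw_input

def decide(observation: str) -> AgentAction:
    """Decision layer: stand-in for the LLM call; a real system would also
    ground-check the model's output against the observation here."""
    return AgentAction(tool="search", args={"query": observation})

@dataclass
class Memory:
    """Memory layer: writes carry provenance so entries from untrusted
    sources can be quarantined rather than silently poisoning recall."""
    entries: list = field(default_factory=list)

    def write(self, content: str, source: str, trusted: bool) -> None:
        if not trusted:
            content = f"[UNVERIFIED from {source}] {content}"
        self.entries.append(content)

def execute(action: AgentAction) -> str:
    """Execution layer: enforce the tool allow-list before any side effect."""
    if action.tool not in ALLOWED_TOOLS:
        raise PermissionError(f"blocked: tool '{action.tool}' not allowed")
    return f"ran {action.tool} with {action.args}"

if __name__ == "__main__":
    memory = Memory()
    obs = perceive("What is the capital of France?")
    action = decide(obs)
    result = execute(action)
    memory.write(result, source="search", trusted=True)
    print(memory.entries)
```

Each function enforces one control from the framework's phase-to-control mapping: input screening at perception, output validation at decision, provenance tagging at memory, and allow-listing at execution. A production system would replace these naive checks with the paper's full requirement set for each layer.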

KEYWORDS

AI agents; Security governance; Prompt injection; Memory poisoning; Autonomous agents; LLM safety; Tool security

CITE THIS PAPER

Yiang Gao, Shanshan Wu. A Four-Layer Security Governance Framework for LLM-Based AI Agents. Journal of Artificial Intelligence Practice (2025) Vol. 8: 49-55. DOI: http://dx.doi.org/10.23977/jaip.2025.080406.

