AI is no longer a passive autocomplete. “Agentic” systems can set sub‑goals, chain tools, call APIs, browse, write and run code, and remember context. That autonomy unlocks outsized productivity and a brand new, high‑velocity attack surface around instructions, tools, and data flows. Traditional cloud controls (CSPM/DLP/firewalls) don’t see or stop many of these behaviors. The new security story blends agent guardrails, least‑privilege tooling, isolation, data‑centric posture, continuous evals, and confidential computing, all governed under emerging frameworks and regulations.

From Generative to Agentic: What Changed?

Agentic AI = goal‑driven systems that plan, use tools and memory, and coordinate steps (often across multiple agents) to achieve outcomes, not just produce text. Recent surveys and industry analyses highlight agent architectures (single/multi‑agent), planning/execution loops, and tool‑calling patterns that turn models into proactive collaborators.
That shift moves risk from “what did the model say?” to “what did the model ‘do’ with my credentials, APIs, and data?”

The New Attack Surface (and Why Cloud Makes It Spiky)

  1. Prompt Injection (direct & indirect) – Adversaries hide instructions in user input or in documents/web pages an agent reads, steering it to leak secrets, exfiltrate data, or execute unintended actions via connected tools. OWASP now treats prompt injection as the top LLM risk, detailing direct, indirect, and obfuscated variants.
  2. Tool / Function Misuse – Once an agent has tool access (file systems, email, SaaS, cloud APIs), a single coerced step (e.g., “email me the last 100 S3 object names”) becomes a data loss event. Major vendors have published guidance on indirect prompt injection in enterprise workflows.
  3. LLM‑Native Worms & Multi‑Agent “Prompt Infection” – In agent swarms, malicious instructions can hop between agents and self‑replicate, turning orchestration into an attack vector. Research documents LLM‑to‑LLM propagation patterns in multi‑agent systems.
  4. Supply‑Chain Risks in Model & Tooling Ecosystems – Model poisoning and malicious plugins/connectors threaten downstream users; MITRE ATLAS catalogs real attack patterns on AI‑enabled systems (including LLM cases).
  5. RAG Grounding & Hallucination Risks – When retrieval feeds untrusted or outdated content, agents can confidently act on falsehoods. Cloud providers emphasize multi‑layered safety—including grounding checks and DLP—to mitigate leakage or policy violations.

Why cloud amplifies it: Serverless glue, vector DBs, shared secrets, broad IAM roles, and egress pathways make agent mistakes scalable. Many network‑centric controls don’t understand “prompts,” “tool calls,” or “grounding corpora,” so they miss the instruction‑layer threats entirely. Leading security voices and OWASP explicitly call out this gap.

Governance Pressure Is Real (and Near‑Term)