AI Agents Configuration Risk
Note:
This feature is only available with Mend AI Core or Mend AI Premium.
Contact your Customer Success Manager at Mend.io for more details.This feature is currently an open beta.
Overview
AI agents are increasingly defined through version-controlled configuration files that specify prompts, tool access, permissions, workflows, and integrations. While often treated as simple configuration, these files define the AI system’s behavior and attack surface.
Misconfigured agent configurations can enable:
Command execution
Credential exposure
Data exfiltration
Permission escalation
Policy bypass
Prompt injection
Mend AI extends static security analysis to AI agent configuration files — treating them as code and enforcing security controls before they reach production.
The capability provides:
Discovery of agent configuration files
Static risk analysis
Severity classification
Actionable mitigation guidance
Agent configuration scanning supports development-time assistants (such as Cursor, Claude Code, and Windsurf), declarative runtime agents (such as OpenClaw), and other configuration-driven agent frameworks.
This enables organizations to secure the AI control plane with the same discipline applied to application code and Infrastructure as Code.
Prerequisites
A relevant Mend AI entitlement for your organization.
Your organization’s consent to using AI features (via an addendum in your Mend.io contract).
The Agent Configuration Table
When in the context of an application/project, click the Agent Configuration button on the left-pane menu:
This will take you to the Agent Configuration table, listing information about the configuration risk and the aggregated findings associated with it.

Click anywhere on a configuration risk row to spawn its side-panel, listing information about the findings associated with the configuration risk, including their IDs, Severity, Description, etc.

Supported Agent Configuration Files
The following agent configuration formats are currently supported:
Agent / Platform | Supported Configuration Files |
|---|---|
Cursor |
|
Claude Code |
|
GitHub Copilot |
|
OpenAI Codex CLI |
|
Windsurf |
|
Aider |
|
Continue.dev |
|
OpenClaw |
|
Generic Agent Definitions |
|
Additional Tables and Widgets
The AI Agent Configuration Findings are also listed under the Agent Configurations Security Findings column in the Applications and Projects views. If the column is not visible, make sure to add it via the Columns menu on the right.

Relevant information is also available via the AI Security Dashboard in the form of the Vulnerable Agent Configurations widget, which displays the number of vulnerable agent configurations out of the total number of agent configurations.
Clicking the main number will take you to the Applications view.
Risk Coverage
Agent configuration files are evaluated against a set of AI-specific security controls.
Risk | Category | Severity | What It Detects |
|---|---|---|---|
MAI-AC-01 | Prompt Injection | Critical | "Ignore previous instructions", role hijacking, system tags |
MAI-AC-02 | Command Execution | Critical | curl|bash, pip install, sudo, eval/exec instructions |
MAI-AC-03 | File Exfiltration | Critical | Instructions to read .env, .ssh, credentials, keychains |
MAI-AC-04 | Credential Access | Critical | Instructions to extract/echo API keys, tokens, passwords |
MAI-AC-05 | Network Exfiltration | Critical | webhook.site, ngrok, data upload instructions |
MAI-AC-06 | Permission Escalation | High | Auto-approve, skip review, wildcard permissions |
MAI-AC-07 | Persistence | High | Cron, shell profiles, git hooks, startup scripts |
MAI-AC-08 | Dangerous MCP Config | High | Remote packages, hardcoded secrets, wildcard tools |
MAI-AC-09 | Obfuscated Content | High | Base64 payloads, zero-width chars, unicode tricks |
MAI-AC-10 | Approval Bypass | High | Social engineering to click yes, skip confirmation |
Each finding includes:
Risk category
Severity
Affected file
Code snippet (where applicable)
Mitigation guidance