Mend AI
Note: Mend AI is available as part of the Mend AppSec Platform.
Some features require a Mend AI Premium entitlement for your organization.
Please contact your Customer Success Manager at Mend.io to learn about enabling Mend AI.
Overview
One of the main challenges of securing AI-powered applications is that each application interacts with AI differently; an AI component could create a vulnerability in one application but not in another, further complicating the process of securing applications using AI.
At Mend.io, we prioritize securing AI components, leveraging existing risk management strategies, processes, and tooling to uncover the unique risks of AI.
Mend.io provides a single, comprehensive platform for securing an organization's entire codebase, including the AI components within it. We believe in integrating AI security seamlessly into existing workflows for maximum efficiency and minimal disruption.
Mend AI (included with the Mend AppSec Platform)
Mend AI detects models and frameworks used in applications, providing full visibility into the AI components used in your code. With Mend AI you gain:
AI Component Inventory Management and AI-BoMs
Discover and manage all AI models and frameworks used in your applications with a comprehensive and continuously updated inventory. This includes what is referred to as “Shadow AI” - previously unknown and possibly unauthorized components that are very hard to detect.
Mend AI Premium (add-on to Mend AppSec Platform)
Mend AI Premium expands on the capabilities available with Mend AI in the Mend AppSec Platform to further secure AI-powered applications. With Mend AI Premium, organizations gain actionable insights into their AI model and framework inventory and the risks those components introduce - both shared risks that come just from the presence of the component AND previously hard-to-detect behavioral risks that are unique to the application. With Mend AI Premium, you gain:
AI Component Risk Insights
Gain actionable insights on known risks tied to AI models, including licensing, public security vulnerabilities, and malicious packages.
AI Behavioral Risks (Red-teaming)
Identify risks unique to your AI-powered application, your data, and your concerns by using prebuilt, customizable tests to verify your application’s security against threats like prompt injection, context leakage, and data exfiltration.
Proactive Policies and Governance
Govern AI components throughout the software development lifecycle with Mend.io’s robust policy engine and powerful automation workflows.
Getting it done
Prerequisites
Your Mend organization has a Mend AI / Mend AI Premium entitlement.
Your Mend organization has access to the Mend AppSec Platform to view the results.
Mend CLI is installed with the latest version (v25.2.1 or newer) OR an active Mend for GitHub.com repository integration.
Mend AI’s discovery is at the code-level, not the artifact-level (contradictory to earlier iterations of this offering).
Scan your AI Models with the Mend CLI
To run a scan and generate results using the new AI BoM report capability, follow the steps outlined in our Scan your open source components (SCA) with the Mend CLI documentation for initiating a scan. Once the scan is completed, you can view the AI BoM report in the Mend Platform to analyze the results.
Scan your AI Models with the Mend for GitHub.com Repository Integration
Mend AI results will be available automatically upon scanning your repositories with the integration.
View the results in the Mend Platform
To view your AI BoM (Bill of Materials) report screen, please refer to our View your AI Bill of Materials (AI BoM) Report documentation.
Supported Large Language Models (LLM) Source Repositories
Mend AI currently provides support for AI models sourced from Kaggle (for model artifact scanning) and Hugging Face. We are actively working to expand our repository coverage to include more hosts soon.