
Risk Factors in Mend AI

Overview

Risk Factors in Mend AI help you prioritize findings by highlighting models in your inventory that may warrant a more immediate call to action than other models.

They are listed in multiple locations across the Mend AI Native AppSec Platform user interface:

  1. The Risk Factors column in the AI Models table:

     image-20250317-112323.png

  2. The model’s side-panel:

     image-20251010-132142.png

Mend AI Risk Factors

  • No Findings – No known vulnerabilities detected.

  • False-Positive – Safe, i.e., reported by Hugging Face as unsafe but refuted by the Mend AI Research team.

    image-20250320-193056.png
  • Confirmed Unsafe – Unsafe, i.e., reproduced and verified by the Mend AI Research team.

    image-20260114-003405.png
  • Unconfirmed Unsafe – Suspected Unsafe, i.e., tagged by Hugging Face as unsafe, but not reviewed by the Mend AI Research team.

    image-20260114-003244.png
  • Conversational System Prompt – The model’s system prompt is classified as a conversational AI interface. Such interfaces tend to have higher risk exposure due to their open-ended, loosely constrained nature.

    image-20260114-003134.png
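
These labels can also drive triage outside the UI, in your own tooling. The Python sketch below is purely illustrative: the risk-factor labels mirror this page, but the input format, field names, and priority ordering are assumptions, not part of Mend’s API.

    # Minimal sketch: rank AI models by their worst Mend AI risk factor.
    # Labels mirror this page; the input format and priority values are
    # illustrative assumptions, not part of Mend's API.

    RISK_PRIORITY = {
        "Confirmed Unsafe": 4,             # reproduced and verified
        "Unconfirmed Unsafe": 3,           # tagged by Hugging Face, not yet reviewed
        "Conversational System Prompt": 2,
        "False-Positive": 1,
        "No Findings": 0,
    }

    def triage(models):
        """Sort models (dicts with 'name' and 'riskFactors') by descending risk."""
        def worst(model):
            factors = model.get("riskFactors", [])
            return max((RISK_PRIORITY.get(f, 0) for f in factors), default=0)
        return sorted(models, key=worst, reverse=True)

    # Example usage with hypothetical inventory data:
    inventory = [
        {"name": "org/chat-model", "riskFactors": ["Conversational System Prompt"]},
        {"name": "org/classifier", "riskFactors": ["No Findings"]},
        {"name": "org/embedding", "riskFactors": ["Confirmed Unsafe"]},
    ]
    for model in triage(inventory):
        print(model["name"], model["riskFactors"])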

AI Risk Factors in Automation Workflows

The Mend AI Risk Factors can be used in the Mend AppSec Platform’s Automation Workflows by selecting Security → AI Analysis as the Triggering Event.

In the example below, the event condition is the detection of a conversational interface (system prompt) or a confirmed unsafe Hugging Face model.

image-20260113-141714.png
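
Outside the workflow builder, the same condition can be expressed in code, for example when post-processing findings in your own scripts. The Python sketch below is purely illustrative and uses hypothetical field names; it is not Mend’s workflow configuration format.

    # Illustrative sketch of the event condition described above: trigger when
    # a model has a conversational system prompt OR is a confirmed unsafe
    # Hugging Face model. Field names are assumptions, not Mend's schema.

    def should_trigger(finding):
        factors = set(finding.get("riskFactors", []))
        return (
            "Conversational System Prompt" in factors
            or "Confirmed Unsafe" in factors
        )

    # Example usage with a hypothetical finding:
    finding = {"model": "org/example-model", "riskFactors": ["Confirmed Unsafe"]}
    if should_trigger(finding):
        print("Trigger automation workflow for", finding["model"])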
