Risks in AI Components

Are there Public CVEs for AI Models?

As of January 2025, there are no standardized Common Vulnerabilities and Exposures (CVEs) for AI models in the way they exist for traditional software and hardware systems. However, CVEs do exist for the software libraries, frameworks, and tools used to create, train, and deploy these models.

  1. CVEs have been issued for various machine learning libraries and frameworks, including TensorFlow, PyTorch, scikit-learn, and Keras.

  2. CVEs also exist for various tools in the Machine Learning (ML) ecosystem, such as Jupyter Notebooks, which are commonly used for data analysis and model development.

These CVEs can cover a range of issues, from memory corruption vulnerabilities that could lead to code execution, to information disclosure bugs and denial-of-service flaws. They play a crucial role in the security of the overall machine learning infrastructure, even if they don't directly address vulnerabilities in the models themselves.
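Published advisories for these libraries can also be looked up programmatically. The sketch below queries the public OSV.dev vulnerability database for a pinned (intentionally old) TensorFlow release; OSV is used here purely as an illustration and is separate from Mend's own data sources and tooling.

```python
# Minimal sketch: look up published advisories (including CVEs) for an ML
# library via the public OSV.dev API. Illustrative only; not Mend tooling.
import requests

def query_osv(package: str, version: str, ecosystem: str = "PyPI") -> list[dict]:
    """Return known vulnerabilities affecting `package` at `version`."""
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"version": version, "package": {"name": package, "ecosystem": ecosystem}},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("vulns", [])

# Example: advisories affecting an old TensorFlow release, with CVE aliases.
for vuln in query_osv("tensorflow", "2.4.0"):
    cves = [a for a in vuln.get("aliases", []) if a.startswith("CVE-")]
    print(vuln["id"], cves, vuln.get("summary", ""))
```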

Risk Identifiers in the Mend AppSec Platform

Generally speaking, a WS identifier is used for a vulnerability that does not yet have a CVE (often a zero-day), while an MSC identifier is used for a package identified as malicious. The table below contrasts the two:

| Aspect | WS | MSC |
| --- | --- | --- |
| Purpose | Identification of vulnerabilities | Identification of malicious software packages |
| Timing | Often used for zero-day vulnerabilities | Used after detection of malicious packages |
| CVE Status | May not have a CVE assigned yet | Usually not directly related to CVEs |
| Issuance Trigger | Discovery of an important vulnerability | Detection of malicious packages by (mostly) automated systems |
| Primary Source | Manual analysis and incident response | Automated classification (e.g., by Defender) |
| Target | Specific vulnerability in software | Malicious package or software |
| Duration | Temporary until a CVE is assigned | Persistent identifier for the malicious package |
| Update Frequency | May be updated as more info becomes available | Generally static once issued |

Because standardized CVEs are not currently available for ML models, Mend.io employs its own risk reporting system using the aforementioned “WS” and “MSC” identifiers. This enables us to effectively communicate potential risks and vulnerabilities specific to ML models to you, the customer.

  1. WSs: Highlight and communicate unintended weaknesses or vulnerabilities in ML models.

  2. MSCs: Track and report intentionally harmful elements or malicious aspects within ML models.

By implementing this dual identification system, we can provide a more comprehensive security reporting framework for ML models. This approach allows us to differentiate between deliberate threats (MSCs) and unintentional vulnerabilities (WSs), offering you a more nuanced understanding of potential risks associated with the ML models used in your organization.
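As an illustration of how this dual scheme might be consumed downstream, here is a minimal Python sketch that routes findings by identifier prefix. The `Finding` structure and the example identifier formats are assumptions made for illustration; they are not part of the Mend AppSec Platform API.

```python
# Hypothetical sketch: routing findings by identifier prefix.
# The Finding type and example IDs are illustrative, not a real Mend API.
from dataclasses import dataclass

@dataclass
class Finding:
    identifier: str   # assumed format, e.g. "WS-2024-0001" or "MSC-2024-0002"
    component: str    # affected model or package

def triage(finding: Finding) -> str:
    """Distinguish unintentional weaknesses (WS) from malicious components (MSC)."""
    if finding.identifier.startswith("MSC-"):
        return "malicious component: quarantine and replace"
    if finding.identifier.startswith("WS-"):
        return "weakness: assess impact and patch"
    return "unknown identifier scheme: escalate"

for f in [Finding("WS-2024-0001", "model-a"), Finding("MSC-2024-0002", "model-b")]:
    print(f.identifier, "->", triage(f))
```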

Supported Model Providers

Mend AI scans Kaggle and Hugging Face models for model weaknesses. The scanning covers the model formats commonly used on these platforms.

If the vulnerability is not in the ML model itself, but rather in an associated library or framework, it is treated like any other vulnerability for that programming language.

For example, a vulnerability in TensorFlow would be treated as a Python library vulnerability.

Malicious Component Detection in AI Models

Models are scanned in the Defender system for potentially malicious elements. If a model is flagged as potentially malicious during this scanning process, it is automatically placed in a review queue for our Researchers.

When Researchers examine a model in this queue, they conduct a thorough analysis to determine whether the alert is valid. If they confirm that the model indeed contains malicious components, they issue an MSC identifier for that specific model.
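To make the kind of signal such automated scanning relies on concrete, here is a small illustrative Python sketch that statically lists import- and call-related opcodes in a pickle-serialized model file (a common carrier for malicious payloads). This is an assumption-laden example, not the Defender system's actual logic, and, as in the workflow above, anything it flags would still need human review, since legitimate models also use these opcodes.

```python
# Illustrative sketch only: a crude static check for pickle-based model files.
# NOT Mend's Defender logic; it merely shows one signal an automated scanner
# can surface (imports and calls embedded in the pickle stream) for review.
import pickletools
import sys

SUSPECT_OPS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def inspect_pickle(path: str) -> None:
    with open(path, "rb") as f:
        data = f.read()
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name in SUSPECT_OPS:
            # GLOBAL args name the imported callable, e.g. "os system";
            # benign models trigger these too, so flags are review candidates.
            print(f"offset {pos}: {opcode.name} {arg or ''}".rstrip())

if __name__ == "__main__":
    inspect_pickle(sys.argv[1])  # e.g. python inspect_pickle.py model.pkl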
