Search Results (3 CVEs found)
**CVE-2026-31230** (Trusted-ai / Adversarial-robustness-toolbox, updated 2026-05-13, CVSS v3.1: 9.8 Critical)

The Adversarial Robustness Toolbox (ART) through 1.20.1 contains a command-line argument injection vulnerability in its Kubeflow component (`robustness_evaluation_fgsm_pytorch.py`). The script uses the unsafe `eval()` function to parse string values supplied via the `--clip_values` and `--input_shape` command-line arguments, allowing an attacker to inject arbitrary Python code that is executed when `eval()` is called. The vulnerability can be exploited remotely if an attacker can control these arguments (e.g., through pipeline configuration or automated scripts), leading to arbitrary code execution on the system running the ART evaluation.
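The unsafe pattern and a drop-in fix are easy to see side by side. The sketch below is illustrative rather than ART's actual code: the argument names come from the advisory, while the `argparse` scaffolding and defaults are assumptions. `ast.literal_eval` accepts only Python literals and rejects function calls, so an injection payload fails with `ValueError` instead of executing.

```python
import argparse
import ast

parser = argparse.ArgumentParser()
# Argument names from the advisory; default values are illustrative.
parser.add_argument("--clip_values", default="(0.0, 1.0)")
parser.add_argument("--input_shape", default="(1, 28, 28)")
args = parser.parse_args()

# Vulnerable pattern: eval() runs arbitrary expressions, so passing
#   --clip_values '__import__("os").system("id")'
# executes a shell command on the evaluation host.
clip_values = eval(args.clip_values)  # unsafe

# Safer: ast.literal_eval() parses only Python literals (tuples, numbers,
# strings, ...) and raises ValueError on anything else, including calls.
clip_values = ast.literal_eval(args.clip_values)
input_shape = ast.literal_eval(args.input_shape)
```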
**CVE-2026-31229** (Trusted-ai / Adversarial-robustness-toolbox, updated 2026-05-13, CVSS v3.1: 9.8 Critical)

The Adversarial Robustness Toolbox (ART) through 1.20.1 contains an insecure deserialization vulnerability (CWE-502) in its Kubeflow component's model-loading functionality. When loading model weights from a file (e.g., `model.pt`) during robustness evaluation, the code calls `torch.load()` without the security-restrictive `weights_only=True` parameter, which allows deserialization of arbitrary Python objects via the pickle module. An attacker can exploit this by uploading a maliciously crafted model file to an object storage location referenced by the pipeline, or by pointing the `model_id` parameter at such a file. When the pipeline loads the model, the malicious payload is executed, leading to remote code execution.
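A minimal sketch of the load-time difference, assuming a plain state-dict checkpoint (the file name and the commented-out model wiring are placeholders, not ART's pipeline code). The `weights_only` parameter exists since PyTorch 1.13 and became the default in 2.6; on older versions, passing it explicitly is the one-line hardening the advisory describes.

```python
import torch

# Vulnerable pattern: by default (before PyTorch 2.6), torch.load() performs
# full pickle deserialization, so a crafted model.pt can run arbitrary code
# via a malicious __reduce__ payload the moment the file is unpickled.
state_dict = torch.load("model.pt")  # unsafe on untrusted files

# Hardened: weights_only=True (PyTorch >= 1.13) restricts unpickling to
# tensors and a small allowlist of types, rejecting arbitrary objects.
state_dict = torch.load("model.pt", weights_only=True)

# The untrusted file then only supplies tensor data; the architecture is
# constructed locally, e.g.:
# model = MyModel()
# model.load_state_dict(state_dict)
```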
**CVE-2026-31228** (Trusted-ai / Adversarial-robustness-toolbox, updated 2026-05-13, CVSS v3.1: N/A)

The Adversarial Robustness Toolbox (ART) through 1.20.1 contains a remote code execution vulnerability in its Kubeflow component. The robustness evaluation function for PyTorch models uses the unsafe `eval()` function to dynamically evaluate user-supplied strings for the `LossFn` and `Optimizer` parameters without any sanitization or security restrictions. An attacker can exploit this by providing a specially crafted string containing arbitrary Python code, which is executed when `eval()` is called, leading to complete compromise of the system running the ART evaluation.
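A common remediation for this class of bug is to replace `eval()` with an explicit allowlist, so a user-supplied name can only select from known classes. The sketch below illustrates that technique under stated assumptions: the dictionary contents and the `build_loss` helper are hypothetical, not ART's API.

```python
import torch

# Vulnerable pattern (schematic): eval() on a user-supplied string such as
#   loss_fn = eval(loss_fn_str)()
# executes whatever Python the string contains before any loss is built.

# Allowlist remediation: only names present in these tables can ever be
# instantiated; everything else is rejected outright.
LOSS_FNS = {
    "CrossEntropyLoss": torch.nn.CrossEntropyLoss,
    "MSELoss": torch.nn.MSELoss,
}
OPTIMIZERS = {
    "Adam": torch.optim.Adam,
    "SGD": torch.optim.SGD,
}

def build_loss(name: str) -> torch.nn.Module:
    try:
        return LOSS_FNS[name]()
    except KeyError:
        raise ValueError(f"unsupported loss function: {name!r}") from None
```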