In this Methodology

This document outlines the comprehensive test and validation methodology used by NSS Labs to evaluate AI Protection Systems (AIPS) for their effectiveness in protecting enterprise AI environments. As artificial intelligence becomes increasingly embedded in business applications, workflows, and decision-support systems, organizations need independent validation of the security controls intended to reduce misuse, prevent policy violations, and protect sensitive data and connected systems. AI Protection Systems are designed to operate externally to the underlying AI model, application, or agent in order to apply guardrails, enforce policy, monitor interactions, and reduce the risk of malicious or unauthorized behavior.

This methodology evaluates AIPS products across the core areas that matter most in real enterprise deployments, including protection against prompt injection, prevention of harmful or unauthorized output, evasion techniques, resilience under stress and adverse conditions, policy and filter efficacy, security of agentic behavior and tool invocation, observability and auditability, and performance impact. Each test dimension is designed to represent realistic risks that enterprise customers may encounter when deploying AI systems connected to users, enterprise data, tools, APIs, and business processes. The goal is to provide enterprise buyers, security leaders, and product vendors with a clear, repeatable, and technically rigorous basis for measuring how effectively an AIPS performs under conditions that reflect real-world use and abuse scenarios.