Threat protection for AI services in Microsoft Defender for Cloud protects AI services on an Azure subscription by providing insights into threats that might affect your generative AI applications.
Prerequisites
Read the Overview - AI threat protection.
You need a Microsoft Azure subscription. If you don't have an Azure subscription, you can sign up for a free subscription.
Enable Defender for Cloud on your Azure subscription.
We recommend that you don't opt out of prompt-based triggered alerts for Azure OpenAI content filtering. Opting out removes that capability and can affect Defender for Cloud's ability to monitor and detect prompt-based attacks.
Enable threat protection for AI services
To enable threat protection for AI services:
Sign in to the Azure portal.
Search for and select Microsoft Defender for Cloud.
In the Defender for Cloud menu, select Environment settings.
Select the relevant Azure subscription.
On the Defender plans page, toggle the AI services plan to On.
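If you prefer automation over the portal, the same plan can be enabled with the Azure CLI. The following is a minimal sketch, not a definitive procedure: the plan name (AI) and the placeholder subscription ID are assumptions to verify against your environment and the current CLI reference before use.

```shell
# Sketch: enable the Defender for Cloud plan for AI services on a subscription.
# ASSUMPTIONS to verify: the pricing plan name "AI"; replace <subscription-id>.
az account set --subscription "<subscription-id>"

# Set the plan's pricing tier to Standard (i.e., turn the plan on).
az security pricing create --name AI --tier Standard

# Check the resulting plan state.
az security pricing show --name AI
```

Running the same command with `--tier Free` would turn the plan back off, which is useful when scripting the setting across multiple subscriptions.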
Enable user prompt evidence
With the AI services threat protection plan enabled, you can control whether alerts include suspicious segments taken directly from your users' prompts, or from the model responses of your AI applications or resources. Enabling user prompt evidence helps you triage and classify alerts and understand your users' intentions.
User prompt evidence consists of prompts and model responses. Both are considered your data. Evidence is available through the Azure portal, the Defender portal, and any attached partner integrations.
Sign in to the Azure portal.
Search for and select Microsoft Defender for Cloud.
In the Defender for Cloud menu, select Environment settings.
Select the relevant Azure subscription.
Locate AI services and select Settings.
Toggle Enable user prompt evidence to On.
Select Continue.
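The user prompt evidence setting can also be toggled programmatically through the Microsoft.Security pricings API, for example with `az rest`. This is a hedged sketch under stated assumptions: the extension name `AIPromptEvidence` and the `api-version` value are assumptions to check against the current REST API reference, and `<subscription-id>` is a placeholder.

```shell
# Sketch: enable the user prompt evidence extension on the AI services plan.
# ASSUMPTIONS to verify: extension name "AIPromptEvidence", api-version value;
# replace <subscription-id> with your own subscription ID.
az rest --method put \
  --url "https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.Security/pricings/AI?api-version=2024-01-01" \
  --body '{
    "properties": {
      "pricingTier": "Standard",
      "extensions": [
        { "name": "AIPromptEvidence", "isEnabled": "True" }
      ]
    }
  }'
```

Because the pricings API replaces the plan object on PUT, the body also restates the Standard tier so the plan itself stays enabled alongside the extension.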