Defender for Containers is designed differently for each Kubernetes environment, whether your clusters are running in:
Azure Kubernetes Service (AKS) - Microsoft's managed service for developing, deploying, and managing containerized applications.
Amazon Elastic Kubernetes Service (EKS) in a connected Amazon Web Services (AWS) account - Amazon's managed service for running Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes.
Google Kubernetes Engine (GKE) in a connected Google Cloud Platform (GCP) project - Google’s managed environment for deploying, managing, and scaling applications using GCP infrastructure.
An unmanaged Kubernetes distribution (using Azure Arc-enabled Kubernetes) - Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters hosted on-premises or on IaaS.
Note
Defender for Containers support for Arc-enabled Kubernetes clusters (AWS EKS and GCP GKE) is a preview feature.
To protect your Kubernetes containers, Defender for Containers receives and analyzes:
- Audit logs and security events from the API server
- Cluster configuration information from the control plane
- Workload configuration from Azure Policy
- Security signals and events from the node level
To learn more about implementation details such as supported operating systems, feature availability, and outbound proxy support, see Defender for Containers feature availability.
Architecture for each Kubernetes environment
Architecture diagram of Defender for Cloud and AKS clusters
When Defender for Cloud protects a cluster hosted in Azure Kubernetes Service, the collection of audit log data is agentless and happens automatically through Azure infrastructure, with no additional cost or configuration required. The following components are required to receive the full protection offered by Microsoft Defender for Containers:
- Defender sensor: A DaemonSet that is deployed on each node, collects signals from hosts using eBPF technology, and provides runtime protection. The sensor is registered with a Log Analytics workspace and is used as a data pipeline; however, the audit log data isn't stored in the Log Analytics workspace. The Defender sensor is deployed as an AKS Security profile.
Note
When the Defender sensor is configured on an AKS cluster, it triggers a reconciliation process. This happens as part of the Defender for Containers plan and is expected behavior.
- Azure Policy for Kubernetes: A pod that extends the open-source Gatekeeper v3 and registers as a webhook to Kubernetes admission control, making it possible to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. The Azure Policy for Kubernetes pod is deployed as an AKS add-on and is only installed on one node in the cluster. For more information, see Protect your Kubernetes workloads and Understand Azure Policy for Kubernetes clusters. A sketch of how to check this webhook registration from inside the cluster follows this list.
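The admission control registration described above can be confirmed from inside the cluster. The following is a minimal sketch using the official Python `kubernetes` client; the exact name of the Gatekeeper webhook configuration isn't documented here and depends on the add-on version, so the sketch matches on a "gatekeeper" substring as an assumption.

```python
# Minimal sketch: list validating admission webhook configurations and look
# for the Gatekeeper/Azure Policy registration. Assumes the `kubernetes`
# Python package is installed and kubeconfig points at the AKS cluster.
from kubernetes import client, config

config.load_kube_config()  # uses the current kubeconfig context

admission_api = client.AdmissionregistrationV1Api()
for cfg in admission_api.list_validating_webhook_configuration().items:
    # "gatekeeper" is an assumed substring; the exact configuration name
    # varies by Azure Policy add-on / Gatekeeper version.
    if "gatekeeper" in cfg.metadata.name.lower():
        print(cfg.metadata.name)
        for webhook in cfg.webhooks or []:
            print(f"  webhook: {webhook.name}")
```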
Defender sensor component details
| Pod Name | Namespace | Kind | Short Description | Capabilities | Resource limits | Egress Required |
|---|---|---|---|---|---|---|
| microsoft-defender-collector-ds-* | kube-system | DaemonSet | A set of containers that focus on collecting inventory and security events from the Kubernetes environment. | SYS_ADMIN, SYS_RESOURCE, SYS_PTRACE | memory: 296Mi, cpu: 360m | No |
| microsoft-defender-collector-misc-* | kube-system | Deployment | A set of containers that focus on collecting inventory and security events from the Kubernetes environment that aren't bound to a specific node. | N/A | memory: 64Mi, cpu: 60m | No |
| microsoft-defender-publisher-ds-* | kube-system | DaemonSet | Publishes the collected data to the Microsoft Defender for Containers backend service, where the data is processed and analyzed. | N/A | memory: 200Mi, cpu: 60m | HTTPS 443. Learn more about the outbound access prerequisites. |
\* Resource limits aren't configurable. Learn more about Kubernetes resource limits.
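To see these components on a protected cluster, you can list the workloads in the kube-system namespace. The following is a minimal sketch with the official Python `kubernetes` client; the microsoft-defender- name prefix comes from the table above, and kubeconfig access to the cluster is assumed.

```python
# Minimal sketch: list Defender sensor pods in kube-system together with the
# kind of workload that owns them. Assumes the `kubernetes` Python package
# and kubeconfig access to the cluster.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

for pod in core.list_namespaced_pod("kube-system").items:
    if pod.metadata.name.startswith("microsoft-defender-"):
        owners = pod.metadata.owner_references or []
        owner_kind = owners[0].kind if owners else "unknown"
        print(f"{pod.metadata.name} (owner: {owner_kind}) phase={pod.status.phase}")
```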
How does agentless discovery for Kubernetes in Azure work?
The discovery process is based on snapshots taken at intervals. When you enable the agentless discovery for Kubernetes extension, the following process occurs:
Create:
- If the extension is enabled from Defender CSPM, Defender for Cloud creates an identity in customer environments called `CloudPosture/securityOperator/DefenderCSPMSecurityOperator`.
- If the extension is enabled from Defender for Containers, Defender for Cloud creates an identity in customer environments called `CloudPosture/securityOperator/DefenderForContainersSecurityOperator`.
Assign: Defender for Cloud assigns a built-in role called Kubernetes Agentless Operator to that identity at the subscription scope. The role contains the following permissions (a read-only verification sketch follows this list):
- AKS read (Microsoft.ContainerService/managedClusters/read)
- AKS Trusted Access with the following permissions:
- Microsoft.ContainerService/managedClusters/trustedAccessRoleBindings/write
- Microsoft.ContainerService/managedClusters/trustedAccessRoleBindings/read
- Microsoft.ContainerService/managedClusters/trustedAccessRoleBindings/delete
Learn more about AKS Trusted Access.
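If you want to inspect these permissions yourself, the built-in role definition can be read at subscription scope. The following is a minimal sketch assuming the azure-identity and azure-mgmt-authorization Python packages; the subscription ID is a placeholder, and model attribute names can differ slightly across SDK versions.

```python
# Minimal sketch: read the built-in "Kubernetes Agentless Operator" role
# definition at subscription scope. Assumes azure-identity and
# azure-mgmt-authorization are installed and you're signed in (e.g. az login).
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

subscription_id = "<subscription-id>"  # placeholder
scope = f"/subscriptions/{subscription_id}"

auth_client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)
roles = auth_client.role_definitions.list(
    scope, filter="roleName eq 'Kubernetes Agentless Operator'"
)
for role in roles:
    print(role.role_name)
    for permission in role.permissions:
        for action in permission.actions:
            print(f"  {action}")
```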
Discover: Using the system-assigned identity, Defender for Cloud performs a discovery of the AKS clusters in your environment using API calls to the API server of AKS.
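For illustration, the same kind of discovery call can be made against the AKS management API. The following is a minimal sketch with the azure-mgmt-containerservice package; note that Defender for Cloud performs this step with its own system-assigned identity, while the sketch uses your signed-in credentials and a placeholder subscription ID.

```python
# Minimal sketch: enumerate the AKS clusters in a subscription, similar in
# spirit to the discovery step. Assumes azure-identity and
# azure-mgmt-containerservice are installed.
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient

subscription_id = "<subscription-id>"  # placeholder
aks_client = ContainerServiceClient(DefaultAzureCredential(), subscription_id)

for cluster in aks_client.managed_clusters.list():
    print(cluster.name, cluster.location)
```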
Bind: Upon discovery of an AKS cluster, Defender for Cloud performs an AKS bind operation by creating a `ClusterRoleBinding` between the created identity and the Kubernetes `ClusterRole` `aks:trustedaccessrole:defender-containers:microsoft-defender-operator`. The `ClusterRole` is visible via API and gives Defender for Cloud data plane read permission inside the cluster.
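Because the `ClusterRole` is visible through the Kubernetes API, you can inspect the binding from inside the cluster. The following is a minimal sketch with the Python `kubernetes` client; the `ClusterRole` name comes from the bind step above, while the name of the `ClusterRoleBinding` itself isn't documented here, so the sketch matches bindings by the role they reference.

```python
# Minimal sketch: find the ClusterRoleBinding that refers to the Trusted
# Access ClusterRole created by the bind operation. Assumes the `kubernetes`
# Python package and kubeconfig access to the cluster.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

role_name = "aks:trustedaccessrole:defender-containers:microsoft-defender-operator"

try:
    role = rbac.read_cluster_role(role_name)
    print(f"ClusterRole: {role.metadata.name}")
except ApiException:
    # The role only exists after agentless discovery has bound the cluster.
    print(f"ClusterRole {role_name} not found")

for binding in rbac.list_cluster_role_binding().items:
    if binding.role_ref.name == role_name:
        subjects = [s.name for s in (binding.subjects or [])]
        print(f"ClusterRoleBinding: {binding.metadata.name} -> subjects: {subjects}")
```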
Note
The copied snapshot remains in the same region as the cluster.
Next steps
In this overview, you learned about the architecture of container security in Microsoft Defender for Cloud. To enable the plan, see: