Recently, while testing out the latest version of the aks-preview Azure CLI extension, I stumbled across a new preview feature: Managed Namespaces in Azure Kubernetes Service (AKS). At the time of writing, this feature hasn’t been officially documented by Microsoft, but you can already try it for yourself using the latest CLI tools.
In this blog post, I’ll walk you through what Managed Namespaces are, what you can do with them, and how you can start using this feature today in your own AKS clusters.
What Are Managed Namespaces?
In standard Kubernetes, namespaces are a way to separate resources logically. You might use them to group applications, environments, or teams. However, a namespace on its own is little more than a label: it doesn’t enforce things like CPU limits, memory constraints, or network policies out of the box.
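To make that concrete, here’s the kind of LimitRange you’d normally have to write and apply by hand to get default requests and limits in a plain namespace (a minimal sketch; the namespace name and values are illustrative):

```bash
# Without managed namespaces, per-namespace defaults mean hand-writing a
# LimitRange like this for every namespace. Names and values are illustrative.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: my-namespace
spec:
  limits:
    - type: Container
      defaultRequest:
        cpu: 500m
        memory: 1Gi
      default:
        cpu: "1"
        memory: 2Gi
EOF
```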
Managed Namespaces in AKS aim to close that gap. With this new feature, you can define namespace-level policies such as:
- Default CPU and memory resource requests and limits
- Ingress and egress network policies
- Delete behaviour (whether the namespace should be cleaned up or preserved)
- How AKS handles existing namespaces (called an adoption policy)
- Metadata like tags, labels, and annotations
The goal is to provide stronger governance and consistency across your workloads, especially in multi-team or multi-tenant clusters.
Getting Started
Before you can use this feature, you’ll need the latest version of the aks-preview extension. If you haven’t got it already, or if you want to make sure you’re on the latest version, use the following command:
```bash
az extension add --name aks-preview --upgrade
```
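You can confirm the extension is installed, and see which version you’re on, with:

```bash
az extension show --name aks-preview --output table
```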
Unlike some other AKS preview features, there’s currently no need to register a feature flag to use Managed Namespaces.
Creating a Managed Namespace
Let’s start with a basic example. Suppose you want to create a new namespace called team-a with some CPU and memory limits.
Here’s how you’d do it:
```bash
az aks namespace add \
  --resource-group my-rg \
  --cluster-name my-aks \
  --name team-a \
  --cpu-request 500m \
  --cpu-limit 1 \
  --memory-request 1Gi \
  --memory-limit 2Gi
```

This creates a new namespace inside your AKS cluster and enforces the specified resource requests and limits for any pods created inside that namespace.
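If you’re curious what this actually creates inside the cluster, you can poke at it with kubectl. My assumption, since the feature is undocumented, is that the defaults surface as standard Kubernetes objects such as a LimitRange, so treat this as a way to verify rather than a guarantee:

```bash
# Check that the namespace exists and look for the object enforcing defaults.
# Assumption: CPU/memory defaults are applied via a standard LimitRange.
kubectl get namespace team-a --show-labels
kubectl get limitrange --namespace team-a --output yaml
```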
You can also define other properties at creation time. For example, if you want to specify network rules, tags, labels, and annotations, you can do so in a single command:
```bash
az aks namespace add \
  --resource-group my-rg \
  --cluster-name my-aks \
  --name team-a \
  --cpu-request 500m \
  --cpu-limit 1 \
  --memory-request 1Gi \
  --memory-limit 2Gi \
  --ingress-policy AllowSameNamespace \
  --egress-policy AllowAll \
  --delete-policy Keep \
  --adoption-policy Never \
  --tags team=platform \
  --labels env=dev \
  --annotations owner=devteam
```
That’s a lot of control from just one command.
Viewing and Managing Your Namespaces
Once your namespace is created, you might want to check its status or modify its configuration. AKS includes several commands to help with that.
Show Namespace Details
To view information about a specific managed namespace:
```bash
az aks namespace show \
  --resource-group my-rg \
  --cluster-name my-aks \
  --name team-a
```
This will return JSON with all the configured policies and metadata for the namespace.
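Because this is ordinary az CLI output, the global --query and --output flags work as you’d expect. For example, assuming the usual ARM resource shape with a top-level properties object, you can trim the output down to just the policy configuration:

```bash
# JMESPath query against the returned JSON; 'properties' assumes the
# standard ARM resource envelope (id/name/properties/...).
az aks namespace show \
  --resource-group my-rg \
  --cluster-name my-aks \
  --name team-a \
  --query properties \
  --output jsonc
```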
List All Managed Namespaces
To see all the managed namespaces in a given cluster:
```bash
az aks namespace list \
  --resource-group my-rg \
  --cluster-name my-aks
```
This is especially useful if you’re managing many environments or teams within a single cluster.
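For a quick overview, table output keeps things readable, and a JMESPath query can pull out just the namespace names for scripting:

```bash
# Compact, human-readable summary
az aks namespace list \
  --resource-group my-rg \
  --cluster-name my-aks \
  --output table

# Just the names, one per line
az aks namespace list \
  --resource-group my-rg \
  --cluster-name my-aks \
  --query "[].name" \
  --output tsv
```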
Update a Managed Namespace
If you need to change any of the settings after creation, say adjusting the memory limits or updating the labels, you can use the update command.
For example:
```bash
az aks namespace update \
  --resource-group my-rg \
  --cluster-name my-aks \
  --name team-a \
  --cpu-request 600m \
  --memory-request 2Gi \
  --labels env=prod
```
Delete a Managed Namespace
When you no longer need a namespace, or want to remove its configuration from AKS, you can delete it:
```bash
az aks namespace delete \
  --resource-group my-rg \
  --cluster-name my-aks \
  --name team-a
```
This will respect the `--delete-policy` you configured earlier.
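My reading of the Keep policy is that deleting the managed namespace removes it from AKS management while leaving the Kubernetes namespace itself in place, but since there’s no official documentation yet, it’s worth checking for yourself:

```bash
# After deleting with --delete-policy Keep, the in-cluster namespace should
# still exist. Behaviour inferred from the flag name; verify in a dev cluster.
kubectl get namespace team-a
```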
Scoped Access: Get Credentials for a Namespace
One really interesting feature is the ability to retrieve credentials scoped to a specific managed namespace. This can be handy if you want to give access to a team but restrict them to only their namespace.
You can generate a kubeconfig just for the managed namespace using:
```bash
az aks namespace get-credentials \
  --resource-group my-rg \
  --cluster-name my-aks \
  --name team-a \
  --file ~/.kube/team-a.config
```
This lets your team interact with their namespace using kubectl without touching the rest of the cluster.
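A team member can then point kubectl at that file, either per command or via the KUBECONFIG environment variable. I’d expect the generated context to default to the team-a namespace, but if it doesn’t, adding --namespace team-a does the trick:

```bash
# Use the scoped kubeconfig for a single command...
kubectl --kubeconfig ~/.kube/team-a.config get pods

# ...or export it for the whole session
export KUBECONFIG=~/.kube/team-a.config
kubectl get pods --namespace team-a
```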
All the Available Parameters
Here’s a quick overview of the key parameters available for az aks namespace add
and update
:
| Parameter | Description |
|---|---|
| `--cpu-request` | Set the default CPU request |
| `--cpu-limit` | Set the default CPU limit |
| `--memory-request` | Set the default memory request |
| `--memory-limit` | Set the default memory limit |
| `--ingress-policy` | Ingress control (`AllowAll`, `DenyAll`, `AllowSameNamespace`) |
| `--egress-policy` | Egress control (`AllowAll`, `DenyAll`) |
| `--delete-policy` | What to do when deleting the namespace (`Keep`, `Delete`) |
| `--adoption-policy` | How to handle existing namespaces (`Never`, `Sync`) |
| `--labels` | Add Kubernetes labels |
| `--annotations` | Add Kubernetes annotations |
| `--tags` | Add Azure resource tags |
| `--no-wait` | Run the command without waiting for it to complete |
You can combine any of these to suit your governance model.
Final Thoughts
Managed Namespaces are a welcome addition to AKS. They bring more structure and consistency to how namespaces behave across your clusters. Whether you’re trying to enforce team-level limits or isolate networking rules, this feature gives you a centralised way to do it, and you don’t need to rely on separate Kubernetes manifests or admission controllers.
It’s still early days, so things could change, and more features may get added before GA. For now, this is a great opportunity to get familiar with the tooling and think about how it could fit into your own AKS deployments.
If you try it out, let me know how it goes. As always, test new features in a dev cluster before rolling anything into production.