The Kubernetes Current Blog

Mastering Kubernetes Namespaces: Advanced Isolation, Resource Management, and Multi-Tenancy Strategies

Kubernetes namespaces let you separate logical groups of resources within a single Kubernetes cluster. They’re used to share clusters between different apps, and they provide platform teams with many benefits, including improved operating efficiency, less cluster sprawl, and reduced infrastructure spend—a particular advantage given that spinning up new clusters can quickly become expensive.

In addition to isolating tenanted resources, namespaces make it possible to enforce resource management, security, and governance requirements. In this article, we’ll explore some advanced techniques and best practices for using namespaces to manage multi-tenant Kubernetes deployments at scale.

Exploring Kubernetes Namespace Challenges and Pain Points

It’s easy to create Kubernetes namespaces:

$ kubectl create namespace my-team
namespace/my-team created

Now you have a new namespace called my-team that you can create objects within. However, this namespace isn’t yet ready for multi-tenant use: objects within it can still communicate with ones in other namespaces, and no resource utilization, security, or governance constraints have been applied.

To effectively utilize namespaces, you need to support them with granular policies, quotas, and access controls that allow you to enforce consistency and maintain compliance with your organizational standards. However, Kubernetes cluster administrators and architects can run into roadblocks when preparing to enable these vital constraints.
It’s often troublesome to implement namespace best practices at scale because it’s unclear which features are required or how to optimally configure them. Incorrect management can also create workflow hindrances that prevent team members from effectively utilizing the Kubernetes cluster.

Furthermore, plain namespaces don’t always produce a cluster layout that accurately matches the way your teams work. Kubernetes uses a flat namespace model where all namespaces are direct children of a single physical cluster root. But many teams require a hierarchical structure that allows sub-teams, working groups, and individuals to have their own nested namespaces. This can improve operational flexibility.

Understanding Complexities of Namespace Management

A basic namespace provides a thin separation layer. It scopes the names of resources it contains, allowing two objects with the same name to exist in the cluster, provided they belong to different namespaces. However, objects in unrelated namespaces can still freely communicate with each other, which is often undesirable when they’re owned by separate teams. All objects also continue to share access to the cluster’s resources, so high CPU usage by a Pod in one namespace could negatively affect Pods in other namespaces too.

Achieving proper multi-tenancy for your Kubernetes cluster requires more sophisticated namespace management. Several techniques can be used to implement a namespace system that enables robust tenant isolation and organization:

  • Hierarchical Namespaces — The optional Hierarchical Namespace Controller (HNC) makes it possible to nest namespaces, allowing you to precisely replicate your organization’s team structure within Kubernetes.
  • Namespace-level Resource Quotas — Resource Quotas prevent individual namespaces from consuming excessive cluster resources, ensuring stable performance for all your deployments.
  • Cross-namespace Network Policies — Network Policies provide crucial traffic management capabilities that can be used to prevent Pods from communicating across namespaces. This improves security by providing greater isolation for your tenanted resources.
  • Admission Controllers — Admission Controllers contain code that decides whether an object can be added to your cluster. You can use them to enforce security and compliance requirements for your namespaces.
  • CRDs and Operators — Custom Resource Definitions (CRDs) and Kubernetes Operators allow you to automate more namespace operations, such as dynamic on-demand namespace provisioning and centralized management.

Using these mechanisms lets you obtain the best results from Kubernetes in multi-tenant situations.

Implementing Advanced Kubernetes Namespace Management

Multiple namespaces let you efficiently utilize your Kubernetes cluster by sharing it between different users, teams, and apps. Let’s look at how some of the techniques discussed above implement robust namespace-level isolation to organize your workloads, without requiring you to create costly additional physical clusters.

Using Hierarchical Namespaces

Hierarchical namespaces aren’t natively supported by Kubernetes but they can be created using the Hierarchical Namespace Controller (HNC). This optional component was developed by the Kubernetes Working Group for Multi-Tenancy in response to user demands for more flexibility in how namespaces are arranged.

Using hierarchical namespaces allows you to model your entire organization graph inside Kubernetes. For example, you could create a top-level namespace for your development team, then add sub-namespaces inside it for specific team members, apps, or workflow states. Child namespaces inherit policies such as RBAC rules from their parents while providing a permission model that lets namespace members create sub-namespaces, even if they’re unable to add a new root level namespace to the physical cluster.

To get started with hierarchical namespaces, you need to first install the controller in your cluster:

$ kubectl apply -f https://github.com/kubernetes-sigs/hierarchical-namespaces/releases/download/v1.1.0/default.yaml

You can check the latest release version and learn more about the supported installation types in the documentation.

Next, install the Kubectl plugin—it’s easiest to use the Krew plugin manager:

$ kubectl krew install hns

Now you’re ready to use HNC to set up a namespace hierarchy. First, create your top-level namespace using the regular kubectl create namespace command:

$ kubectl create namespace dev-team
namespace/dev-team created

Next, use kubectl hns create to add a new namespace that’s a child of dev-team:

$ kubectl hns create frontend -n dev-team
Successfully created "frontend" subnamespace anchor in "dev-team" namespace

You can repeat this process to create more deeply nested namespaces:

$ kubectl hns create app-1 -n frontend
Successfully created "app-1" subnamespace anchor in "frontend" namespace

The kubectl hns tree command lets you visualize the namespace hierarchy you’ve created:

$ kubectl hns tree
dev-team
└── [s] frontend
    └── [s] app-1

[s] indicates subnamespaces

When you create a child namespace, HNC automatically propagates any RBAC Roles and RoleBindings that exist in its parent. This means any user with access to the parent namespace can immediately carry out the same operations within the child.

Although only Roles and RoleBindings are replicated by default, you can optionally specify that other types of namespaced object—such as Network Policies or Resource Quotas—should be included too. HNC also sets labels on namespaces that let you scope selectors used in policies and other objects to match only specific child namespaces.
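For example, propagation of extra object types is controlled through HNC’s cluster-wide configuration object. The sketch below is based on the v1alpha2 HNCConfiguration API, which may differ in your HNC release:

```yaml
# HNCConfiguration is a cluster-scoped singleton named "config".
apiVersion: hnc.x-k8s.io/v1alpha2
kind: HNCConfiguration
metadata:
  name: config
spec:
  resources:
    # Propagate Network Policies from parent namespaces to their children,
    # in addition to the Roles and RoleBindings propagated by default.
    - resource: networkpolicies
      group: networking.k8s.io
      mode: Propagate
```

The kubectl hns plugin can edit this object for you via kubectl hns config set-resource, so you don’t have to apply the manifest by hand.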

Setting Resource Quotas

Resource Quotas enforce namespace-level resource limits for your cluster. They let cluster administrators ensure a cluster’s resources are fairly distributed between its tenants.

There are several types of quota:

  • CPU and memory quotas define the amount of physical resources a namespace can use.
  • Storage quotas configure how much storage is available to a namespace.
  • Object count quotas set the maximum number of objects of a particular type (such as Pods or Deployments) that a namespace can contain.
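The different quota types can be combined in a single ResourceQuota manifest. The following is an illustrative sketch (the namespace name and limit values are examples, not recommendations):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: combined-quota
  namespace: dev-team
spec:
  hard:
    requests.cpu: "4"            # total CPU requested across the namespace
    requests.memory: "8Gi"       # total memory requested across the namespace
    requests.storage: "50Gi"     # total storage requested by PersistentVolumeClaims
    persistentvolumeclaims: "10" # object count: max PVCs in the namespace
    count/deployments.apps: "5"  # object count: max Deployments in the namespace
```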

Resource Quota support is normally enabled by default for new Kubernetes clusters. You can configure a quota for a namespace by creating a ResourceQuota object and setting its metadata.namespace field to the name of the target namespace:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-quota
  namespace: dev-team
spec:
  hard:
    requests.memory: "100M"
    limits.memory: "100M"

This simple quota prevents the total memory requests and limits of the objects in the dev-team namespace from exceeding 100 megabytes (Kubernetes interprets the M suffix as decimal megabytes; use Mi for mebibytes). ResourceQuota limits are hard constraints: once the quota is reached, attempts to create resources that would exceed it are rejected outright.

Use kubectl to add your ResourceQuota object to your cluster:

$ kubectl apply -f quota.yml
resourcequota/demo-quota created

You can inspect your quota using the kubectl get resourcequota command:

$ kubectl get resourcequota -n dev-team
NAME         AGE   REQUEST                   LIMIT
demo-quota   12s   requests.memory: 0/100M   limits.memory: 0/100M

The resources that are currently counting against the configured requests and limits are displayed. They’re zero because no objects yet exist in the namespace.

To see the quota in action, first save the following Pod manifest:

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  namespace: dev-team
spec:
  containers:
    - name: nginx
      image: nginx:latest
      resources:
        requests:
          memory: "50M"
        limits:
          memory: "50M"

Next, use kubectl to create the Pod in your cluster:

$ kubectl apply -f pod.yaml
pod/demo-pod created

Now you can repeat the kubectl get resourcequota command to see that the Pod is counting against the quota’s limits:

$ kubectl get resourcequota -n dev-team
NAME         AGE     REQUEST                     LIMIT
demo-quota   4m35s   requests.memory: 50M/100M   limits.memory: 50M/100M

Finally, save this second Pod manifest:

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod-2
  namespace: dev-team
spec:
  containers:
    - name: nginx
      image: nginx:latest
      resources:
        requests:
          memory: "150M"
        limits:
          memory: "150M"

This time you’ll see the object is rejected. Its resource requests and limits exceed the available capacity permitted by the quota:

$ kubectl apply -f pod2.yaml
Error from server (Forbidden): error when creating "pod2.yaml": pods "demo-pod-2" is forbidden: exceeded quota: demo-quota, requested: limits.memory=150M,requests.memory=150M, used: limits.memory=50M,requests.memory=50M, limited: limits.memory=100M,requests.memory=100M

Resource quotas are therefore one of the most important tools for properly isolating Kubernetes namespaces so each tenant can reliably operate its workloads.

Configuring Network Policies

Network Policies allow you to enforce network traffic isolation for your Kubernetes Pods. Policy objects target one or more Pods with rules that prevent ingress and egress traffic, unless it matches specified criteria.

Network Policies can be used with namespaces to prevent services belonging to different tenants from communicating with each other. This improves your security posture by reducing the risk of unauthorized access by unprivileged users or attackers who gain access to your cluster.

You can create a Network Policy by writing a manifest file that configures your traffic rules. The following policy targets Pods that are labeled app: demo-app within the staging-env namespace. It specifies that ingress traffic (to the Pods) and egress traffic (from the Pods) is only permitted to and from namespaces with the team: demo-team label applied.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: demo-policy
  namespace: staging-env
spec:
  podSelector:
    matchLabels:
      app: demo-app
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
      - namespaceSelector:
          matchLabels:
            team: demo-team
  egress:
    - to:
      - namespaceSelector:
          matchLabels:
            team: demo-team

Create the staging-env namespace:

$ kubectl create namespace staging-env
namespace/staging-env created

You can then use kubectl to apply the Network Policy to your cluster:

$ kubectl apply -f networkpolicy.yaml
networkpolicy.networking.k8s.io/demo-policy created

Correctly configured Network Policies are vital to enforce strong separation between your Kubernetes namespace tenants. It’s best practice to apply a “default deny” policy to all your namespaces, ensuring that Pods that aren’t selected by another policy are prevented from making unauthorized communication:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: staging-env
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress

The policy manifest above selects all Pods in the namespace and applies Ingress and Egress rules. Because no allowed routes are defined, the result is that all traffic involving the Pods is blocked.
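Note that a default-deny egress rule also blocks DNS lookups, which most workloads depend on. A common pattern is to pair it with a policy that re-allows DNS traffic to the cluster DNS service. The sketch below assumes the DNS service runs in kube-system; the kubernetes.io/metadata.name label is set automatically on namespaces in recent Kubernetes versions:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: staging-env
spec:
  podSelector: {}   # applies to every Pod in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        # DNS uses UDP by default and falls back to TCP for large responses
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```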

Using Admission Controllers

Admission controllers are a mechanism for defining criteria that objects in your Kubernetes namespaces must meet. Unlike the other techniques we’ve discussed in this guide, admission controllers are code extensions that implement custom behaviors to validate whether object create, update, and delete operations should be permitted.

Several admission controllers are enabled by default—for example, Resource Quota limits are enforced by a dedicated admission controller. Other useful built-in controllers include NamespaceAutoProvision—which automatically creates namespaces referenced by new objects, if they don’t already exist—and PodSecurity, a system that ensures your workloads are compliant with predefined security standards.

You can also install third-party admission controllers or create your own to enforce organizational policies for your namespaces. For example, you could create an admission controller that prevents certain image registries from being used, or that ensures services in a particular namespace only listen on specific ports.

Custom controllers are implemented using a webhook-based architecture. The Kubernetes API server submits object admission requests to an HTTP server that you provide; your server’s response determines whether the object is admitted into the cluster.
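Webhooks are registered with the API server using a ValidatingWebhookConfiguration object. The sketch below assumes a hypothetical policy-webhook Service serving HTTPS in a platform-system namespace; the CA bundle placeholder must be replaced with the certificate authority that signed your server’s TLS certificate:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: namespace-policy-webhook
webhooks:
  - name: namespace-policy.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail  # reject objects if the webhook is unreachable
    clientConfig:
      service:
        name: policy-webhook        # hypothetical in-cluster webhook Service
        namespace: platform-system  # hypothetical namespace
        path: /validate
      caBundle: <base64-encoded-CA-certificate>
    rules:
      # Send Pod create and update operations to the webhook for validation
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods"]
        scope: Namespaced
```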

Conclusion: Master Kubernetes Multi-Tenancy with Advanced Namespace Management

Understanding advanced Kubernetes namespace configuration allows you to achieve effective multi-tenancy for your clusters. Using features such as hierarchical namespaces, resource quotas, and network policies lets you transform namespaces into safely isolated workspaces for your teams, apps, and individual developers. This ensures proper separation, consistent configuration, and stable security for your resources.

Managing namespaces manually can be cumbersome so you should seek solutions that enable self-service automation for your cluster operations. Check out Rafay to automate your cloud and Kubernetes environments using simple workflows that put you in control. You can start for free to improve your developer experience with resilient Namespace-as-a-Service access.
