We continue to capture concepts that enterprise customers struggle with in our “All Things Kubernetes” blog series, which we kicked off with a discussion on log aggregation. In each of these blogs, we also focus on highlighting best practices to overcome these challenges.
In this blog, we will highlight the challenges of organizing, managing and enabling access to multiple Kubernetes clusters in an organization spanning business units, teams or operational environments.
Kubernetes natively provides the means to logically separate a physical cluster into multiple virtual clusters via namespaces.
Namespaces are intended for situations where multiple users need to share the same Kubernetes cluster. They can be configured so that each user or application operates within its own namespace and is isolated from the cluster's other users.
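As a minimal sketch of this model (the namespace and image names here are illustrative, not from any specific customer environment), a team gets its own namespace and deploys workloads scoped to it:

```yaml
# Illustrative only: one namespace per team, with workloads scoped to it.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a        # hypothetical team name
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: team-a   # the workload lives inside the team's namespace
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```

Resources created this way are namespace-scoped: a plain `kubectl get deployments` in another namespace will not list them.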
Challenges and Solutions
Although namespaces are extremely useful, they are not viable for several common scenarios. Let us review the challenges users face in each of them.
Challenge #1: True Partitioning
Although namespaces provide the ability to logically partition a cluster, they cannot truly enforce that partitioning. Workloads operating in the same Kubernetes cluster still share the control plane, the nodes and cluster-scoped resources, regardless of the namespace they operate in. The only practical solution is to use dedicated Kubernetes clusters to guarantee true separation across operational boundaries.
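Namespace-level controls can narrow the blast radius but do not remove the shared-cluster boundary. For example, a default-deny NetworkPolicy (a hedged sketch with a hypothetical namespace name) blocks traffic into a namespace's pods, yet those pods still share nodes, the kernel and the API server with every other tenant:

```yaml
# Partial mitigation only: deny all ingress to pods in team-a by default.
# This does NOT create true partitioning -- workloads still share nodes,
# the API server, and cluster-scoped resources with other namespaces.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a   # hypothetical namespace
spec:
  podSelector: {}     # selects every pod in the namespace
  policyTypes:
  - Ingress           # no ingress rules listed, so all ingress is denied
```

Note that NetworkPolicy also requires a CNI plugin that enforces it; on a cluster without one, this manifest is accepted but has no effect.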
Some of Rafay’s customers, especially service providers, have a business requirement to operate large fleets (100s or 1000s) of geographically distributed Kubernetes clusters. These clusters are deployed to retail stores, factories and hospitals providing a managed application operations platform for application teams at these organizations.
Although all the clusters may belong to the same organization, they may be dedicated to different application teams or business units. Given the scale and geographical distribution of the fleet, there is also a practical requirement to logically organize the clusters so that they can be managed by regional operational teams that can provide timely, local support.
Some application teams are required to deploy their containerized applications to multiple geographies for one or more of the following reasons. Again, namespaces are not a practical solution here; these teams must deploy and operate clusters in each geography.
- Regulatory and compliance requirements such as GDPR
- Performance/latency requirements
- Availability requirements
It is common for organizations to ensure complete isolation between their pre-production and production environments.
Rafay enables customers to organize their fleet of Kubernetes clusters into projects. In the example below, the customer has three active projects that are mapped to operational environments: Dev, Staging and Production. Each project has dedicated clusters associated with it and only identified personnel are authorized to access resources in each project.
In addition, to reduce operational complexity, Rafay also provides a unified control plane to manage clusters, workloads, user access, policy and security across all projects.
Some of our customers are required to operate their applications in multiple geographical regions. For example, one of them operates their applications in four Amazon EKS clusters in Virginia, Frankfurt, Sydney and Singapore. To comply with regulatory requirements, projects in Rafay can be configured with role based access control (RBAC), ensuring that only authorized personnel from the geographical region are allowed to view/access the clusters and associated infrastructure.
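For readers who want a concrete picture of region-scoped access, the vanilla-Kubernetes analogue of this pattern (not Rafay's own configuration; the group name is a hypothetical identity-provider group) is a read-only ClusterRole bound only to the regional operations group on that region's cluster:

```yaml
# Illustrative sketch: on the Frankfurt cluster, grant read-only access
# to the EU operations group and no one else.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: regional-viewer
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "services", "deployments"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eu-ops-viewers
subjects:
- kind: Group
  name: eu-ops        # hypothetical IdP group for the EU region
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: regional-viewer
  apiGroup: rbac.authorization.k8s.io
```

Managing such bindings by hand on every cluster in every region is exactly the kind of repetition that project-level RBAC is meant to centralize.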
Rafay’s Service Provider customers use Projects to compartmentalize operational visibility and access to their fleet of clusters. They organize their fleet of Kubernetes clusters into multiple projects. In the example below, the fleet of clusters is logically organized into three completely isolated projects (Americas, EMEA and APJ). In addition, clusters in projects can be configured with RBAC so that they are managed and supported by local operational teams, in line with regulatory and compliance requirements.
Challenge #2: Accurate Chargebacks
Many “shared services” infrastructure Ops teams are required to implement fine grained chargebacks to dependent application teams. It is operationally cumbersome and challenging to use namespaces as the means to track utilization and implement chargebacks.
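To illustrate why, the namespace-based approach typically means maintaining a ResourceQuota per team namespace on every cluster and periodically scraping usage against it for billing (a hedged sketch; the names and limits are hypothetical):

```yaml
# One common namespace-based chargeback attempt: a quota per team
# namespace, with usage scraped periodically for billing. This must be
# repeated and reconciled for every team on every cluster -- the
# operational burden described above.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a   # hypothetical team namespace
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
```

With dedicated clusters per team, the cluster's entire bill maps directly to one cost center, and this per-namespace bookkeeping disappears.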
Rafay’s enterprise customers create projects for every business unit or application team. Every project can have one or many dedicated clusters. This provides a simple and streamlined process for organizational chargebacks without any incremental operational burden.
Challenge #3: Managing Upgrades
Some containerized applications have dependencies on critical software add-ons that are supported only on certain versions of Kubernetes. As a result, it may not be possible to upgrade these Kubernetes clusters before an updated version of the add-on is available.
For example, some of Rafay’s customers use Kubeflow (a critical software add-on for machine learning). However, at the time this blog was published, Kubeflow was not supported on Kubernetes v1.16, which was two versions behind the then-latest release.
Since namespaces are not a practical way to manage upgrades, these customers create “Machine Learning” projects with dedicated clusters pinned to specific versions of Kubernetes.
Interested in learning more about Projects? Watch a video showcasing how customers can use Projects in Rafay to address the challenges described above.
Sign up for a free Rafay account if you want to try out the Rafay platform, or Contact us if you would like to learn more about where and when to use namespaces (or not).
Rafay Systems delivers a turnkey SaaS platform that automates the ongoing operations and lifecycle management for containerized applications. The platform is designed for IT and DevOps teams to instantly build and operate Kubernetes clusters, while maintaining complete governance and control over the containerized applications being deployed on clusters under management.