Unlocking Four Requirements for Enterprise-Grade Kubernetes

With more organizations enjoying the benefits of Kubernetes, it is all the more crucial to integrate enterprise-grade Kubernetes across the business pipeline. In this article, Kyle Hunter, head of product marketing at Rafay Systems, identifies some of the critical requirements for enterprise-grade Kubernetes and the best practices for meeting them.

For enterprises of all shapes and sizes, Kubernetes has become a go-to choice for shipping software and improving delivery time, visibility, and control of CI/CD workflows. But integrating enterprise-grade Kubernetes management practices that cover your entire pipeline – from code to cloud – can be challenging. Meeting the critical requirements demands a set of best practices, and those requirements span four key areas: source code, CI/CD integration, Kubernetes cluster lifecycle management, and workload administration. Let’s get started!

Source Code

When it comes to source code, it all starts with using Git-based workflows for automated software delivery and declarative infrastructure, with change tracking that supports rollbacks when there are failures. It is a best practice to keep secrets encrypted and outside the container image. Implementing internal training and awareness programs is a relatively simple way to ensure this happens; by making it a routine part of the development process, you avoid exposing secrets during a CI/CD deployment. Similarly, it is essential to ensure that application secrets are not embedded in your Helm charts or Kubernetes YAML files.
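
To make that concrete, here is a minimal sketch of the pattern, assuming a Secret named db-credentials is created and rotated outside the repository (for example, by your secrets manager). All names and the image are illustrative. The deployment only references the secret at runtime, so nothing sensitive lives in the chart or manifest itself.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: payments-api                 # illustrative name
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: payments-api
      template:
        metadata:
          labels:
            app: payments-api
        spec:
          containers:
            - name: payments-api
              image: registry.example.com/payments-api:1.4.2   # hypothetical image
              env:
                - name: DB_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: db-credentials   # Secret managed outside the repo/chart
                      key: password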

CI/CD Pipelines

Moving on to your CI/CD pipeline, one critical item is establishing a sound security posture, especially before anything reaches production. It is a best practice to test and scan container images for vulnerabilities before they are uploaded to your container registry (or repo). Many tools on the market help with this and can be embedded directly into your CI/CD pipeline, ensuring this critical action is taken every time as an automated part of your development process. Embedding scanning in the pipeline also helps meet another critical requirement: a review and approval process for third-party container images. The same tools you use for your own container images can scan these third-party images before they run in your cluster. For your container base OS, it’s a good idea to use a lightweight base operating system that includes only the shell and debugging tools you require. Which OS to use depends on your needs and must be chosen case by case.
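
As an illustration, an image-scan gate in a CI pipeline might look something like the sketch below. It assumes a GitHub Actions-style workflow with an open-source scanner such as Trivy already available on the runner; your CI system, scanner, and registry will likely differ, and the image and registry names are hypothetical.

    name: build-and-scan                 # illustrative workflow name
    on: [push]
    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: Build image
            run: docker build -t registry.example.com/myapp:${{ github.sha }} .
          - name: Scan image before it reaches the registry
            # Assumes the Trivy CLI is installed on the runner.
            # --exit-code 1 fails the job on HIGH/CRITICAL findings,
            # so vulnerable images are never pushed.
            run: trivy image --exit-code 1 --severity HIGH,CRITICAL registry.example.com/myapp:${{ github.sha }}
          - name: Push to registry
            # Registry login/credentials omitted for brevity.
            run: docker push registry.example.com/myapp:${{ github.sha }}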

Kubernetes Cluster Lifecycle Management

At the Kubernetes cluster level, there are many critical requirements for enterprise-level K8s management – so many that an exhaustive list would turn this into an eBook. With that in mind, we are going to focus on the items our customers find most critical – but keep in mind this is not a complete list.

First, look at your cluster configuration as it relates to high availability (HA). To achieve enterprise-grade Kubernetes, ensure that your K8s masters are architected and deployed in a multi-master HA configuration to avoid a single point of failure. Fortunately, many top managed K8s providers (like Amazon EKS) make this simple by deploying the K8s masters in an HA configuration spread across availability zones (AZs). For upstream K8s, some solutions deploy it in a multi-master HA configuration by default.
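
For illustration, here is a rough sketch of what spreading an EKS cluster across three AZs might look like with an eksctl-style cluster definition; AWS runs the managed control plane in an HA configuration across zones, so your main decision is which zones the cluster and its node groups span. The cluster name, region, and sizing are illustrative.

    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    metadata:
      name: prod-cluster               # illustrative
      region: us-east-1
    # Spread the cluster across three AZs; EKS runs the managed control
    # plane in an HA configuration across zones on your behalf.
    availabilityZones: ["us-east-1a", "us-east-1b", "us-east-1c"]
    nodeGroups:
      - name: workers
        instanceType: m5.large
        desiredCapacity: 3             # roughly one worker per AZ to start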

When it comes to K8s versioning and upgrades, creating an upgrade strategy that fits your availability and reliability needs is critical to minimizing disruption to your workloads. Kubernetes and its ecosystem ship frequent updates for security fixes, bug fixes, and new features. It is a good idea to regularly upgrade your clusters to a version that meets your quality and stability requirements. Doing so, however, requires a reliable, repeatable, and efficient upgrade process and tooling. Built-in monitoring tools help ensure your administrators have complete visibility and insight into your Kubernetes environments and can promote upgrades in a controlled and predictable manner – another key to K8s success.
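
One small piece of tooling worth putting in place before any upgrade is a PodDisruptionBudget, which caps how many replicas a voluntary disruption (such as draining a node during an upgrade) can take down at once. A minimal sketch, with illustrative names:

    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: payments-api-pdb           # illustrative
    spec:
      minAvailable: 2                  # keep at least two replicas up while nodes drain
      selector:
        matchLabels:
          app: payments-api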

Workload Administration

Finally, let’s address some important topics around workload administration. A workload is an application running in one or more K8s pods. It’s a best practice to develop a labelling scheme that simplifies management and keeps things consistent, using parameters such as location (a physical location such as country or city, or a cloud provider), environment (e.g., Dev, Test, Prod), application (e.g., Finance, CRM, HR), and role (e.g., Web, DB). Consistent labelling across all K8s environments makes policies more flexible and effective.
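
As a sketch, that scheme might translate into pod metadata like the following (all values are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: crm-web-0                  # illustrative
      labels:
        location: us-east              # physical location or cloud provider/region
        environment: prod              # Dev, Test, Prod
        application: crm               # Finance, CRM, HR, ...
        role: web                      # Web, DB, ...
    spec:
      containers:
        - name: web
          image: registry.example.com/crm-web:2.1.0   # hypothetical image

Because every environment carries the same label keys, a single policy or selector can target, say, all Prod web workloads regardless of which cluster or cloud they run in.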

To keep clusters healthy, it is also essential to specify resource requests and limits (e.g., CPU and memory) and to apply resource quotas per namespace. Resource quotas, for example, help guarantee compute resources for each team while helping to control costs. With best-in-class monitoring tools, you can spot misconfigured pods and address them as needed.
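
For example, per-container requests and limits plus a namespace-level quota might look like the following sketch (names and numbers are illustrative and should be tuned to your workloads):

    # Per-container requests and limits
    apiVersion: v1
    kind: Pod
    metadata:
      name: crm-web-0
      namespace: team-a
    spec:
      containers:
        - name: web
          image: registry.example.com/crm-web:2.1.0   # hypothetical image
          resources:
            requests:                  # what the scheduler reserves
              cpu: 250m
              memory: 256Mi
            limits:                    # hard ceiling for the container
              cpu: 500m
              memory: 512Mi
    ---
    # Namespace-level quota that caps the team's aggregate usage
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: team-a-quota
      namespace: team-a
    spec:
      hard:
        requests.cpu: "10"
        requests.memory: 20Gi
        limits.cpu: "20"
        limits.memory: 40Gi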

Another critical topic is access control. Integrating with Kubernetes Role-Based Access Control (RBAC) and defining cluster-wide permissions is a best practice. Securing access to the Kubernetes API is also critical as the first line of defense, controlling and limiting who can access clusters and what actions they are allowed to perform. Identifying “who” needs “what” access to “which” resource becomes challenging, especially at scale, leading many to look for a unified way of managing access across clusters and clouds.
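
As a sketch, a read-only role for an audit team, bound to a group mapped from your identity provider, might look like this (the group name and resource lists are illustrative):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: read-only
    rules:
      - apiGroups: ["", "apps"]                        # core and apps API groups
        resources: ["pods", "services", "deployments"]
        verbs: ["get", "list", "watch"]                # view only, never mutate
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: auditors-read-only
    subjects:
      - kind: Group
        name: auditors                                 # group from your identity provider
        apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: ClusterRole
      name: read-only
      apiGroup: rbac.authorization.k8s.io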

Modernization Simplified

Kubernetes offers the promise of modernization by simplifying the deployment and management of container-based workloads on-premises, in the public cloud, and at the edge. Enterprises have widely deployed containers but are still working to realize the agility and, ultimately, the business value, primarily because of operational challenges: it is hard to operate and manage Kubernetes clusters at scale. Organizations should consider these best practices along with a centralized SaaS platform that scales easily and fully supports the K8s technology ecosystem, allowing for an easier path to adoption without having to do it all yourself.

This article was originally published in Spiceworks.
