As your Kubernetes environment grows into a multi-cluster, multi-cloud fleet, cluster and workload deployment challenges increase exponentially. It becomes critical to streamline, automate, and standardize operations to avoid having to revisit decisions or perform the same, error-prone manual tasks over and over again.
Using the right deployment tools to:
- Deploy cluster infrastructure
- Install and configure Kubernetes and associated add-on software
- Deploy and update application workloads
will reduce manual effort and the need for specific expertise, while delivering more consistent results across environments and greater stability. The right tools are essential for creating a shared services platform in which Dev, QA, Ops, and other teams are able to consume and release infrastructure, cluster resources, and apps quickly and easily.
This blog explores the challenges at the infrastructure, Kubernetes, and application workload levels along with guidelines for choosing tools that will streamline your operations.
Configuring Infrastructure for Kubernetes Deployments
The expertise required to build a Kubernetes cluster is in short supply in many organizations. You may have the skills to build clusters in your data center or in Amazon Web Services (AWS), for example, but what happens when you expand your K8s operations into GCP or Microsoft Azure? Your Kubernetes deployment tools should enable you to easily deploy infrastructure and apps anywhere – from the data center to public clouds to the edge – with standardized configurations that meet all your requirements.
It is especially important to choose the right strategy for keeping your infrastructure reliable during application deployments and updates. A variety of “cluster template” approaches for Kubernetes help solve these infrastructure challenges. A template defines what your cluster infrastructure looks like and automatically provisions that infrastructure. Although some solutions rely on proprietary formats, cluster templates are often based on open-source technology such as Ansible playbooks, Helm charts, or Terraform modules.
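As a concrete sketch, a Terraform-based cluster template might look like the following. The module source is the community terraform-aws-modules EKS module; the variable names and sizing values are illustrative, not tied to any particular product:

```hcl
# Illustrative cluster template: a reusable EKS cluster definition.
# Inputs (cluster_name, vpc_id, subnet_ids) are supplied per environment.
module "cluster" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name    = var.cluster_name
  cluster_version = "1.29"
  vpc_id          = var.vpc_id
  subnet_ids      = var.subnet_ids

  # Standardized node sizing enforced for every cluster built from
  # this template (values here are examples).
  eks_managed_node_groups = {
    default = {
      instance_types = ["m5.large"]
      min_size       = 2
      max_size       = 5
      desired_size   = 3
    }
  }
}
```

Checking a template like this into version control lets every team provision identically configured clusters, rather than hand-building each one.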
If you’re looking at Kubernetes management solutions that support cluster templates, there are several guidelines to keep in mind. Make sure the solution:
- Works in the environments you plan to operate in
- Enables you to enforce specific guidelines and policies
- Enables templates to be easily created by your Platform team
- Enables templates to be easily consumed by your Dev, QA, and Ops users
- Detects and notifies you of configuration drift across clusters, wherever they run
- Is compatible with any infrastructure automation tools you already use
Installing and Configuring Kubernetes
Kubernetes has a reputation for being complex to deploy and operate. As your K8s environment grows, automation can help you simplify and standardize K8s deployments and maintenance so that users can configure new clusters on demand—without ignoring important policies and other guidelines. For instance, you may want all Kubernetes clusters to include a specific service mesh, ingress controller, or a monitoring tool such as Prometheus. With the right automation, add-ons like these can be consistently deployed.
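One common open-source pattern for this kind of add-on automation is to declare each add-on as a Flux HelmRelease resource, so every cluster reconciles the same stack automatically. The chart version and namespace below are illustrative:

```yaml
# Illustrative: declaratively installing a Prometheus monitoring stack
# as a cluster add-on via a Flux HelmRelease.
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: kube-prometheus-stack
  namespace: monitoring
spec:
  interval: 10m
  chart:
    spec:
      chart: kube-prometheus-stack
      version: "58.x"          # example version pin
      sourceRef:
        kind: HelmRepository
        name: prometheus-community
        namespace: flux-system
  values:
    grafana:
      enabled: true
```

Because the add-on is declared in Git rather than installed by hand, every new cluster that syncs this configuration gets the same monitoring stack with no manual steps.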
There’s no lack of tools for deploying and configuring Kubernetes. Every packaged Kubernetes distribution includes some form of installer. The same goes for popular managed Kubernetes services from AWS, Microsoft Azure, Google Cloud, and others.
But you probably already see the problem—assuming you haven’t experienced it first-hand. Having different tools with different capabilities and interfaces for each environment quickly becomes unsustainable from an operational standpoint. Many organizations end up with siloed teams for each infrastructure or environment.
A variety of management services and open-source tools are emerging to address these problems. Well-known open-source tools include kOps and Kubespray, both developed under the auspices of Kubernetes special interest groups (SIGs). There are also a number of SaaS and hosted services. (See the blog, How a Hosted Software Delivery Model Differs from SaaS for Kubernetes Management and Operations.)
If you’re evaluating tools or services to address Kubernetes installation and lifecycle management needs, there are several guidelines to keep in mind. Make sure the solution:
- Works in the environments you use (clouds, virtual, physical)
- Enables you to specify uniform security policies
- Lets you automatically install Kubernetes add-ons
- Provides flexibility to accommodate unique requirements on a per-environment, per-location, or per-cluster basis
- Offers compatibility with any automation tools you already use
Deploying Kubernetes Applications
The whole purpose of building clusters and deploying Kubernetes is to allow application workloads to be developed, tested, and deployed into production efficiently. However, Kubernetes only provides the foundation.
A lot of additional time and effort is required to create and maintain continuous integration/continuous delivery (CI/CD) pipelines to support software creation and deployment. CI tools such as Jenkins, CircleCI, GitLab, and Azure DevOps, along with GitOps-style CD tools such as Argo CD and Flux, are commonly used in Kubernetes environments. Your organization may be using several of these tools already.
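As a small illustration of the GitOps model, an Argo CD Application resource points a cluster at a Git repository and keeps the deployed state in sync with it. The repository URL, paths, and names below are placeholders:

```yaml
# Illustrative Argo CD Application: Git is the source of truth,
# and the controller continuously reconciles the cluster toward it.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app-config  # placeholder repo
    targetRevision: main
    path: overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from Git
      selfHeal: true   # revert manual changes on the cluster
```

With `selfHeal` enabled, any out-of-band change on the cluster is reverted to what Git declares, which is the core of the GitOps approach.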
(To learn more about GitOps, read the blog GitOps Principles and Workflows Every Team Should Know.)
Each application workload typically needs to be deployed for dev, staging, and production—often with specific customizations for each environment. Even with the best tools, that requires separate pipelines—and unique application configuration files for each pipeline—adding complexity and manual effort. While it may be possible to write a script to generate custom configuration for each case, that’s one more unique solution to be managed and maintained. For production deployment, you may also need to deploy on dozens of clusters in different environments using blue-green, canary, or some other deployment strategy.
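One widely used way to handle per-environment customization without duplicating full manifests is a Kustomize overlay: a shared base plus a small patch per environment. The directory layout, patch file, and image name below are illustrative:

```yaml
# overlays/staging/kustomization.yaml — an illustrative overlay that
# reuses the shared base and applies only staging-specific changes.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base            # common manifests shared by all environments
patches:
  - path: replica-count.yaml   # e.g. fewer replicas in staging
images:
  - name: example/my-app
    newTag: "1.4.2-rc1"        # staging gets the release candidate
```

Each environment's overlay stays a few lines long, so the bulk of the configuration is defined once and maintained in one place.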
Kubernetes Cluster and Application Deployment at Rafay
Rafay’s Kubernetes Operations Platform (KOP) includes capabilities to streamline infrastructure, Kubernetes, and application deployments, addressing all the challenges discussed in this blog.
At Rafay we’ve codified Kubernetes best practices in order to streamline management of large K8s fleets. KOP simplifies cluster and workload deployments in data centers, public clouds, and at the edge with easy-to-use tools:
- Rafay Cluster Templates let your Platform team quickly define infrastructure specifications that users can consume to create clusters for testing or other purposes. Many routine cluster operations become self-service, without users needing detailed knowledge of the target environment, and new clusters automatically adhere to the rules and restrictions you specify.
- Rafay Cluster Blueprints allow you to define and standardize key elements of a Kubernetes configuration—including add-ons and security policies—to ensure consistency and repeatability. Blueprints can also notify you of configuration changes in production and optionally block them.
- Rafay Workloads, Workload Templates, and GitOps pipelines take the complexity out of application deployments. Pipelines support dev, staging, and production deployment needs, using workload templates to avoid the need for custom application manifests for each environment. You can quickly implement canary, blue-green, and other deployment models.
Keep an eye out for an upcoming Rafay white paper that will explore all of these capabilities in more detail.
Ready to find out why so many enterprises and platform teams have partnered with Rafay for Kubernetes fleet management? Sign up for a free trial.