This blog was originally published on Medium.
This is part of a series of blogs on “All Things Kubernetes”. Each blog post focuses on common issues that people encounter with Kubernetes and how they can overcome them.
In Part 1, I described Kubernetes’ built-in capabilities for managing secrets and how they should be configured and used securely. Part 2 focuses on learnings and insights from many of our early customers. I will specifically describe the significant operational and security challenges they faced with secrets and how we collaborated to resolve them.
To put it simply, they wanted a robust, multi-cluster secrets pipeline so they could deploy applications across multiple Kubernetes clusters spanning multiple providers. This, in turn, requires a high level of automation.
The Lifecycle of Kubernetes Secrets
It is widely accepted that an application’s secrets need to be created and managed independently of the pods that use them. This is critical, because if someone inadvertently gets access to your pod spec, they should not have access to your secrets.
As we saw in Part 1, sensitive data can be stored securely and separately as part of the Kubernetes “Secrets” resource. We also saw why it is critical to provide a secure, hardened Kubernetes environment so that this resource is not easily compromised.
The lifecycle of an application secret can be broken down into distinct phases.
- Configured: Developers and application operators need an intuitive and secure way to configure and store secrets.
- Provisioned: The required secrets need to be created in the Kubernetes cluster before the application is deployed. This is critical because pods that require secrets as startup configuration cannot become operational without them.
- In Use: The application’s pods will continue to use the secrets for as long as they are operational on the cluster. If the pods restart or are rescheduled on another node, they will need access to the secrets again as part of the startup process.
- Deprovisioned: Once the application is deprovisioned from a cluster, its secrets need to be deleted as well. Doing this manually on a single cluster is manageable, but it becomes impractical and operationally cumbersome for multi-cluster application deployments.
Security and Operational Challenges
Our customers described several operational challenges with secrets on Kubernetes that had to be overcome. The most important ones are described below.
1. Secure Configuration
Users who are familiar with “kubectl” know that it is an extremely versatile tool that can inject secrets from a local file, from literal values, or from a manifest file of kind: Secret. Unfortunately, these approaches are neither secure, practical, nor scalable for production-grade deployments.
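For reference, the manifest-based approach looks like the following (the name and values are illustrative). Note that the data fields are only base64-encoded, not encrypted, which is one reason checking such files into source control is unsafe:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials   # illustrative name
type: Opaque
data:
  # base64-encoded values, e.g. produced by: echo -n 'admin' | base64
  username: YWRtaW4=
  password: czNjcjN0
```

The equivalent imperative commands are `kubectl create secret generic db-credentials --from-literal=...` or `--from-file=...`, both of which leave the plaintext in shell history or on local disk.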
The vast majority of enterprises we work with also told us they found it impractical to assume that every developer or application operator will be familiar with Kubernetes or YAML, or be given access to the API server via kubectl.
Rafay provides an extremely intuitive, simple-to-use, and secure approach to configuring and managing secrets for containerized applications. When users configure secrets, they can specify whether the secrets will be made available to the pods as “Environment Variables” or “In Memory Volumes”. All the required Kubernetes resources are generated automatically by the Rafay Platform.
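In plain Kubernetes terms, those two delivery modes correspond to the following pod spec fragments (all names and the image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod                # illustrative
spec:
  containers:
    - name: app
      image: example/app:1.0   # illustrative image
      # Option 1: expose a secret key as an environment variable
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
      # Option 2: mount the secret as files in an in-memory volume
      volumeMounts:
        - name: creds
          mountPath: /etc/creds
          readOnly: true
  volumes:
    - name: creds
      secret:
        secretName: db-credentials
```

Volume-mounted secrets are backed by tmpfs on the node, so they never touch disk, and changes to the Secret are eventually reflected in the mounted files; environment variables, by contrast, are fixed at container start.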
2. Secure Storage
The next step is to securely store and distribute these secrets.
To reduce the risk of unintentional compromise, our customers wanted the encrypted secrets to be decrypted “Just In Time” before the application was deployed to the cluster. They specifically wanted to ensure that their developers only stored application configuration in their Git repository and not application secrets.
Behind the scenes, Rafay uses a secure, hardened, FIPS 140–2 certified hardware security module (HSM) to encrypt secrets before storing them in a database. Your encryption keys are generated and used ONLY inside the HSM. The application secrets are delivered as “encrypted blobs” to the remote clusters and are decrypted only when required. This dramatically reduces the attack surface, allowing our customers to operate mission-critical, cloud native applications at global scale.
3. Automated Provisioning and Deprovisioning
Many of our early customers had applications that they had to deploy across multiple regions and across cloud providers. A few of them had a business requirement to deploy containerized applications to thousands of remote clusters in retail locations.
It was both impractical and insecure for them to manually provision secrets to that many locations. They needed a secrets pipeline that would automatically provision secrets to the managed clusters “just in time” and deprovision them when no longer needed.
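A minimal sketch of what such a pipeline has to do, expressed with plain kubectl contexts (the context and file names are hypothetical, and a real fleet spans thousands of clusters):

```shell
CLUSTERS="store-0001 store-0002 store-0003"   # hypothetical kubeconfig contexts

# Provision: push the secret just in time, then deploy the application
for ctx in $CLUSTERS; do
  kubectl --context "$ctx" apply -f db-credentials-secret.yaml
  kubectl --context "$ctx" apply -f app-deployment.yaml
done

# Deprovision: remove the application, then its secrets, everywhere
for ctx in $CLUSTERS; do
  kubectl --context "$ctx" delete -f app-deployment.yaml
  kubectl --context "$ctx" delete -f db-credentials-secret.yaml
done
```

Even this toy loop hides hard problems: partial failures, distributing API-server credentials for every cluster, and keeping the plaintext secrets out of Git, which is exactly what a managed secrets pipeline automates.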
Watch a video of how easy it is for a Rafay user to quickly configure an application’s secrets. Notice that the secrets are provisioned “Just In Time” to the selected clusters and made available to the pods, and that they are automatically deprovisioned once the application is unpublished from the cluster.
This level of automation ensures that customers can comply with required application governance requirements and operate their applications in a secure manner.
Managing application secrets is not a trivial task. Doing this correctly is critical to ensure the security of your application deployments. The level of operational complexity is amplified several fold when your applications are deployed to multiple Kubernetes clusters.
Application operators should use a secure and highly automated secrets pipeline like what the Rafay Platform provides. Interested in giving the Rafay Platform a test drive? Sign up for a free account and check it out.