The Kubernetes Current Blog

All Things Kubernetes: Application Secrets – Part 1

This is part of a series of blogs on “All Things Kubernetes”. Each blog post focuses on common issues that people encounter with Kubernetes and how they can overcome them. This blog was originally posted on Medium on September 14, 2019.



First, I will describe Kubernetes’ built-in capabilities for managing secrets; how it can be configured and enhanced to operate in a secure manner. In Part 2, I will describe and demonstrate how Kubernetes’ Secrets can be leveraged and enhanced to support highly automated, multi-cluster application deployments using the Rafay Platform.



Applications require access to sensitive information such as passwords, private keys and tokens to operate.

For example, a container may need to use a database service to access and persist data. The database service will require authentication before allowing access. Similarly, a 3rd party service will expect the container to authenticate before allowing access.

It is a very poor security practice to embed these secrets into the container image or into the Pod specification. Any user or system that handles the images or the Pod specification will have access to the secrets. Rotation or revocation of secrets becomes impossible to implement.


Enter Kubernetes Secrets

Kubernetes provides a built-in resource called “Secret” that is intended for storing small amounts of sensitive data. Storing sensitive data in a Secret object allows for more control over how it is used and reduces the risk of accidental exposure.
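For illustration, a minimal Secret manifest might look like the following; the name, keys and values here are hypothetical placeholders:

```yaml
# Hypothetical Secret holding database credentials.
# stringData accepts plain text; Kubernetes base64-encodes it on write.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  username: app-user        # placeholder value
  password: s3cr3t-example  # placeholder value
```

Keep in mind that base64 encoding is not encryption; anyone who can read the object can decode the values.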

Once a Secret object has been created, Kubernetes makes it really easy for pods to access the secrets without requiring any special code or customization. A Kubernetes secret can be consumed by a pod in two primary ways:


Approach 1: Secrets as Environment Variables

Some containers will expect to receive this information as environment variables. For example, a database access username and password.
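A sketch of this approach, assuming a Secret named db-credentials with username and password keys exists (all names and the image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: db-client
spec:
  containers:
  - name: app
    image: example.com/app:latest   # placeholder image
    env:
    - name: DB_USERNAME             # visible to the process as $DB_USERNAME
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: username
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
```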


Approach 2: Files in a Volume

Some containers will expect to receive this information as files in an in-memory volume mounted into the pod’s containers. For example, a certificate and its associated private key. Kubernetes creates one file per key in the Secret, and the application reads these files at startup.
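A sketch of the volume approach, assuming a Secret named tls-credentials with tls.crt and tls.key keys (names, paths and the image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tls-client
spec:
  containers:
  - name: app
    image: example.com/app:latest   # placeholder image
    volumeMounts:
    - name: certs
      mountPath: /etc/tls           # files appear as /etc/tls/tls.crt, /etc/tls/tls.key
      readOnly: true
  volumes:
  - name: certs
    secret:
      secretName: tls-credentials
```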

Note that from a security PoV, it is extremely risky to define secrets in plain YAML and store these files in a Git repository. Anyone with access to the repository can read the secret values, resulting in unnecessary exposure and potential compromise.


Built-in Security Controls for Kubernetes Secrets


Kubernetes automatically implements several built-in security controls that provide a baseline level of security for Secrets. Some of them are listed below.

  1. One pod does not have access to the secrets of another pod.
  2. A secret is sent to a node ONLY if a pod is scheduled on that node and therefore requires it.
  3. The secret is stored in “tmpfs” and is therefore not written to persistent storage. This is a RAM backed file system that will not survive a node reboot.
  4. Secrets are currently limited to 1MiB in size. This is a defense mechanism to ensure that the API server and kubelet memory resources are protected from abuse.
  5. Once the pod using the secret is deleted, the local copy of the secret is deleted as well.
  6. For Pods containing multiple containers, each container has to explicitly request the secret volume for it to be visible within the container.

Although these controls are important, they are by no means sufficient for production grade deployments.


Critical Security Controls via System Hardening

A wide spectrum of Kubernetes deployment models are available to users. Security via effective system hardening needs to be a critical aspect of any Kubernetes deployment. System hardening enables users to configure and optimize the platform and its native capabilities to reduce the surface area of attack.

When it comes to Kubernetes Secrets, it is critical to focus on the system where data is stored and accessed. Kubernetes Secrets are stored in the cluster’s etcd database. Etcd is a highly available, distributed and consistent key-value store used by Kubernetes to store cluster state. The use of etcd ensures high availability, but etcd must also be substantially hardened because it is a prime target for attackers.

Secure Storage

Harden the system by ensuring the following:

  • The entire disk underlying etcd is encrypted. This also makes it operationally easier to dispose of the disks when they are no longer useful.
  • Secrets stored in etcd should always be strongly encrypted when written.
  • Strong key management for the symmetric encryption keys.
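Encryption of Secrets at rest can be enabled by passing an EncryptionConfiguration file to the API server via its --encryption-provider-config flag. A sketch, with a placeholder key:

```yaml
# Encrypt Secrets with AES-CBC before they are written to etcd.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <base64-encoded 32-byte key>  # placeholder; generate with a CSPRNG
  - identity: {}  # fallback so previously unencrypted data can still be read
```

Note that keeping the key in a file on the master node only shifts the problem; for strong key management, an external KMS provider is preferable.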

Secure Access to Etcd

Harden the system by ensuring that etcd is configured to require mutually authenticated TLS for access by clients.
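In an etcd configuration file (passed via --config-file), client-facing mutual TLS can be sketched as follows; the certificate paths are placeholders:

```yaml
# etcd configuration fragment: require TLS and client certificates
client-transport-security:
  cert-file: /etc/etcd/pki/server.crt
  key-file: /etc/etcd/pki/server.key
  client-cert-auth: true                      # reject clients without a valid cert
  trusted-ca-file: /etc/etcd/pki/etcd-ca.crt  # CA that signed the client certs
```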



In multi-master Kubernetes cluster deployments, the etcd servers listen on network-reachable interfaces so that members can communicate with each other. Therefore, limiting who and what can access etcd is a critical security control.

Harden the system by ensuring the following:

  • Access to etcd should be restricted to specific clients only, i.e. only the API server is allowed to connect to etcd.
  • Require the use of strong certificate-based mutual authentication for access.
  • Use a different Certificate Authority (CA) for protecting access to etcd than the one used for the rest of Kubernetes. This denies non-API-server Kubernetes components access to the etcd cluster.
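On the API server side, the connection to etcd is configured with the --etcd-* flags; a sketch from a static Pod manifest, with placeholder paths:

```yaml
# kube-apiserver static Pod manifest fragment (flags only)
command:
- kube-apiserver
- --etcd-servers=https://127.0.0.1:2379
- --etcd-cafile=/etc/kubernetes/pki/etcd-ca.crt  # etcd's own CA, distinct from the cluster CA
- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
```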


Etcd Synchronization

In an HA cluster configuration, harden the system by ensuring that mutually authenticated TLS is required for all etcd “peer-to-peer” communication.
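In the same etcd configuration file, peer-to-peer TLS can be sketched as follows (paths are placeholders):

```yaml
# etcd configuration fragment: mutual TLS between cluster members
peer-transport-security:
  cert-file: /etc/etcd/pki/peer.crt
  key-file: /etc/etcd/pki/peer.key
  client-cert-auth: true                      # peers must present a valid cert
  trusted-ca-file: /etc/etcd/pki/etcd-ca.crt
```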



Block Inbound Access to Cluster

Ideally, do not expose your API server on the Internet. Enable “kubectl” access only for select, highly privileged administrators.

Configure firewall rules to block all inbound control connections to your VPC / Datacenter. In an upcoming blog, I will describe and showcase how you can implement this quickly to protect yourself against external threats.



Kubernetes is an extremely flexible and powerful framework that helps automate the deployment, scaling, and management of modern containerized applications.

Prioritize the “hardening” of your Kubernetes system and employ security best practices to protect yourself against misuse and abuse.

Employ a “Secure by Default” philosophy that ensures that “security best practices and critical security controls” are implemented right from the birth of the Kubernetes cluster.

Implementing system hardening manually is painful, expensive, error prone and leaves a massive hole from a security and governance PoV. Consider using a best-of-breed platform such as “Rafay” to automate and streamline this task.


