As the usage of Kubernetes grows, its complexities and risks also grow. In one of our earlier blog posts, Multi-cluster Kubernetes Management and Access, we spoke about the challenges of managing access to multiple Kubernetes clusters. We discussed how managing access can be a nightmare, especially when you have tens, hundreds, or even thousands of clusters, not to mention the number of people working in them! In such situations, deploying YAML files to manually configure roles for each person or service is a daunting task. You need a robust mechanism in place that will help you manage and secure access to your Kubernetes clusters.
In this blog post, we’ll first do a hands-on demo showing you how to leverage open-source tools available to develop your own access management mechanism. Then, we’ll discuss the potential caveats with our solution, and talk about some considerations when scaling access control to multiple clusters.
Security in Kubernetes
Whether you are accessing a local system or leveraging a cloud identity service, the processes of authentication and authorization are common across the cloud native landscape. Authentication is the process of verifying the identity of the person trying to access the server. Authorization is the process of verifying that this person has been granted access to the resources they are requesting.
Before we deep dive into our DIY implementation, let’s take a look at the options Kubernetes provides out-of-the-box when it comes to authentication & authorization.
Kubernetes distinguishes between two types of accounts: user accounts and service accounts. User accounts are for humans. Service accounts are for processes.
User accounts are managed by a cluster-independent service by means of private keys, a user key store, or even (unfortunately, still) a file with a list of usernames and passwords. Kubernetes doesn't have any objects that represent user accounts. If a user knocks on a cluster's door with a valid signed certificate, Kubernetes will allow them in and check against its own RBAC policies to provide or restrict access to the appropriate resources.
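To illustrate, once a certificate user is authenticated, what they can do is governed entirely by RBAC objects. A minimal sketch, assuming a hypothetical user "jane" whose name matches the CN of her client certificate (names and namespace are made up for this example):

```yaml
# Hypothetical RBAC granting the certificate user "jane" read access to pods.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: jane-pod-reader
subjects:
- kind: User
  name: jane          # must match the CN of the client certificate
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Note that the `User` subject is just a string to Kubernetes; the binding takes effect for whoever authenticates under that name.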
Service accounts are managed via the Kubernetes API. These accounts are bound to namespaces and can be created and modified by the API server itself, or by other accounts making API calls to it. Service account credentials are usually stored as secrets mounted into pods.
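For example, a service account can be created declaratively and then referenced from a pod spec; the names below are illustrative:

```yaml
# Illustrative service account, managed by the Kubernetes API.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ci-deployer
  namespace: build
---
# A pod that runs as this service account; its token is mounted automatically.
apiVersion: v1
kind: Pod
metadata:
  name: deploy-job
  namespace: build
spec:
  serviceAccountName: ci-deployer
  containers:
  - name: main
    image: alpine:3.19
    command: ["sleep", "3600"]
```

Any process inside the pod can then use the mounted token to call the API server as `system:serviceaccount:build:ci-deployer`.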
Along with the different types of users, Kubernetes also implements a variety of authentication strategies. All HTTP requests made to the API server must be authenticated using bearer tokens, client certificates, or an authenticating proxy.
Below are a few authentication methods that Kubernetes supports out of the box:
- X509 Client Certificates: allows you to pass certificates signed by a valid certificate authority; the API server is started with the --client-ca-file flag pointing to the CA bundle.
- Static Token File: you can place static tokens in a file and pass it to the API server with the --token-auth-file flag.
- Bearer Token: the API server expects an Authorization header with a value of Bearer <token>.
- OpenID Connect Tokens: OIDC is an extension of OAuth2 with an additional field called ID Token. In this case, the authenticator uses an ID token and not an access token.
- Authentication Proxy: allows the API server to identify users from request header values, e.g., X-Remote-User, configured via the --requestheader-username-headers flag.
These are just a few of the ways you can put a custom authentication mechanism in place. Refer to the Kubernetes documentation for the complete list of supported authentication strategies.
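As a quick illustration of the bearer-token strategy above, this is the shape of the header the API server expects; the token value and the API server address are placeholders, not real credentials:

```shell
# Shape of the header the API server expects for bearer-token auth.
TOKEN="example-token"   # placeholder; a real token comes from your identity provider
AUTH_HEADER="Authorization: Bearer ${TOKEN}"
echo "${AUTH_HEADER}"
# Against a real cluster you would send it like this (not run here):
#   curl --cacert ca.pem -H "${AUTH_HEADER}" https://<api-server>:6443/api/v1/namespaces
```

If the token is valid, the API server resolves it to a user identity and then applies RBAC as usual.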
DIY Access Management Using Dex and Kubelogin
To demonstrate some Kubernetes authentication strategies in action, we’ll go ahead and implement a custom access management tool. In this demo, we’ll mainly use two open-source projects: Dex and Kubelogin.
Dex: Dex is an identity service that uses OpenID Connect to handle authentication for applications. Acting as a gateway to other identity providers using connectors, Dex can defer authentication to SAML providers, LDAP servers, and well-known identity providers such as Active Directory, Google, and GitHub. A given client just needs to have the authentication logic in place to talk to Dex. Dex talks to all the service providers and handles everything else.
Kubelogin: Kubelogin is a kubectl plugin based on Kubernetes OpenID Connect authentication. When you run kubectl, it launches a browser, allowing the user to log in using any of the identity service providers. It then gets a token from the provider, which it passes on to the Kubernetes API server, which in turn grants the user access.
How it works
A user would fire up the terminal and run a normal kubectl command. This will trigger Kubelogin, which will open the browser. Kubelogin will be configured to use Dex along with GitHub (our choice for this demo), so the page will show the Dex portal with GitHub as an option. The user will then choose one of the ways to authenticate, provide their credentials, and log in. Internally, Dex will communicate with GitHub and get the ID token, which will be passed to Kubelogin. Kubelogin will return this token to kubectl, which will use it to authenticate the user with the Kubernetes API server.
Using kubelogin along with Dex. Image courtesy: Kubelogin/GitHub
Before we start, we need to create an OAuth2 application on GitHub. This will act as the identity provider. If you already have an identity service in place like LDAP, Google or any other service, then you can skip this step.
1. Log in to your GitHub account and navigate to Settings by clicking on your profile picture.
2. On the Settings page, scroll down and click on Developer Settings.
3. Under OAuth Apps, click on New OAuth App and provide the following details:
   - Application Name: name of the application
   - Homepage URL: https://dex.example.com:32000
   - Application description: a description for the app (optional)
   - Authorization Callback URL: https://dex.example.com:32000/callback
It should look something like this:
4. Register the application.
Note down the Client ID and the Client Secret from the next screen. These are required to configure Dex.
Using kubectl and krew, install the Kubelogin plugin on your local machine:
kubectl krew install oidc-login
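After installation, kubectl needs to be told to use the plugin for a given user entry in your kubeconfig. A sketch, assuming the Dex issuer from this demo; the client ID and secret are the placeholders from the GitHub OAuth app:

```yaml
# Kubeconfig user entry wiring kubectl to the oidc-login plugin.
users:
- name: oidc
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: kubectl
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://dex.example.com:32000
      - --oidc-client-id=YOUR_CLIENT_ID
      - --oidc-client-secret=YOUR_CLIENT_SECRET
```

Any context that references the `oidc` user will now trigger the browser-based login flow on first use.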
Configuring API Server
Since we will be using an identity service provider, we need to configure the Kubernetes API server with the OIDC parameters. Pass flags like the following to the API server:
--oidc-issuer-url=https://dex.example.com:32000 \
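The issuer URL is only one of the OIDC flags. A fuller set passed to kube-apiserver typically looks like this; the client ID, file path, and claim names are placeholders you should adapt to your setup:

```
--oidc-issuer-url=https://dex.example.com:32000 \
--oidc-client-id=YOUR_CLIENT_ID \
--oidc-ca-file=/etc/kubernetes/ssl/dex-ca.pem \
--oidc-username-claim=email \
--oidc-groups-claim=groups
```

The --oidc-ca-file flag is what lets the API server trust the certificate Dex presents, which is why we generate and distribute certificates in the next step.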
Before we deploy Dex to Kubernetes, we need to configure SSL for Dex and ensure that the CA certificate is available to the API Server.
Use the gencert.sh script to create the SSL certificates. With the help of OpenSSL, the script generates certificates under the ssl directory. Note that by default, these are generated for dex.example.com, so you'll likely want to change it to your own domain.
These files need to be copied to a location where the API server can read them. Then, update the API server configuration so that the --oidc-ca-file flag points to the copied CA certificate.
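As a sketch of that copy step, the snippet below uses a scratch directory so it can run anywhere; on a real node the destination would be a host path readable by the kube-apiserver (e.g. somewhere under /etc/kubernetes), and the source would be the ssl directory produced by gencert.sh. All paths here are assumptions:

```shell
# Stage the CA generated by gencert.sh where the API server can read it.
# A scratch directory stands in for the real host path.
DEST="/tmp/k8s-ssl-demo"
mkdir -p "${DEST}"
printf 'fake-ca-pem\n' > ca.pem        # stand-in for ssl/ca.pem from gencert.sh
cp ca.pem "${DEST}/dex-ca.pem"
ls "${DEST}"
```

After the copy, the API server's --oidc-ca-file flag would point at the staged file.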
Since we are using GitHub, we need to configure the connector accordingly. If you are using LDAP or any other identity provider, adjust the connector configuration to match. You can refer to the Dex documentation on connectors.
Part of our dex.yaml config file will look like this:
- type: github
  id: github
  config:
    clientID: YOUR_CLIENT_ID
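For reference, a complete GitHub connector entry in dex.yaml generally looks like the following. The client ID and secret come from the GitHub OAuth app created earlier, and the redirect URI must match the Authorization Callback URL you registered:

```yaml
# GitHub connector for Dex; values are placeholders from the OAuth app above.
connectors:
- type: github
  id: github
  name: GitHub
  config:
    clientID: YOUR_CLIENT_ID
    clientSecret: YOUR_CLIENT_SECRET
    redirectURI: https://dex.example.com:32000/callback
```

In practice you would inject the secret via an environment variable or Kubernetes secret rather than committing it to the file.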
Go ahead and deploy the Dex server:
kubectl create -f dex.yaml
At this point we have our GitHub app ready, Dex server deployed, and the API server configured to query the Dex server.
Testing and running our setup
Now that we have configured all the pieces together, let us go ahead and test the setup! Run the following command:
kubectl oidc-login setup \
  --oidc-issuer-url=https://dex.example.com:32000 \
  --oidc-client-id=YOUR_CLIENT_ID \
  --oidc-client-secret=YOUR_CLIENT_SECRET
This will launch the browser with the Dex portal. Choose GitHub and login. If it works, it means we have configured this correctly!
Whenever a user runs a kubectl command, e.g., kubectl get pods, for the first time, Kubelogin will open the browser and ask the user to log in. Once the user logs in using GitHub, the token is cached by Kubelogin for all subsequent requests until it expires.
GitHub’s OAuth2 Login Page
Here’s how it will look when you use Google as an Identity Provider.
Issues with the DIY approach
In a typical production setup, a one-off Kubernetes cluster is rare. Enterprises today deal with multiple clusters spread across multiple locations and used by many users, and hence demand a concrete security solution. A DIY approach can create more trouble than it prevents in large multi-cluster environments.
Not Scale Friendly
The foremost issue with this approach is that it is not scale-friendly. You need to configure all of these tools on every cluster you have. Doing this for a single cluster was fun, but imagine doing it for tens of clusters... it would be a nightmare! Automating it is an option, but that only adds an extra process to manage to a system that is already complex by nature, and the problem compounds as the number of clusters grows.
Room for Error
A DIY access approach relies on manual configuration and YAML-writing. Any deviation in the configuration of this setup on any cluster can have detrimental effects. Best case, some legitimate users won't be able to log in or access specific resources. Worst case, attackers get into your clusters and start mining cryptocurrencies.
Managing SSL certificates
Managing SSL certificates is painful. The number of certificates in the DIY approach presented here is directly linked to the number of clusters you have. The more clusters you have, the more certificates you'll need to issue. And once the system scales, you will need tooling in place to automatically generate and renew those certificates.
A DIY approach to managing Kubernetes access is a great option for a single-cluster setup or for a proof of concept. However, when it comes to deploying it across multiple clusters in an enterprise-grade environment, things get tricky. Enterprises today demand solutions that are robust and have industry-leading features like secure authentication and authorization, audit trails, IdP integration, and zero-trust principles baked in. While there are a handful of tools available on the market, not all of them offer everything.
Rafay is one such modern platform that comes with all of these features. Stay tuned for our upcoming blog post where we talk about Rafay’s zero trust offering and how it can help overcome the challenges of managing multiple Kubernetes clusters.