As organizations embrace microservices-based architectures, they run into challenges around authentication and authorization, traffic routing between services and versions, load balancing, and encryption. Deploying a dedicated infrastructure layer such as a service mesh in Kubernetes reduces the complexity of addressing these challenges in a microservices architecture. A good example is the recommendation published by CISA as part of the Kubernetes Security Hardening Guide to encrypt service-to-service communication. Building this capability into the code of each and every service becomes nearly impossible at scale; a service mesh offers a way to do it transparently, without any changes to the applications.
What is a Service Mesh?
A service mesh is typically a sidecar arrangement of proxies through which all traffic, both east-west and north-south, can be observed and controlled. Areas where a service mesh excels include network visibility and observability, security, traffic routing and load balancing, deployment strategies, and fault tolerance and testing.
How does a Service Mesh work?
A service mesh broadly comprises two components: a data plane and a control plane. The data plane is a set of proxies attached to containers, pods, or services that act as communication portals in and out of a pod. The control plane manages and configures the proxies to route traffic.
In the image above, if Service A wants to communicate with Service B, the request is directed to Service A’s sidecar proxy, which communicates with Service B’s sidecar proxy, which in turn delivers the message to Service B. The traffic between these sidecars is configured and managed by the control plane, meaning you have complete control over it.
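As a concrete sketch of how the data plane comes into being: in Istio, for example, sidecar proxies are typically injected automatically into pods by labeling a namespace (the namespace name below is illustrative, not from this post):

```yaml
# Enable automatic Istio sidecar injection for every pod in this namespace.
# "demo" is an illustrative namespace name.
apiVersion: v1
kind: Namespace
metadata:
  name: demo
  labels:
    istio-injection: enabled
```

Once the label is in place, any pod scheduled into the namespace gets an Envoy sidecar added alongside its application containers, with no change to the application itself.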
Using this topology, you can achieve a host of capabilities including, but not limited to, communication security such as mTLS authentication and authorization, traffic management such as load balancing and failovers, and deployment strategy controls.
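To illustrate the traffic-management side, here is a minimal sketch of an Istio weighted traffic split between two versions of a service, a common building block for canary deployments. All names (`reviews`, `v1`, `v2`) and the 90/10 weights are hypothetical:

```yaml
# Hypothetical canary split: 90% of traffic to v1, 10% to v2.
# The DestinationRule defines the subsets the VirtualService routes to.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
```

The sidecar proxies enforce this split transparently; shifting the weights is a config change, not an application change.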
Why Service Mesh Can be Hard to Handle
A DIY approach is considered by many when venturing into service mesh, but it comes with a few hurdles. There are many open source service mesh options to choose from; the most popular are Istio and Linkerd. Some of these hurdles are:
- Keeping up with enterprise-level nuances and changes takes away developers’ time and focus, all the while posing deployment and enterprise QoS risks.
- Installing and maintaining a DIY service mesh requires a significant time investment and a high level of skill that is hard to find, harder to retain, and expensive.
While initially free, the growing time, complexity, and resource cost of an open source service mesh can become prohibitively expensive. This “cost of free” drives organizations down the path of excessive focus on a non-core competency.
Make Service Mesh Easy with Rafay’s Managed Service Mesh Manager
Rafay’s Service Mesh Manager is built on top of Istio, a comprehensive CNCF service mesh. Rafay’s offering dramatically reduces the installation and operational burden, making it a checkbox-type experience. It also enables a policy model that ensures configurations are controlled based on role (e.g., platform teams can mandate the use of mTLS, while application owners can configure traffic management for their applications). Visibility into traffic flows is centralized and also controlled by role (e.g., application owners can only see traffic flows for the resources they own).
Rafay’s Service Mesh Manager can be managed via an intuitive UI, API, RCTL, or a GitOps-based approach, and allows platform teams to easily implement Istio without sacrificing governance, access control, or QoS.
With Rafay, Istio is managed for you, meaning there is nothing to install, patch, integrate and manage over time other than the policies you want to enforce.
Integration and Standardization of Service Mesh Policies
The primary driver for any organization to evaluate or adopt service mesh is security. Ensuring compliance with standards like SOC 2 requires setting up a service mesh, defining policies, enabling them on each cluster, monitoring for drift, and course-correcting appropriately. This places a massive burden on platform teams, requiring continuing investment in building and maintaining bespoke DIY tooling.
Rafay Service Mesh Manager removes the complexity and makes it extremely simple for platform teams to implement Istio. Platform teams can utilize blueprints to standardize deployments across a fleet of clusters. Enabling strict mTLS communication for a namespace is as simple as selecting the “enable-strict-mtls” policy, a turnkey policy in Service Mesh Manager, for that namespace (shown below):
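For reference, the Istio-native resource that a strict-mTLS policy like this ultimately applies to a namespace is a `PeerAuthentication` object. A minimal sketch, with an illustrative namespace name:

```yaml
# Require mTLS for all workload-to-workload traffic in this namespace.
# "my-namespace" is an illustrative name.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: my-namespace
spec:
  mtls:
    mode: STRICT
```

With `STRICT` mode, the sidecars in the namespace reject any plaintext traffic, so encryption in transit is enforced without touching application code.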
Transparency is key to good governance. Surfacing information to the right audience at the right time can be the difference between a 5-minute fault recovery and a day-long outage. Rafay Service Mesh Manager offers out-of-the-box observability, with dashboards to monitor service-to-service communication in real time and retrospectively (shown below). Observability is also access controlled based on roles with Rafay’s Zero-Trust Access Service, so platform teams can unblock and enable a self-service model for developers.
Service mesh is an essential part of securing and operating microservices-based applications in a Kubernetes environment. Using Rafay to manage Istio empowers platform teams to do it at scale, with proper access controls aligned to your organization and with real-time and historical visibility, all without the installation and integration headaches.
Ready to find out why so many enterprises and platform teams have partnered with Rafay to centralize Kubernetes network policy management? Sign up for a free trial.