Enterprises around the globe are modernizing their IT architecture and applications for many well-known business reasons, including boosting productivity, increasing agility, and reducing capital costs. By centralizing the delivery of Kubernetes (K8s) services, IT is able to standardize workflows, enable self-service, and optimize application delivery for multiple teams across the enterprise, such as engineering, QA, DevOps, security, and operations.
However, transitioning to a shared services model presents some common challenges: shared services organizations can earn a reputation for being slow and bureaucratic, and governing at scale while standardizing tools and deployment models is difficult.
This blog explores what a shared services platform is and covers three critical requirements to successfully implement shared services for Kubernetes.
What is a Shared Services Platform (SSP)?
A shared services platform (SSP) allows multiple teams to run applications on a shared infrastructure that is managed, secured, and governed by a central platform team. Typically enterprises that reach a certain scale look to share specialized resources, increase efficiencies, and take advantage of economies of scale wherever possible.
Providing standards across common infrastructure and tooling enables organizations to automate workflows and accelerate delivery and access, whether for all employees or those within specific departments. Popular examples of modern enterprise shared services platforms include Salesforce.com, NetSuite, and Microsoft 365.
The goal of an SSP is to provide business efficiency by leveraging a common application or infrastructure that is set up once and subsequently leveraged across a large number of users or departments in the enterprise.
Applying Shared Services to Kubernetes
The concept of an SSP can also be applied to Kubernetes. In this case, the goal of a shared services platform for Kubernetes is to increase deployment and support velocity via self-service capabilities for developers and operations.
The enterprise’s Platform Team maintains centralized control of, for example, access control, deployment approvals, networking policies, compliance requirements, and cost management.
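Controls like these are typically expressed through Kubernetes-native primitives that a platform team defines once and stamps out per team. As a minimal sketch (the namespace and group names here are hypothetical, not Rafay-specific), a RoleBinding granting a development team edit rights only within its own namespace might look like:

```yaml
# Hypothetical example: scope the "team-alpha" developer group to its own
# namespace using Kubernetes RBAC. All names are illustrative only.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-alpha-editors
  namespace: team-alpha           # isolation boundary managed by the platform team
subjects:
  - kind: Group
    name: team-alpha-developers   # group as asserted by the org's identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                      # built-in role: manage most namespaced resources
  apiGroup: rbac.authorization.k8s.io
```

A central team can template a binding like this per namespace, so downstream teams get self-service access without ever holding cluster-wide privileges.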
As described above, another benefit of an SSP for K8s is that it is set up and configured once by a central organization, providing a faster, much-simplified onboarding process for all downstream users such as developers and operations professionals.
Critical Capabilities for Stakeholders
Another key aspect of shared services is that it must provide a flexible and comprehensive set of capabilities for two important groups within an organization: platform teams require governance and standardization, while developers, operations, and SRE professionals require automation and self-service. Below is a detailed matrix of these capabilities.
| | Overall Goal | Cluster Lifecycle Mgmt | Secure Access | GitOps Pipelines | Visibility & Monitoring | Policy Mgmt | Backup & Restore |
|---|---|---|---|---|---|---|---|
| Platform Teams | • Set up & pre-configure shared services • Govern & isolate usage • Enable self-service throughout the org | Create & maintain pre-approved cluster, app, and add-on configs for reuse | Enable downstream configurable access control to clusters and workloads | Create multiple pipelines with defined workflows & approvals for departments | Enable multi-tier visibility & monitoring | Define pre-approved cluster policies for reuse | Create & maintain pre-approved backup and recovery policies to be used by the org |
| Dev, QA, Ops, DevSec (or any approved org) | • Develop, test, deploy, and support quickly • Self-service infrastructure components • Access and use with isolation | Self-service cluster acquisition and usage from a pre-approved list | Connect users and groups to appropriate clusters and workloads | Utilize one or more pre-approved pipelines for cluster and app deployment | Log in and immediately view the health of clusters and apps based on role, group, etc. | Choose the appropriate policy for clusters | Select & assign the appropriate backup & restore policy per cluster and app |
Top 3 Requirements for a Kubernetes Shared Service Platform
It is crucial to remove the complexity of Kubernetes with a single, easy-to-use self-service platform with built-in automation, security, visibility, and governance. This can be achieved by meeting three key requirements for a Kubernetes SSP.
1. Unified Platform
A unified platform of services enables centralized control of cluster and application lifecycle, the ability to select any best-in-class K8s distribution, and automated scaling of clusters.
2. Integrated Services
An SSP requires services to be aware of and integrated with each other, such as policy management, security, and monitoring. For example, a centrally controlled policy must be easily applied (and enforced) via cluster and application templates.
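As a sketch of what such a centrally defined policy might look like (this is a generic Kubernetes NetworkPolicy, not Rafay-specific configuration; the namespace name is hypothetical), a platform team could bake a default-deny ingress rule into an approved cluster template:

```yaml
# Hypothetical example: default-deny ingress policy included in a pre-approved
# cluster template. Pods in the namespace then accept only traffic that later
# policies explicitly allow.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-alpha      # applied per tenant namespace by the template
spec:
  podSelector: {}            # empty selector: applies to every pod in the namespace
  policyTypes:
    - Ingress                # no ingress rules listed, so all inbound traffic is denied
```

Because the policy ships inside the template, enforcement does not depend on each team remembering to apply it, which is the integration this requirement describes.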
3. Cloud-based (SaaS)
A cloud-based approach accelerates adoption of Kubernetes shared services, with guaranteed delivery and SLAs at the lowest possible TCO.
How Rafay Delivers a Shared Services Platform
As enterprises look to accelerate the pace of innovation, building a platform that increases developer productivity, centralizes governance and policy management, and reduces operational overhead can be challenging. Built in the cloud, Rafay’s Kubernetes Operations Platform provides deep integrations with Kubernetes distributions such as those from Amazon (Amazon EKS), Microsoft (AKS), and Red Hat (OpenShift), and delivers operational excellence, empowering platform teams to manage, secure, and govern at scale within hours, not months or years.
Many of our customers’ platform teams have built, or are building, an SSP for their organization to manage the fulfillment and operations of all Kubernetes-related services, while enabling self-service for the rest of their organization, in particular developers and operations.
Rafay allows you to focus on your applications, not on managing and operating Kubernetes.
Key benefits include:
- Standardization of cluster and application configuration
- Centralized & automated cluster and application provisioning
- Unified interface for streamlined cluster lifecycle management
- Efficient and repeatable DevOps workflows
- High level of control, security, policy compliance, and auditability
- Developer, QA, and Ops self-service enablement
- Better collaboration between developers, QA, DevOps, and SRE/Operations
- Faster issue triage and lower support costs (thus lower MTTR)
- Much lower TCO for K8s management compared to multiple, siloed services
To learn more about how Rafay and AWS can help you deliver Kubernetes as a Shared Service for your enterprise, watch our recent joint webinar Providing Shared Services for the Enterprise.
Ready to find out why so many enterprises and platform teams have partnered with Rafay for shared services? Sign up for a free trial today.