The Kubernetes Current Blog

When Deploying Apps Closer To Endpoints, The Obvious Isn’t So Obvious


This blog post originally appeared in the Forbes Technology Council.

Creating a network of edges is a much harder problem than it appears. I know because I talk with enterprise and service providers every day about this challenge.

I’ve seen numerous press stories recently on edge computing, and nearly all describe the drivers, players, use cases and economics of the edge. But why?

Why should you care about edge computing?

As the CEO of an edge computing platform company, I believe the primary promise of edge computing is that it lets you run your apps closer to end users. That means lower latency and a snappier experience when end users access their applications. It also means that certain interactive experiences that are currently hard to deliver because of strict network latency targets may now become possible to implement. Edge computing may soon make an entirely new generation of consumer and industrial applications practical. And your favorite ride-sharing app, which may seem sluggish when it shows you real-time location information for the car on its way to pick you up, could deliver a much snappier end-user experience.

That brings us to the following question: if you have established that you need an edge, what does it take to build it? Perhaps you’re thinking: “Easy; just stand up a server or two in a number of far-flung data centers.” (More likely, it will be hundreds of distant data centers or colocation facilities.) That’s a start, but it’s far more complicated than that. Below, I’ll discuss some of the obvious and not-so-obvious things you’ll need to do to build out a scalable application delivery platform at the edge.

The Obvious

There are many elements of building out a physical edge that are self-evident to IT or cloud practitioners. Some of these are listed below, along with some topics of consideration for each.

Physical infrastructure: Which hosting companies should you partner with and where? How do these providers bill for your power needs per edge?

Servers: How many servers do you need in each location? What specifications (CPU, memory, disk and interfaces) do you need to plan for? How do you get these servers shipped to various countries? What happens when one of the servers (or sub-components) fails?

Network and bandwidth: What type of network peering relationships are needed globally to ensure connectivity across disparate carrier and telecommunications networks?

Storage: Is the data that is being saved on a local disk secure? Have you considered encryption requirements?

People: Do you have access to personnel for on-site installation, maintenance and troubleshooting services?

Moreover, organizations need a certain level of knowledge about the distant places their edges will be installed. In many cases, it’s not a technology issue as much as it is about process and getting in front of the right providers in the right regions of the world. In my experience and opinion, this — more than the technical expertise — is frequently the most difficult and costly step.

I think we can all agree that executing the above in hundreds or thousands of remote locations is a huge challenge. If you still want to do this, consider the not-so-obvious aspects of building and deploying edges. This is where things get really hard.

The Not-So-Obvious

Since founding our company, my co-founder and I have been grappling with the hard problems of deploying and enabling edge platforms as a service. This has proven to be both challenging and exciting, as making complex systems easy to consume is clearly very hard.

Here are some things to consider:

• How will you securely deploy an application image (e.g., a container) to every edge in your network and run it as needed? How will you do this several times a day when the application developer needs to apply fixes or add enhancements?

• How will you ensure that your application is running in locations close to your end users without running up a massive infrastructure bill? In other words, how do you maximize your application’s global footprint while minimizing your cost?

• How will you remotely manage a large number of compute clusters, assuming each edge is some container-specific environment?

• How will you implement network and resource isolation so that no workload can impact another?

• How will you debug your remote application when (not if) things break?

• How will you make sure everything works in concert, on a global basis?

• How will you hire the personnel with the necessary technical expertise (bandwidth and global peering networks, hosting contracts, supply chain and shipping of servers worldwide, in-region troubleshooting, and so on) to build and maintain all this?

This is just a partial list.
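To make the footprint-versus-cost question above concrete, here is a minimal sketch in Python of one way to frame it: greedily pick the cheapest set of edge sites that still serves every user region within a latency budget. The site names, latencies, and monthly costs are entirely made up for illustration; real planning would use measured latencies and actual provider pricing.

```python
# Greedy set-cover sketch: cover every user region within a latency
# budget while keeping total monthly edge cost low. All figures below
# are hypothetical, not real provider data.

LATENCY_BUDGET_MS = 30

# region -> {edge_site: round-trip latency in ms} (illustrative)
latency = {
    "us-east": {"nyc": 8, "chi": 22, "fra": 90},
    "us-west": {"sjc": 7, "chi": 45, "nyc": 70},
    "europe":  {"fra": 10, "nyc": 85, "chi": 95},
}

# monthly cost per candidate edge site (illustrative)
cost = {"nyc": 1200, "chi": 900, "sjc": 1100, "fra": 1000}

def plan_edges(latency, cost, budget_ms):
    """Return a list of edge sites covering all regions within budget_ms."""
    uncovered = set(latency)
    chosen = []
    while uncovered:
        best = None
        for site, monthly in cost.items():
            # Which still-uncovered regions would this site serve?
            covers = {r for r in uncovered
                      if latency[r].get(site, float("inf")) <= budget_ms}
            if not covers:
                continue
            # Classic greedy ratio: cost per newly covered region.
            ratio = monthly / len(covers)
            if best is None or ratio < best[0]:
                best = (ratio, site, covers)
        if best is None:
            raise ValueError("some regions cannot meet the latency budget")
        _, site, covers = best
        chosen.append(site)
        uncovered -= covers
    return chosen

print(plan_edges(latency, cost, LATENCY_BUDGET_MS))
```

Greedy set cover is only an approximation, but it captures the core tension: every site you drop saves money and costs you coverage somewhere.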

Readers who have attempted this effort know that even seemingly straightforward tasks, such as deploying application images across tens of global locations, can be akin to building a software distribution system from scratch.
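One reason image distribution resembles a software distribution system is that every edge must be able to verify what it actually received. The sketch below shows the core idea behind content-addressed image digests (the `sha256:` digests used by OCI container images): an edge node recomputes the hash of the pulled bytes and refuses to run anything that doesn’t match.

```python
import hashlib

def verify_image(blob: bytes, expected_digest: str) -> bool:
    """Check a pulled image blob against its content-addressed digest.

    expected_digest uses the "sha256:<hex>" form, the same convention
    OCI image digests use. Returns True only if the bytes match.
    """
    algo, _, hexval = expected_digest.partition(":")
    if algo != "sha256":
        raise ValueError("unsupported digest algorithm")
    return hashlib.sha256(blob).hexdigest() == hexval
```

A real pipeline layers signing and provenance checks on top, but digest verification is the floor: without it, a compromised mirror in one region can silently serve different code than every other edge runs.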

And readers in the know are well aware that ensuring cryptographic artifacts (e.g., SSL private keys) are never accessible “in the clear” on a remote server in a distant country is a complex task. It requires developers skilled in security best practices and familiar with key-vaulting methodologies.
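As a hypothetical illustration of that principle, the sketch below loads a TLS private key from an environment variable injected at startup (for example, by an orchestrator that fetched it from a central vault), keeps it only in process memory, and never falls back to a key file on the edge node’s disk. The variable name `EDGE_TLS_KEY_B64` and the helper are invented for this example.

```python
import base64
import os

def load_private_key() -> bytes:
    """Return the TLS private key as bytes held only in memory.

    EDGE_TLS_KEY_B64 is a hypothetical variable an orchestrator would
    inject after pulling the key from a vault; it is base64-encoded so
    the raw key survives environment-variable transport. If the key is
    missing, we fail loudly rather than read a key file from disk.
    """
    encoded = os.environ.get("EDGE_TLS_KEY_B64")
    if encoded is None:
        raise RuntimeError("no key injected; refusing to fall back to a key file")
    return base64.b64decode(encoded)

# Note what is absent: no open(..., "w"), no key path. The decoded key
# lives only in process memory, so a stolen or seized disk in a distant
# facility does not expose it.
```

Production systems go much further (mTLS to the vault, rotation, short-lived credentials), but even this simple pattern avoids the classic failure mode of a private key sitting in plaintext on a remote server.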

Clearly, when it comes to globally scaling applications and edge deployments, the not-so-obvious issues are multi-layered and complex, and may prove unattainable for some organizations. Many can, and will, attempt this with varying degrees of success. But much as organizations eventually shifted away from maintaining their own data center footprints, there may be room to cut out many of these complicated steps. The challenge for innovative organizations is to keep drawing on the benefits of edge computing while minimizing the resources, and steps, necessary to do so.


