The Kubernetes Current Blog

Three Myths About Edge Computing

Forbes Council

Edge computing will fundamentally transform the cloud. In some camps, it is already assumed to be the default technology for addressing growing challenges such as application performance, application distribution, security, data sovereignty compliance and the hyper-personalization of web apps. However, a growing body of myths and inflated expectations swirls around the edge and its place in the computing ecosystem. To help untangle this increasingly complex web, here are a few edge computing myths.

Myth No. 1: The Edge Is Only About IoT

Edge computing and IoT are often mentioned in the same breath. While the two are closely related, use cases for the edge extend far beyond the parsing and processing of data generated by “things.” Here are a few examples:

The edge can help filter and reduce the amount of data that needs to be sent to the cloud or data center for processing. This reduces the overall traffic load on the internet and, for security, shrinks an application’s attack surface. A smaller attack surface makes it harder for hackers and bots to compromise systems or steal data.
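To make the filtering idea concrete, here is a minimal sketch in Python of the pattern described above: an edge node condenses a batch of raw readings into a small summary, forwarding only outliers in full. The function name, threshold and payload fields are illustrative assumptions, not part of any specific edge platform.

```python
# Hypothetical edge-side filter: aggregate raw samples locally and send
# upstream only a compact summary plus anomalous readings.
from statistics import mean

def summarize_at_edge(readings, alert_threshold=90.0):
    """Reduce raw readings to a summary dict; only outliers travel in full."""
    anomalies = [r for r in readings if r >= alert_threshold]
    return {
        "count": len(readings),          # how many samples were seen
        "mean": round(mean(readings), 2),
        "max": max(readings),
        "anomalies": anomalies,          # only these go upstream verbatim
    }

# Four raw samples in, one small dict out to the cloud.
payload = summarize_at_edge([71.2, 68.9, 93.5, 70.1])
```

The cloud side still sees the signal it cares about (the anomaly), while the bulk of the raw data never leaves the edge.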

Hyper-personalization may be a buzzword, but when it comes to the edge, it has real meaning. By generating a dynamic, context-based user profile with preferences, location, time of day, previous interactions, etc., each web experience can be uniquely tailored to the individual. Popular companies such as Netflix, Amazon and Apple already deliver personalized web experiences. In the future, the edge will both enhance the degree of personalization and help bring hyper-personalized experiences to virtually any website.
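As a toy illustration of the profile-driven tailoring described above, the sketch below selects a content variant from a per-request profile (time of day, interests). The profile fields and variant names are made up for illustration; they are not a real personalization API.

```python
# Hypothetical edge-side personalization: pick a content variant from a
# lightweight, per-request user profile. All keys and values are invented.
def choose_banner(profile):
    """Return a banner variant based on time of day and interests."""
    if profile.get("hour", 12) >= 18:          # evening visitor
        return "evening-deals"
    if "sports" in profile.get("interests", []):
        return "live-scores"
    return "default"

print(choose_banner({"hour": 20, "interests": []}))         # evening-deals
print(choose_banner({"hour": 9, "interests": ["sports"]}))  # live-scores
```

Because the decision runs at the edge, the tailored response is chosen close to the user rather than after a round trip to a central server.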

For security, flexibility and cost-effectiveness, organizations are beginning to take advantage of multi-cloud strategies, with software assets being distributed over multiple cloud providers or multiple cloud regions from the same provider. But if you are the app developer, how do you ensure that users get to the right app environment to process their workloads? The edge is the perfect place to authenticate and validate end-user identities and enforce API routing policies, ensuring legitimate end-user traffic gets to its proper cloud environment.
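The authenticate-then-route pattern described above can be sketched as follows. The tokens, regions and backend URLs are hypothetical placeholders, and a real edge node would validate cryptographic tokens rather than look them up in a dict.

```python
# Hypothetical sketch: an edge node validates the caller's credential, then
# applies a routing policy to forward the request to the right cloud
# environment. Tokens and backend URLs below are invented for illustration.
VALID_TOKENS = {"token-eu-user": "eu", "token-us-user": "us"}
BACKENDS = {
    "eu": "https://eu.cloud.example.com",
    "us": "https://us.cloud.example.com",
}

def route_request(token):
    """Return (status, backend_url); reject unknown callers at the edge."""
    region = VALID_TOKENS.get(token)
    if region is None:
        return (401, None)            # illegitimate traffic never reaches the cloud
    return (200, BACKENDS[region])    # legitimate traffic goes to its proper environment
```

Rejecting bad traffic at the edge is what keeps it off the cloud backends entirely, which is the security win the paragraph above describes.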

These are just a few examples of situations handled at the edge that have nothing to do with IoT.

Myth No. 2: The Edge Can Be Built With Public Cloud Regions

Some proponents advocate building apps across all possible public cloud regions, believing that this constitutes an edge network. But edge networks are not a repackaging of existing infrastructure, and this approach leaves a lot to be desired.

In a recent blog, I wrote about how building an app across multiple regions and cloud providers is a massive, complex effort. Latency is a huge issue as end-user requests are forwarded across regions for processing. In nearly all cases, the architecture will be unique to the applications involved and will require expertise in the design and operations of cloud networking. There are just a few companies that can execute on this challenge today.

If, by chance, we are able to work around public cloud latencies, we have to deal with the fact that high availability across public clouds does not exist. The DevOps team would have to build this high-availability logic around the application. Furthermore, pipelines that deliver application artifacts (code, secrets, config, etc.) to all regions in parallel would need to be constructed and managed. And there are a number of complex operational requirements to manage multiple clouds as a single platform.
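To give a feel for the pipeline problem above, here is a minimal sketch (an assumption, not a real CD tool) of pushing the same artifact to every region in parallel. Region names and the deploy step are placeholders; a real pipeline would upload code, secrets and config, handle partial failures, and roll back.

```python
# Hypothetical multi-region delivery sketch: the same artifact must land in
# all regions in parallel, which is what a real pipeline would orchestrate.
from concurrent.futures import ThreadPoolExecutor

REGIONS = ["us-east", "eu-west", "ap-south"]  # invented region names

def deploy(region, artifact):
    # Stand-in for uploading code, secrets and config to one region.
    return f"{artifact} deployed to {region}"

def deploy_everywhere(artifact):
    """Fan the same artifact out to every region concurrently."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda r: deploy(r, artifact), REGIONS))

results = deploy_everywhere("app-v2")
```

Even this toy version hints at the operational surface area: every region is another place a push can fail, drift, or need coordinated rollback.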

Fundamentally, attempting to use and represent public cloud regions as an edge network will fail.

Myth No. 3: Application Developers Will Move Entire Applications To The Edge

For core applications, the edge will serve an adjunct role to the public cloud or data center. In fact, 80-90% of the microservices that make up typical applications are not latency-sensitive. For example, the database that maintains end-user accounts does not need to be deployed at the edge, nor do the machine learning (ML) clusters running complex analysis on ingested data.

The edge is the best fit for latency-sensitive microservices, and those microservices can and should run there. Identity enforcement/validation, for example, is a latency-sensitive task that all SaaS applications could run at the edge for a superior end-user experience. The ability to run latency-sensitive microservices may spawn a new generation of apps that will be developed with end-user proximity/location as a crucial design element.
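As an example of the kind of lightweight, latency-sensitive check that fits at the edge, the sketch below verifies a request's HMAC signature locally instead of making a round trip to a central auth service. The shared key is invented for illustration; in practice keys would come from a secret store and tokens would typically be something like signed JWTs.

```python
# Illustrative edge-side identity check: verify an HMAC signature locally,
# avoiding a latency-adding round trip to a central service. Key is made up.
import hashlib
import hmac

SHARED_KEY = b"demo-edge-key"  # hypothetical; real keys come from a secret store

def sign(body: bytes) -> str:
    """Compute the hex HMAC-SHA256 signature of a request body."""
    return hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()

def verify_at_edge(body: bytes, signature: str) -> bool:
    """Constant-time comparison, done at the edge node itself."""
    return hmac.compare_digest(sign(body), signature)
```

The check is pure CPU work on data already in hand, which is exactly why it can run at the edge while the user database stays in the cloud.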

Moving entire applications to the edge is overkill and defeats the benefits of enabling and using the edge.

The Myths Are Fake But The Edge Is Real

With the introduction of new ideas and technologies, there is a sense of excitement and possibilities. But at the same time, there can be confusion introduced by well-meaning parties as we collectively struggle to integrate the new with the old. We hope that by debunking some of the more common myths, we have shed some light on the current state of edge computing and have provided a clearer understanding of how you can use edge computing in your own environment.

This blog posting was originally published by the Forbes Technology Council.

