When It Comes To The Edge, We Need To Think Bigger

Yes, Edge computing is a massive opportunity by any measure ($6.7B by 2022 according to this Markets & Markets report), but maybe we need to think bigger.

Many in the industry are applying their energies to use cases such as in-region data processing, fast rendering for multi-player games, and AR/VR headsets, all of which are extremely well suited to the infrastructure edge. Interestingly, our early engagements with companies building latency-sensitive applications have highlighted a number of use cases that aren’t in the general industry discourse just yet.

I will write more about these use cases in a later blog, but here’s a quick tease:

  • The edge is the ideal place to deliver personalized (dynamic) content to end users, as an adjunct to CDN platforms that are finely tuned to deliver static content.
  • The edge is the ideal place to carry out API-level routing decisions for applications that have a wide footprint across IaaS environments.
  • The edge is the ideal place to enforce identity and security policies for SaaS applications accessed by users worldwide.

I believe that over time, Edge computing will grow well beyond the near-term needs of the IoT, gaming and cellular (5G) industries. The Edge’s proximity to endpoints will drive a massive shift in how all applications are designed and deployed.

Consider the following: When you visit your favorite retail site online, how much time do you spend browsing vs buying?

What if all your browsing activity could be handled at the Edge? The core could publish the latest inventory to the Edge as needed, and all shopping cart logic could be maintained at the Edge. User identity validation could also be carried out at the Edge in a distributed fashion, further reducing the compute expended in the core.
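
To make this concrete, here’s a minimal sketch (in Go, with hypothetical endpoints and seed data) of what such an edge service might look like: browsing and cart updates are handled entirely from edge-local state, and only checkout is forwarded to the core.

```go
package main

import (
	"encoding/json"
	"net/http"
	"sync"
)

// Edge-local state. In a real deployment the inventory snapshot would be
// refreshed by a push or periodic pull from the core.
var (
	mu        sync.RWMutex
	inventory = map[string]int{"sku-123": 42} // hypothetical seed data published by the core
	carts     = map[string][]string{}         // per-user cart state kept at the Edge
)

func main() {
	// Browsing: served entirely from the Edge's local inventory snapshot.
	http.HandleFunc("/inventory", func(w http.ResponseWriter, r *http.Request) {
		mu.RLock()
		defer mu.RUnlock()
		json.NewEncoder(w).Encode(inventory)
	})

	// Cart updates: mutate Edge-local state; nothing is sent to the core yet.
	http.HandleFunc("/cart/add", func(w http.ResponseWriter, r *http.Request) {
		user, sku := r.URL.Query().Get("user"), r.URL.Query().Get("sku")
		mu.Lock()
		carts[user] = append(carts[user], sku)
		mu.Unlock()
		w.WriteHeader(http.StatusNoContent)
	})

	// Checkout: the only step that would be forwarded to the core (stubbed here).
	http.HandleFunc("/checkout", func(w http.ResponseWriter, r *http.Request) {
		// forwardToCore(r) // hypothetical call into the core region
		w.Write([]byte("order forwarded to core\n"))
	})

	http.ListenAndServe(":8080", nil)
}
```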

Why would we consider doing this?

First, application performance will improve significantly. Not only will static objects be served from the Edge and the Content Delivery Network (CDN) of choice, but responses to inventory and authentication APIs will also be delivered faster than ever.

Second, we can solve for a variety of data sovereignty requirements by maintaining user information in-region. The core never needs to receive any personally identifiable information (PII). The Edge can maintain all of that info (if so authorized) and only forward non-reversible hashes to the core.
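
As an illustration, here’s a minimal sketch of that hashing step, assuming SHA-256 and a hypothetical per-region salt: the raw value never leaves the Edge, and only the derived token travels to the core.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// pseudonymize returns a non-reversible token for a piece of PII so that only
// the hash, never the raw value, leaves the region. The per-region salt
// (a hypothetical value here) makes simple dictionary lookups harder.
func pseudonymize(pii, regionSalt string) string {
	sum := sha256.Sum256([]byte(regionSalt + ":" + pii))
	return hex.EncodeToString(sum[:])
}

func main() {
	// The raw email stays at the Edge; the core only ever sees the token.
	token := pseudonymize("alice@example.com", "eu-west-salt")
	fmt.Println("token forwarded to core:", token)
}
```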

Third, with the Edge handling all traffic ingress and only sending legitimate transactions to the core, the application security model can change in a big way. With reads happening at the Edge and writes happening in the core, we may finally have an opportunity to build an application security framework that works.
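
One way to picture that split: a minimal Go sketch of an Edge ingress that serves reads from a local replica and forwards only writes to the core (the upstream addresses here are hypothetical).

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Hypothetical upstreams: a read replica at the Edge and the core origin.
	edge, _ := url.Parse("http://edge-cache.local:9000")
	core, _ := url.Parse("https://core.example.com")

	edgeProxy := httputil.NewSingleHostReverseProxy(edge)
	coreProxy := httputil.NewSingleHostReverseProxy(core)

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		switch r.Method {
		case http.MethodGet, http.MethodHead:
			// Reads are satisfied at the Edge.
			edgeProxy.ServeHTTP(w, r)
		default:
			// Writes (POST/PUT/DELETE) are the only traffic sent on to the core.
			coreProxy.ServeHTTP(w, r)
		}
	})

	http.ListenAndServe(":8080", nil)
}
```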

There are other good reasons to adopt such a model, and I’m convinced that all applications will, over time, morph to adopt the Edge. I encourage the networking community to think bigger about the possibilities of intelligently leveraging both the Cloud and the Edge. The resulting market opportunity of enabling all applications (not just IoT and 5G) to leverage the Edge could be 10x larger than what analysts are now forecasting.

The world is changing, and it’s changing fast. Again. And Team Rafay is excited to be building critical components that will enable these changes. If you’d like to keep track of what we’re up to, please do sign up for periodic updates.

Tags:
Blog, Cloud, Edge Computing, Programmable Edge, Rafay, Rafay Systems
