The Kubernetes Current Blog

Cognitive Cameras on the Edge

This blog was first published on LinkedIn.

Tracking widgets along an assembly line using cameras, or tracking virtually anything using video analysis, can leverage the edge to great effect, writes John Dilley, Chief Architect at Rafay Systems, in his latest blog.

Imagine a manufacturing line building widgets, where you’re using video analysis to track the production process. You may need to view a widget from multiple angles, or as it moves along through the assembly line. How would you track a widget across cameras in such an environment?

Recently, I read an article about Cognitive Cameras [1] by Stacey Higginbotham that described some of the possibilities and challenges facing computer vision, and suggested that faster frame-analysis rates were important. The author provides a motivating example and raises an interesting distributed computing challenge:

Speeding up the frame rate at which computers can process images is just the first step. The next is to build software that can track an object between cameras in a network. For example, finding a person on one surveillance camera would allow the network to track that person as they walked in front of other cameras, automatically and in real time.

For that, we need fast image processing of complex models, plus software that will run across the camera network and can pick up the image. The goal would be to find a way to do this on a single network without sending data to the cloud. It would require an algorithm to recognize the person and another to track that person through physical space. It might also require a software overlay on the cameras or new communications protocols.

My mind went immediately to the object tracking challenge. How much analysis and decision making would you do on the camera or, in general, an end device connected to the internet? If cost is a constraint, as with any consumer device, the answer has to be “not much” – certainly not enough to support complex image processing at 60 fps. So what software overlay will help with this, keeping device cost in mind?

Before we answer that, let’s look at another implied constraint: “The goal would be to find a way to do this on a single network without sending data to the cloud.” Stacey (the author) does not indicate why a single network, or why not to send data to the cloud, but we can think of a few common reasons, among them data privacy, limited end-to-end bandwidth, and complexity. Let’s take the constraint of needing to keep the image data within the domain of a single organization – one with the resources to have a network, cameras, and enough compute power to perform high-end image processing, and the motivation to perform image analysis securely.

Now for the distributed systems architecture, my favorite part. Our options include:

  1. Attaching compute resources to the cameras directly, to perform the image analysis and share the results with adjacent cameras to track motion of objects across cameras, perhaps in a peer-to-peer fashion, or
  2. Embedding (privately owned) compute resources within the network, publishing image data to those compute nodes, and performing the analysis and tracking there.
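Whichever option we pick, some piece of software must hand a track from one camera to its neighbors when an object leaves the field of view. As a toy illustration only – every name here is invented, not part of any real camera system – the handoff metadata might look like this in Python:

```python
from dataclasses import dataclass, field
import time

@dataclass
class TrackHandoff:
    """Metadata passed between cameras (or to an edge node) when a
    tracked object is about to leave one camera's field of view."""
    object_id: str            # stable ID assigned when the object was first seen
    source_camera: str        # camera that currently owns the track
    candidate_cameras: list   # neighbors whose fields of view the object may enter
    feature_vector: list      # appearance embedding used to re-identify the object
    timestamp: float = field(default_factory=time.time)

def neighbors_for(camera: str, topology: dict) -> list:
    """Look up which cameras are physically adjacent to `camera`."""
    return topology.get(camera, [])

# Example: a widget tracked on cam-3 is heading toward the edge of the frame.
topology = {"cam-3": ["cam-2", "cam-4"]}
handoff = TrackHandoff(
    object_id="widget-17",
    source_camera="cam-3",
    candidate_cameras=neighbors_for("cam-3", topology),
    feature_vector=[0.12, 0.87, 0.44],  # stand-in for a real embedding
)
print(handoff.candidate_cameras)  # ['cam-2', 'cam-4']
```

In option 1 the cameras exchange this record peer-to-peer; in option 2 they publish it to a shared compute node that makes the re-identification decision.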

These options mirror the definitions of the Device Edge and the Infrastructure Edge in the 2018 State of the Edge report [2].

Edge computing differs from cloud computing in a couple of key ways relevant to this discussion. As the name implies, the “Edge” is out there, near the users and endpoints that need to access content or compute resources. There are many more distinct Edge locations than Cloud data centers, each having many fewer compute and storage resources. The Edge can improve application performance due to the lower latency of being nearby, can potentially ingest and process more aggregate data, and in this case improves the efficiency of computation, since only local image data need be considered for object tracking: there is no need to look at data from distant cameras.

Looking at the two types more closely, the Device Edge approach provides compute and storage adjacent, and often directly attached, to each device. This presents two significant challenges. The heavy lifting required on each camera for rapid video frame analysis adds a lot of cost to the system. Add to that the challenge and complexity of managing a peer mesh and making a distributed decision about object motion between neighbors, which adds development and system-management cost. Together, it seems that in this case the Device Edge may not be the right architecture for the task.

The Infrastructure Edge approach is a better fit. Each camera needs to find nearby compute resources, deliver the image data there, and perform object identification and tracking. If there are only a handful of cameras in a single location we only need one compute cluster. It gets interesting when cameras are distributed across many locations – this calls for an Edge Computing platform to support Edge location and request direction. And when there are hundreds or thousands of candidate locations across a provider network the system also must decide which compute apps to place in which locations: there is not enough compute and storage to put every application in every Edge.
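One small piece of such a platform can be sketched simply: given latency measurements from a camera to candidate edge locations, direct the request to the nearest location that still has capacity. This is a minimal sketch under assumed inputs (the function name, thresholds, and data shapes are all illustrative, not Rafay’s actual implementation):

```python
from typing import Optional

def pick_edge_location(latencies_ms: dict, free_slots: dict,
                       min_free_slots: int = 1) -> Optional[str]:
    """Choose the lowest-latency edge location that still has free capacity.

    latencies_ms: measured round-trip time from the camera to each location
    free_slots:   free compute slots remaining at each location
    """
    candidates = [
        loc for loc in latencies_ms
        if free_slots.get(loc, 0) >= min_free_slots
    ]
    if not candidates:
        return None  # fall back to a regional site or the cloud
    return min(candidates, key=lambda loc: latencies_ms[loc])

# Example: three candidate locations; the closest one is already full.
latencies = {"edge-a": 3.1, "edge-b": 7.5, "edge-c": 18.0}
slots = {"edge-a": 0, "edge-b": 4, "edge-c": 10}
print(pick_edge_location(latencies, slots))  # edge-b
```

The placement problem mentioned above is the dual of this: deciding which applications to deploy to which locations so that a nearby, non-full location usually exists when a camera asks.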

These are the challenges that motivate us at Rafay Systems, where we have developed what we’re calling a “CDN for Microservices” – doing for applications and dynamic origin servers what the Content Delivery Network has done for content. If you’re interested in more about our approach feel free to email us ([email protected]) or check out prior posts in the Rafay Blog. We’d love your feedback.

[1] Cognitive Cameras, Stacey Higginbotham, IEEE Spectrum “Internet of Everything”, December, 2018.

[2] State of the Edge 2018: A Market and Ecosystem Report for Edge Computing, and related “Open Glossary of Edge Computing” and “Edge Computing Landscape”, are the result of a collaboration between Arm, Packet, Ericsson UDN (EdgeGravity), Vapor IO and Rafay Systems.

