The Future Of Edge Computing #1

Shon Paz
8 min read · Oct 22, 2022
Photo by Alan Tang on Unsplash

Introduction

As technology keeps growing exponentially in both the business and innovation fields, many emerging use cases challenge the way we think about architecture today.

The cloud is here to stay, but some use cases won’t entirely suit the architecture the cloud offers. Being able to process data in a distant place without worrying about saturating local resources is a big advantage for some, but a big pain for others.

A Bit Of History

During my last two years at Red Hat, I’ve been strongly exposed to an exciting world called Edge Computing.

To be honest, Edge Computing is not a new term, and we’re quite used to it as consumers (even if we’re not explicitly aware of it) — many companies place a “smaller piece” of their software and infrastructure stacks closer to the end consumer, in order to reduce the round-trip time (also referred to as latency) of their customers’ requests.

With Edge-driven architectures, we as consumers don’t have to go through the central cloud (which can be quite far away physically) and wait a long time for a response. One of the oldest implementations of such an architecture is the “CDN” (Content Delivery Network), and many cloud providers and private companies still use it widely today.

In this article, I’d like to go through what, in my opinion, will be the Future Of Edge Computing. We’ll cover the different Edge use cases offered today or planned for the future, and understand the architectural building blocks that form the base of those solutions.

For this part of the article, we’ll focus mainly on the building blocks of the modern Edge world.

Bear with me.

Infrastructure As An Essential Fundamental

“By 2025, 75% of data will be processed at the edge, outside of traditional, centralized data centers and the cloud.” (Gartner)

Keep that thought.

Before we dive into more technical stuff, it’s important to realize that infrastructure and architecture design are probably the most significant players in this game. Without lightning-fast networks, sophisticated software stacks and smart algorithms, we’ll probably hit physical barriers that we won’t be able to get past.

In today’s world, we expect everything to be instant. We have everything on our phones, tablets and smart TVs, and we expect applications to work lightning-fast — or we’ll leave and use other apps/providers.

This makes infrastructure and software affect the business side directly.

Think about it — we no longer physically go to the bank, book a flight or schedule a dentist appointment in person; everything is done remotely via applications.

As we’ve gotten used to that instant experience, latency and user experience affect which applications we choose to use. There’s a direct connection between the time we spend in an application and the profits it brings to its provider.

Now that we have the theory in place, let’s dive into some more technical details.

1. Ultra-Fast Networking Stack
Photo by Alina Grubnyak on Unsplash

One of the most significant enablers for Edge-based service offering is probably 5G.

We’re all familiar with older generations of data transmission (1G/2G/3G/4G), where each one brought new capabilities to the internet and mobile world in terms of latency and throughput. As each generation became a standard, we were able to move more data, faster.

From an architectural perspective, those standards are built upon RANs (Radio Access Networks), which rely on our Edge devices communicating with nearby radio towers using radio waves at a specific frequency. Each standard has its own coverage, but the closer we get to such a tower, the better our experience will be.

https://fiberguide.net/photonic-band-gap-fibers-could-help-mitigate-the-latency-challenge-in-5g-transport-networks/

Due to the way 5G is built, in order to get the ultra-low latency we’re looking for, we’ll need many more of those radio towers, placed closer to each other. This forces Telecommunications (or Telco) companies to rethink their architecture.
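To see why tower proximity matters so much, here’s a back-of-the-envelope sketch of propagation delay alone (the distances are illustrative; real latency adds processing, queuing and radio-access overhead on top):

```python
# Rough propagation-delay comparison: a distant cloud region vs. a nearby
# edge site. Light travels through fiber at roughly 2/3 of c (~200,000 km/s).
SPEED_IN_FIBER_KM_S = 200_000

def round_trip_ms(distance_km: float) -> float:
    """Propagation-only round-trip time in milliseconds
    (ignores processing, queuing and radio-access overhead)."""
    return (2 * distance_km / SPEED_IN_FIBER_KM_S) * 1000

print(round_trip_ms(2000))  # distant cloud region: 20 ms spent just in transit
print(round_trip_ms(10))    # nearby edge site: 0.1 ms
```

Physics alone puts a floor under cloud round trips — no software optimization can beat the speed of light, which is exactly why the compute has to move closer to the user.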

Telcos’ role in the Edge game will be to provide their networking services (CDN, Firewall, IP Allocation, Routing) in a more scalable manner, so that all other applications (such as IoT, Streaming, Gaming) can rely on an ultra-fast, resilient networking layer.

To do so, Telcos have been using VNFs (Virtual Network Functions) and are now shifting towards a cloud-native approach with CNFs (Container Network Functions) — in order to deploy their services in an agile way, at a bigger scale (for example, across the tremendous number of 5G network devices globally) as part of their SD-WAN.

You can think of VNFs/CNFs as tasks, where each one offers a capability (routing, for example).

As using physical devices for those capabilities isn’t scalable, the solution was to turn those tasks into virtual functions, initially based on Virtual Machines and now containerized.
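As a toy illustration of the idea (not a real VNF/CNF implementation — the packet fields and function names are made up), you can model each network function as a small task that accepts a packet and either forwards or drops it, chained together like a service chain:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Packet:
    src: str
    dst: str
    port: int

# A "network function" is just a task: it takes a packet and either
# forwards it (possibly modified) or drops it (returns None).
NetworkFunction = Callable[[Packet], Optional[Packet]]

def firewall(blocked_ports: set) -> NetworkFunction:
    def fn(pkt: Packet) -> Optional[Packet]:
        return None if pkt.port in blocked_ports else pkt
    return fn

def router(routes: dict) -> NetworkFunction:
    def fn(pkt: Packet) -> Optional[Packet]:
        pkt.dst = routes.get(pkt.dst, pkt.dst)  # rewrite the next hop
        return pkt
    return fn

def service_chain(functions: List[NetworkFunction],
                  pkt: Packet) -> Optional[Packet]:
    for fn in functions:
        pkt = fn(pkt)
        if pkt is None:  # dropped somewhere along the chain
            return None
    return pkt

chain = [firewall({23}), router({"10.0.0.5": "edge-gw-1"})]
print(service_chain(chain, Packet("1.2.3.4", "10.0.0.5", 443)))  # forwarded
print(service_chain(chain, Packet("1.2.3.4", "10.0.0.5", 23)))   # dropped
```

Because each capability is just a self-contained task, it can be packaged, scaled and redeployed independently — which is exactly what containerizing these functions buys Telcos.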

https://www.whiteboxsolution.com/blog/6-examples-of-nfv-use-cases/

As VNFs/CNFs are based on Virtual Machines/Containers, Telcos will be able to use cloud infrastructure for their networking services. Combined with 5G’s capabilities, a new and richer offering will emerge as a platform of networking services.

2. The Evolution Of Data

Photo by Claudio Schwarz on Unsplash

The second significant enabler is undoubtedly the Data field.

As mentioned earlier, we need data to move ultra-fast, based on the new networking standards being developed and implemented these days. We rely on the underlying networking stack to let us process and ship data quickly, anywhere and everywhere.

But, as we’ve mentioned before, there are use cases where we can’t wait for data to be sent to the cloud and back, no matter how long it takes. We need to process things locally to get an immediate answer — for example, if a patient in a remote clinic starts to feel unwell, we’d want in-house algorithms to detect anomalies, and we might not have even a few seconds to spare for shipping the data elsewhere, analyzing it and coming back with an answer.
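A minimal sketch of what such local, in-house detection could look like (the vitals, values and threshold here are made up for illustration) is a simple z-score check that flags readings far outside the recent history, computed entirely on the edge device:

```python
import statistics

def is_anomalous(history, new_value, threshold=3.0):
    """Flag a reading that deviates more than `threshold` standard
    deviations from the recent history (a simple z-score check)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return new_value != mean
    return abs(new_value - mean) / stdev > threshold

# Illustrative heart-rate readings from the last few minutes.
heart_rates = [72, 75, 71, 74, 73, 76, 72, 74]
print(is_anomalous(heart_rates, 74))   # within normal range
print(is_anomalous(heart_rates, 140))  # triggers a local alert immediately
```

The point isn’t the algorithm (real systems use far more sophisticated models) — it’s that the decision happens on-site, with zero round trips to the cloud.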

The data world is changing — organizations are trying to shift away from data silos to distributed, scalable data fabrics, and to privatize components on a per-use-case basis.

This approach (Data Mesh, or Data-As-A-Product) resembles Domain-Driven Design (DDD): it decouples the organization’s data estate into smaller Data Products, each with a specific responsibility (for example, a search engine and a recommendation system are different Data Products), and each with its own architecture and chosen tools.

This approach, based mainly on containers for ease of development and deployment, will help organizations deploy their data pipelines and AI algorithms as if they were just another application — a microservice — and, of course, ease the deployment of their services to the edge (for example, a music streaming application that offers a P2P cache layer for user preferences).

https://martinfowler.com/articles/data-mesh-principles.html

Using the flexibility and agility of containers, data workloads can be spawned and automated quite easily whether we talk about data pipelines (ETLs/ELTs) for data engineers or training/inference for Data Scientists.
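To make that concrete, here’s a minimal ETL sketch where each stage is an independent step that could run as its own container in a pipeline engine (the stage names, data and sink are all illustrative):

```python
# A minimal ETL sketch: extract -> transform -> load, each stage a
# self-contained step that a pipeline engine could run as a container.

def extract():
    # In practice this would read from a queue, object store or database.
    return [{"user": "a", "ms_listened": 95000},
            {"user": "b", "ms_listened": None},
            {"user": "c", "ms_listened": 240000}]

def transform(rows):
    # Drop incomplete rows and convert milliseconds to minutes.
    return [{"user": r["user"], "minutes": r["ms_listened"] / 60000}
            for r in rows if r["ms_listened"] is not None]

def load(rows, sink):
    # Stand-in for writing to a warehouse or feature store.
    sink.extend(rows)

warehouse = []
load(transform(extract()), warehouse)
print(warehouse)
```

Because each stage has a clean input/output contract, the same shape works whether the stages run as one script on a laptop or as separate containers scheduled across edge locations.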

This lets data practitioners focus less on infrastructure and more on improving data quality and algorithm accuracy.

3. Automation, Orchestration and Agility

Photo by Lalit on Unsplash

As we saw in the previous sections, both the Edge and Data worlds are becoming software-defined and paving their way towards containerization, to ease development, testing and deployment.

The movement towards a unified standard — where all workloads are treated the same and accessed via the same API — amplifies the need for end-to-end automation solutions such as Ansible, Terraform, Helm and Kubernetes Operators.

Let’s take the Telco use case as an example — imagine those CNFs have to be available globally, in different regions and edge locations, at massive scale. The focus shifts towards idempotency, configuration management and infrastructure-as-code for mass production of services.
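Idempotency is the key property here: applying the same desired state twice must not change the result. Here’s a tiny sketch of the reconcile pattern that tools like Ansible or a Kubernetes controller implement (the field names are hypothetical):

```python
# Reconcile sketch: compare desired vs. actual state and act only on
# the difference, so that re-running the same automation is a no-op.

def reconcile(actual: dict, desired: dict) -> list:
    """Bring `actual` in line with `desired`; return the keys changed."""
    changes = []
    for key, value in desired.items():
        if actual.get(key) != value:
            actual[key] = value
            changes.append(key)
    return changes

site = {"cnf_version": "1.1", "replicas": 2}
desired = {"cnf_version": "1.2", "replicas": 2}

print(reconcile(site, desired))  # one change applied: ['cnf_version']
print(reconcile(site, desired))  # second run is a no-op: []
```

At thousands of edge sites, this property is what makes "run the automation again" a safe recovery strategy rather than a risky one.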

In fact, an end-to-end automation solution is crucial for most Edge use cases, since the infrastructure sits outside the central cloud, in locations that can’t be reached easily.

Organizations should be able to deploy new versions of their workloads, redeploy in case of failures and patch security vulnerabilities with ease, in order to reduce TTM and focus more on improving their customers’ experience.

4. Supply Chain Security

Photo by Scott Webb on Unsplash

Deploying our applications to the edge — to places that, for the most part, aren’t easily reachable physically — makes supply chain security even more critical. When workloads run on-premise or in a public cloud, our intuitive assumption is that physical threats are negligible, since our servers sit in secured facilities on (or under) the ground.

With Edge, this assumption becomes blurry. Our infrastructure sits in a distant place where, in the worst case, anyone but us could physically reach it. This forces us to secure the software, infrastructure and hardware stacks completely, shipping our solutions as hardened as possible.

Let’s take autonomous cars as an example.

An autonomous car is an edge device that runs one or more AI algorithms in order to drive independently. In order to get driver/firmware upgrades and communicate with the central cloud, this car has a publicly exposed IP address.

This creates a risk — if someone manages to get their hands on the car’s credentials, they might gain access to sensitive private data and abuse it, putting lives at risk.

As user data is our most important asset, we should take the steps in order to secure the following aspects:

  • Development Process — using a shift-left approach, static code analysis, container image scanning, runtime security
  • Internal/External Connectivity — routing, TCP ports, proxies, WAFs, certificates, identity management, MFA
  • Runtime — disk encryption, RBAC, permissions, online scans
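As one concrete supply-chain measure (a simplified sketch — real pipelines use image signing and signed manifests, not a hand-rolled check), an edge device can verify an artifact’s digest against a value pinned at build time before running it:

```python
import hashlib

# Supply-chain sketch: refuse to run an artifact whose digest doesn't
# match the one pinned when it was built. Payload is illustrative.

def sha256_digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_artifact(data: bytes, pinned_digest: str) -> bool:
    """True only if the artifact is byte-for-byte what was built."""
    return sha256_digest(data) == pinned_digest

artifact = b"app-image-layer-bytes"
pinned = sha256_digest(artifact)  # recorded in a trusted manifest at build time

print(verify_artifact(artifact, pinned))                 # intact artifact
print(verify_artifact(artifact + b"tampered", pinned))   # rejected
```

Any single flipped bit changes the digest, so tampering anywhere between the build system and the edge device is caught before execution.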
https://link.springer.com/chapter/10.1007/978-3-030-41110-7_6

Conclusion

In this article we’ve discussed the major building blocks of a modern Edge-based service. It’s important to mention that in addition to all of the above, we’ll need a central location to control automation, governance, visibility, observability and more. This way we’ll be able to manage our entire edge infrastructure, end-to-end using a single pane of glass.

In the next part, we’ll dive deeper into how today’s Edge use cases are implemented, and analyze what’s needed from an architectural perspective for them to work — and keep working — as expected.

Thanks for your time, see ya next time :)
