Online / 5 & 6 February 2022


Challenges and Opportunities in Performance Benchmarking of Service Mesh for the Edge

As Edge deployments move closer to end devices, low-latency communication among Edge-aware applications is one of the key tenets of Edge service offerings. To simplify application development, service mesh architectures have emerged as the evolutionary architectural paradigm for handling the bulk of application communication logic, such as health checks, circuit breaking, secure communication, and resiliency, thereby decoupling application logic from the communication infrastructure. For highly performant deployments at the Edge, the latency-to-throughput ratio needs to be measurable. Providing benchmark data for various Edge deployments in both bare-metal and virtual-machine-based scenarios, this paper digs into the architectural complexities of deploying a service mesh in an Edge environment and the performance impact on north-south and east-west communication into and within a service mesh, leveraging the popular open-source service mesh Istio with its Envoy proxy on a simple on-premises Kubernetes cluster. The performance results shared indicate the impact of the Kubernetes network stack combined with the Envoy data plane. Microarchitecture analyses reveal bottlenecks in Linux-based stacks from a CPU microarchitecture perspective and quantify the high cost of Linux's iptables rule matching at scale. We conclude with the challenges in multiple areas of profiling and benchmarking requirements, and a call to action for deploying a service mesh in latency-sensitive Edge environments.
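The cost of iptables rule matching cited above comes from its sequential evaluation model: kube-proxy programs one NAT rule per service, and each packet walks the chain until a rule matches, whereas hash-based alternatives (such as IPVS) look up the destination in constant time. The following sketch is illustrative only and not part of the paper's benchmarks; the function names and rule counts are hypothetical, chosen just to show the asymptotic difference:

```python
# Illustrative sketch (not from the talk): lookup cost of sequential iptables
# rule matching versus a hash-based service lookup as cluster size grows.

def iptables_lookups(target_rule: int) -> int:
    """Rules evaluated by a linear chain walk before the match (1-indexed)."""
    return target_rule  # every preceding rule is checked and rejected first

def hash_lookups(target_rule: int) -> int:
    """Rules evaluated by a hash-table lookup, independent of table size."""
    return 1

for num_rules in (100, 1_000, 10_000):
    # Worst case: the matching rule is last in the chain.
    print(f"{num_rules:>6} rules: linear worst case = "
          f"{iptables_lookups(num_rules)}, hash = {hash_lookups(num_rules)}")
```

This per-packet linear scan is why the paper's microarchitecture analysis flags iptables as a bottleneck at scale: the chain length, and hence the matching cost, grows with the number of services in the cluster.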

Edge computing and service mesh constructs have risen to prominence within cloud-native environments at roughly the same time over the last few years. The requirement for Edge compute to unify both Information & Communication Technology (ICT) and Operational Technology (OT) has brought cloud-native deployments and microservice-based service offerings to Edge infrastructure. While Kubernetes has been the most popular model for deploying cloud-native infrastructure to offer software services, the service mesh is the emergent application deployment paradigm that relieves applications from implementing most of the software-defined networking aspects of microservice interactions. This paper introduces the features of a service mesh that are architecturally suitable for Edge compute service offerings and application development principles. To understand its applicability, the mesh's architectural principles need to be understood in order to judge the suitability of its various benefits for customized Edge deployments. This talk introduces and correlates various Edge requirements with the service mesh's architectural guidelines, then digs further into deployment considerations of a service mesh across Edge deployment types to surface the practical communication challenges between the two.

This talk:
- Provides benchmark tests and results that quantify the impact of a service mesh on simple Kubernetes-based deployments, using Istio as the service mesh and Envoy as its sidecar proxy, which can be leveraged for Edge environments.
- Provides a detailed analysis of the software used to identify bottlenecks, using Top-Down Microarchitecture Analysis and CPU hot-spot analysis.
- Summarizes the gaps identified during detailed testing of these open-source components.
- Showcases the impact of utilizing a service mesh for Edge computing.
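For context on the deployment model being benchmarked: in Istio, the Envoy sidecar is typically injected automatically into every pod of a labeled namespace, with pod traffic transparently redirected through the proxy via iptables rules. A minimal sketch of enabling this, assuming a hypothetical namespace name:

```yaml
# Illustrative config fragment: the istio-injection label is the standard
# Istio mechanism that triggers automatic Envoy sidecar injection for all
# pods created in this namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: edge-workloads        # hypothetical namespace for Edge services
  labels:
    istio-injection: enabled  # triggers sidecar injection on pod creation
```

This injection path is relevant to the results above: every injected pod gains an extra proxy hop and additional iptables redirection rules, which is precisely where the north-south and east-west overheads measured in the talk originate.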


Sunku Ranganath
Mrittika Ganguli