Predictable Network Traffic in Kubernetes
- Track: Network devroom
- Room: D.network
- Day: Sunday
- Start: 15:30
- End: 16:00
- Video with Q&A: D.network
- Video only: D.network
The software applications of the Cloud Native era depend heavily on the network: microservices are each bound to a single concern and use the network to communicate with one another. That dependency only grows as more and more microservices are deployed. Yet there is no way to predictably leverage the network for the specific demands of your application. What if we could tag certain applications as needing priority from the network? This would enhance the networking capabilities offered by Kubernetes and complement the deployment of applications that require predictable network behavior.
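As a rough sketch of what such tagging could look like (an illustrative assumption, not an API from this talk), a pod could carry an annotation that a network-aware agent on the node interprets as a priority hint. The annotation key `net.example.com/priority`, the pod name, and the image below are all hypothetical:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical priority tag: a node-local agent could read this
	// annotation and configure the NIC/queueing layer for the pod's traffic.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "latency-sensitive-app",
			Annotations: map[string]string{
				"net.example.com/priority": "high", // assumed key, not a standard Kubernetes annotation
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "app", Image: "example/app:latest"},
			},
		},
	}

	// Print the pod manifest; in practice it would be submitted to the API server.
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```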
More industries are migrating to Kubernetes and deepening their knowledge of the Cloud Native ecosystem. As this trend accelerates, applications with specialized requirements are emerging, such as high-priority apps. High-priority apps require predictable, high performance, which can be difficult to achieve in clusters running hundreds or thousands of containers. They expect platform capabilities such as dedicated resources, less context switching, and efficient packet processing. In the context of the network, high-priority apps need to execute predictably, leaving no room for extra jitter. Pinning CPU cores may help with determinism, but it introduces a platform-specific mechanism that fails to embrace the abstraction of Cloud Native deployments. This presentation will cover application-specific queuing and steering technology that dedicates hardware NIC queues to application-specific threads of execution.
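To make the queue-steering idea concrete, here is a minimal node-side sketch (not the mechanism presented in the talk) that uses ethtool's receive flow steering (ntuple) rules to direct one application's TCP traffic to a dedicated hardware RX queue. The interface name, port, and queue index are assumptions:

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Assumed values: the node interface carrying the pod's traffic, the
	// application's service port, and the RX queue reserved for that app.
	iface := "eth0"
	appPort := "8080"
	queue := "4"

	// ethtool receive flow steering rule: TCP/IPv4 traffic destined for the
	// application's port is delivered to the dedicated hardware queue.
	cmd := exec.Command("ethtool", "-N", iface,
		"flow-type", "tcp4", "dst-port", appPort, "action", queue)

	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("failed to add steering rule: %v: %s", err, out)
	}
	fmt.Printf("steering TCP port %s on %s to RX queue %s\n", appPort, iface, queue)
}
```

Run with root privileges on a NIC that supports ntuple filters; pairing such a queue with the application's pinned thread is the kind of behavior the talk's queuing and steering technology aims to make automatic.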
Speakers
Dave Cremins
Abdul Halim