Online / 5 & 6 February 2022


Backgrounds are blurry, but the future is clear

More AI and ML in WebRTC applications

Machine learning models have made it to the browser: virtual backgrounds and background blurs are everywhere! Several recent developments, including the TensorFlow WASM backend, smaller ML models, and pre-trained model repositories, have made these widely used virtual backgrounds and background blurs possible.

This talk will explore how a simple background blur works, how developers can code their own blur for a WebRTC call, and, most interestingly, what other ML/AI applications can be built using the same framework.

Machine learning model inference now runs in the browser and has become very performant. Libraries and frameworks like MediaPipe and BodyPix make it relatively easy to integrate ML-based experiences into a WebRTC call. We're still in the early days of discovering their possibilities in web applications.

In this talk, we will walk through how Daily integrated MediaPipe into its WebRTC library. We will explore:

  1. How segmentation models, like the ones used to blur backgrounds, work.
  2. How one can integrate an ML segmentation library (like MediaPipe) into a WebRTC video call.
  3. What other sorts of custom video applications and experiences one can build using a tech stack like this.
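To make the first two points concrete, here is a minimal sketch (not Daily's actual implementation) of the core compositing step behind a background blur: a segmentation model emits a per-pixel mask scoring how likely each pixel is to belong to the person, and the processor blends the original frame with a pre-blurred copy using that mask. The function name and data layout below are assumptions for illustration; real pipelines run this per video frame on a canvas or via insertable streams.

```javascript
// Composite an original frame with its blurred copy using a segmentation
// mask. `original` and `blurred` are flat RGBA arrays (4 bytes per pixel);
// `mask` holds one value per pixel in [0, 1], where 1 means "person".
// Hypothetical helper for illustration, not a library API.
function compositeFrame(original, blurred, mask) {
  const out = new Uint8ClampedArray(original.length);
  for (let i = 0; i < mask.length; i++) {
    const m = mask[i]; // model's confidence that this pixel is the person
    for (let c = 0; c < 4; c++) {
      const j = i * 4 + c;
      // Keep the person sharp, fade the background toward the blurred copy.
      out[j] = Math.round(m * original[j] + (1 - m) * blurred[j]);
    }
  }
  return out;
}

// Tiny example: two pixels, the first classified as person, the second
// as background.
const orig = new Uint8ClampedArray([200, 0, 0, 255, 200, 0, 0, 255]);
const blur = new Uint8ClampedArray([50, 50, 50, 255, 50, 50, 50, 255]);
const mask = [1, 0];
const result = compositeFrame(orig, blur, mask);
// result keeps the first pixel from `orig` and takes the second from `blur`
```

In a browser, the same blend is typically done on a canvas and the processed frames are fed back into the call with `canvas.captureStream()`, replacing the camera track.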

Hopefully, this leaves the audience with ideas for writing custom video processors that might enable the next generation of video experiences on WebRTC.


Ravindhran Sankar