Backgrounds are blurry, but the future is clear
More AI and ML in WebRTC applications
- Track: Real Time Communications
- Room: M.rtc
- Day: Saturday
- Start: 15:00
- End: 15:45
Machine learning models have made it to the browser. Virtual backgrounds and background blurs are everywhere! Recent developments, including the TensorFlow.js WASM backend, smaller ML models, and repositories of pre-trained models, have made these widely used virtual backgrounds and background blurs possible.
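To get a sense of how little setup that backend needs, here is a minimal sketch, assuming the `@tensorflow/tfjs` and `@tensorflow/tfjs-backend-wasm` npm packages:

```typescript
// Minimal sketch: switch TensorFlow.js from its default backend to WASM.
import * as tf from "@tensorflow/tfjs";
import "@tensorflow/tfjs-backend-wasm"; // registers the "wasm" backend

async function useWasmBackend(): Promise<void> {
  await tf.setBackend("wasm"); // select the WebAssembly backend
  await tf.ready();            // wait until it has initialized
  console.log(`Running on backend: ${tf.getBackend()}`);
}

useWasmBackend();
```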
This talk will explore how a simple background blur works, how developers can code their own blur for a WebRTC call, and, most interestingly, what other ML/AI applications can be built using the same framework.
Machine learning models and inference now run in the browser and have become very performant. Libraries and frameworks like MediaPipe and BodyPix make it relatively easy to integrate ML-based experiences into a WebRTC call. We're still in the early days of discovering their possibilities in web applications.
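As an illustration of how little code such a library demands, here is a minimal sketch of blurring a single video frame with BodyPix, assuming the `@tensorflow/tfjs` and `@tensorflow-models/body-pix` packages:

```typescript
// Minimal sketch: blur the background of one video frame with BodyPix.
import "@tensorflow/tfjs"; // BodyPix runs on top of TensorFlow.js
import * as bodyPix from "@tensorflow-models/body-pix";

async function blurFrame(
  video: HTMLVideoElement,
  canvas: HTMLCanvasElement
): Promise<void> {
  const net = await bodyPix.load();                    // pre-trained model
  const segmentation = await net.segmentPerson(video); // per-pixel person mask
  // Draw the frame with the background blurred and the person kept sharp.
  bodyPix.drawBokehEffect(
    canvas,
    video,
    segmentation,
    9,    // backgroundBlurAmount
    3,    // edgeBlurAmount
    false // flipHorizontal
  );
}
```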
In this talk, we will walk through how Daily integrated MediaPipe into its WebRTC library. We will explore:
- How segmentation models, like the ones used to blur backgrounds, work.
- How one can integrate an ML segmentation library (like MediaPipe) into a WebRTC video call (see the sketch after this list).
- What other sorts of custom video applications and experiences one can build with a tech stack like this.
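To make the second point concrete, here is a minimal sketch of that integration, assuming the `@mediapipe/selfie_segmentation` package. The `pc` peer connection is a placeholder, and this is a generic compositing pipeline, not Daily's actual implementation:

```typescript
// Minimal sketch: blur the background with a MediaPipe segmentation mask
// and feed the composited frames into a WebRTC call.
import { SelfieSegmentation } from "@mediapipe/selfie_segmentation";

declare const pc: RTCPeerConnection; // an existing peer connection (placeholder)

const video = document.querySelector("video")!;  // local camera preview
const canvas = document.createElement("canvas"); // processed output
const ctx = canvas.getContext("2d")!;

const segmenter = new SelfieSegmentation({
  locateFile: (file) =>
    `https://cdn.jsdelivr.net/npm/@mediapipe/selfie_segmentation/${file}`,
});
segmenter.setOptions({ modelSelection: 1 }); // 1 = general/landscape model

segmenter.onResults((results) => {
  const frame = results.image as CanvasImageSource;
  const mask = results.segmentationMask as CanvasImageSource;
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  ctx.save();
  // Keep only the person: paint the mask, then the frame where it is opaque.
  ctx.drawImage(mask, 0, 0, canvas.width, canvas.height);
  ctx.globalCompositeOperation = "source-in";
  ctx.drawImage(frame, 0, 0, canvas.width, canvas.height);
  // Fill everything behind the person with a blurred copy of the frame.
  ctx.globalCompositeOperation = "destination-over";
  ctx.filter = "blur(8px)";
  ctx.drawImage(frame, 0, 0, canvas.width, canvas.height);
  ctx.restore();
});

// Run the model once per displayed frame.
async function processFrames(): Promise<void> {
  await segmenter.send({ image: video });
  requestAnimationFrame(processFrames);
}
processFrames();

// Replace the raw camera track on the peer connection with the processed one.
const processedTrack = canvas.captureStream(30).getVideoTracks()[0];
const sender = pc.getSenders().find((s) => s.track?.kind === "video");
sender?.replaceTrack(processedTrack);
```

Swapping the blur step for a draw of any background image turns the same pipeline into a virtual background, which hints at the third point: the pipeline is a general-purpose video processor.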
Hopefully, this leaves the audience with ideas for writing custom video processors that might enable the next generation of video experiences on WebRTC.
Speakers
Ravindhran Sankar