BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Pentabarf//Schedule 0.3//EN CALSCALE:GREGORIAN METHOD:PUBLISH X-WR-CALDESC;VALUE=TEXT:Virtualization and IaaS devroom X-WR-CALNAME;VALUE=TEXT:Virtualization and IaaS devroom X-WR-TIMEZONE;VALUE=TEXT:Europe/Brussels BEGIN:VEVENT METHOD:PUBLISH UID:12571@FOSDEM22@fosdem.org TZID:Europe/Brussels DTSTART:20220205T100000 DTEND:20220205T103000 SUMMARY:What's coming in VIRTIO 1.2 DESCRIPTION:
The VIRTIO standard defines I/O devices that are commonly used in virtual machines today. The last version of the standard was released in 2019 and much has changed since then. This presentation covers new devices and features in the upcoming VIRTIO 1.2 standard.
There are 9 new device types: fs, rpmb, iommu, sound, mem, i2c, scmi, gpio, and pmem. We will look at the functionality offered by these devices and their status in Linux.
CLASS:PUBLIC STATUS:CONFIRMED CATEGORIES:Virtualization and IaaS URL:https://fosdem.org/2022/schedule/event/vai_virtualio/ LOCATION:D.virtualization ATTENDEE;ROLE=REQ-PARTICIPANT;CUTYPE=INDIVIDUAL;CN="Stefan Hajnoczi":invalid:nomail END:VEVENT BEGIN:VEVENT METHOD:PUBLISH UID:12592@FOSDEM22@fosdem.org TZID:Europe/Brussels DTSTART:20220205T103000 DTEND:20220205T110000 SUMMARY:Cross-platform/cross-hypervisor virtio vsock use in go DESCRIPTION:CodeReady Containers runs an OpenShift cluster on a laptop or workstation using virtualization. It's written in Go and uses KVM, Hyper-V, or HyperKit, depending on the OS it's running on. External network access is done through gVisor's userland TCP/IP stack, which the virtual machine uses over virtio-vsock.
This talk will start with a short presentation of what CodeReady Containers is and explain why it needs a userland TCP/IP stack, but its main focus will be virtio-vsock: how to use it from Go, and the differences to expect on the different hypervisors.
CLASS:PUBLIC STATUS:CONFIRMED CATEGORIES:Virtualization and IaaS URL:https://fosdem.org/2022/schedule/event/vai_codeready/ LOCATION:D.virtualization ATTENDEE;ROLE=REQ-PARTICIPANT;CUTYPE=INDIVIDUAL;CN="Christophe Fergeau":invalid:nomail END:VEVENT BEGIN:VEVENT METHOD:PUBLISH UID:12432@FOSDEM22@fosdem.org TZID:Europe/Brussels DTSTART:20220205T110000 DTEND:20220205T113000 SUMMARY:Introducing OKD Virtualization DESCRIPTION:OKD Virtualization is the community project bringing traditional virtualization technology into OKD. Meet the OKD Virtualization community and learn about it!
CLASS:PUBLIC STATUS:CONFIRMED CATEGORIES:Virtualization and IaaS URL:https://fosdem.org/2022/schedule/event/vai_intro_okd/ LOCATION:D.virtualization ATTENDEE;ROLE=REQ-PARTICIPANT;CUTYPE=INDIVIDUAL;CN="Simone Tiraboschi":invalid:nomail ATTENDEE;ROLE=REQ-PARTICIPANT;CUTYPE=INDIVIDUAL;CN="Sandro Bonazzola":invalid:nomail END:VEVENT BEGIN:VEVENT METHOD:PUBLISH UID:12331@FOSDEM22@fosdem.org TZID:Europe/Brussels DTSTART:20220205T113000 DTEND:20220205T120000 SUMMARY:ToroV, a kernel in user-space, or sort of DESCRIPTION:This talk presents ToroV, a novel open-source technology that combines virtualization and containerization to enable the execution of users' applications in a safer and improved manner. In ToroV, applications run as virtual machines without the need for an OS, unikernel, or device model. ToroV combines a minimalist Virtual Machine Monitor (VMM) and a virtualized guest program communicating through POSIX APIs. When the guest application needs to open or write a file, it simply invokes the VMM using hypercalls. The VMM intercepts those hypercalls, processes the request, and returns to the guest. The sysadmin defines the ACL (Access Control List) of authorized hypercalls per virtualized guest application, which allows the user to control the host surface that is exposed to the guest. In this talk, we present the ToroV architecture together with several ongoing experiments. For example, the minimalist VMM allows us to boot up a VM in KVM in less than 6 ms. We also show how we debug guest applications simply by using GDB and the KVM API. During the talk, we discuss the main differences with unikernels and containers and how ToroV gets the best of both worlds. We also present the main differences with gVisor, a similar project from Google.
CLASS:PUBLIC STATUS:CONFIRMED CATEGORIES:Virtualization and IaaS URL:https://fosdem.org/2022/schedule/event/vai_torov/ LOCATION:D.virtualization ATTENDEE;ROLE=REQ-PARTICIPANT;CUTYPE=INDIVIDUAL;CN="Matias Vara":invalid:nomail END:VEVENT BEGIN:VEVENT METHOD:PUBLISH UID:12559@FOSDEM22@fosdem.org TZID:Europe/Brussels DTSTART:20220205T120000 DTEND:20220205T123000 SUMMARY:KubeVirt scale test by creating 400 VMIs on a single node DESCRIPTION:As nodes become more powerful (i.e. with more CPUs and RAM) and the number of VMs per node grows, the scalability of KubeVirt's control plane becomes a bottleneck, slowing down the VMI creation process. This talk will cover the motivations and concepts around general benchmarking of the KubeVirt control plane, as well as the journey to running a density test with hundreds of VMs per node.
CLASS:PUBLIC STATUS:CONFIRMED CATEGORIES:Virtualization and IaaS URL:https://fosdem.org/2022/schedule/event/vai_kubevirt_scale/ LOCATION:D.virtualization ATTENDEE;ROLE=REQ-PARTICIPANT;CUTYPE=INDIVIDUAL;CN="Marcelo Amaral":invalid:nomail END:VEVENT BEGIN:VEVENT METHOD:PUBLISH UID:12542@FOSDEM22@fosdem.org TZID:Europe/Brussels DTSTART:20220205T123000 DTEND:20220205T130000 SUMMARY:DevOps, Cloud Native, DPUs: beyond the buzzwords DESCRIPTION:Open Source virtualization is almost 20 years old. Obviously, things have evolved a lot in that time: the public cloud, new CPU architectures, new storage technologies, and more.
What about the real, on-the-ground usage? Sysadmins, Ops, and DevOps teams are not leveraging virtualization the same way today as they did before. But what really changed? In what direction is it evolving? Is on-premise open source virtualization still relevant today?
Through our own journey as engineers of an open source virtualization platform, we'll give you an inside look into what our users are requesting from us, and what we did to modernize our virtualization stack based on the Xen hypervisor.
CLASS:PUBLIC STATUS:CONFIRMED CATEGORIES:Virtualization and IaaS URL:https://fosdem.org/2022/schedule/event/vai_future_evolution/ LOCATION:D.virtualization ATTENDEE;ROLE=REQ-PARTICIPANT;CUTYPE=INDIVIDUAL;CN="Olivier Lambert":invalid:nomail END:VEVENT BEGIN:VEVENT METHOD:PUBLISH UID:12699@FOSDEM22@fosdem.org TZID:Europe/Brussels DTSTART:20220205T130000 DTEND:20220205T133000 SUMMARY:Isolating PCI/CXL Devices: It All Starts with System Launch DESCRIPTION:It has been well established that the integrity of critical systems must be rooted in the launch. Early works such as the Xoar architecture demonstrated the need for virtualized environments to begin with a lightweight, restricted bootstrap from which isolation of PCI management could be established. Since that time, knowledge of real IOMMU implementations and how to leverage them for system integrity has evolved. In this presentation, the new Hyperlaunch capability for starting hypervisors will be presented, with a short discussion of the Xen implementation. The talk will progress to a discussion of how Hyperlaunch is connected with TrenchBoot (Linux Secure Launch) and Mandatory Access Control communication fabrics, with a focus on how it enables dedicated PCI management constructs that can provide secure and trustworthy isolation for PCI devices, with the potential for CXL devices. The talk will close with an open discussion on how hypervisors might unify around a common approach to IOMMU management.
CLASS:PUBLIC STATUS:CONFIRMED CATEGORIES:Virtualization and IaaS URL:https://fosdem.org/2022/schedule/event/vai_isolating_pci/ LOCATION:D.virtualization ATTENDEE;ROLE=REQ-PARTICIPANT;CUTYPE=INDIVIDUAL;CN="Daniel Smith":invalid:nomail END:VEVENT BEGIN:VEVENT METHOD:PUBLISH UID:12604@FOSDEM22@fosdem.org TZID:Europe/Brussels DTSTART:20220205T133000 DTEND:20220205T140000 SUMMARY:Automatic CPU and NUMA pinning DESCRIPTION:At FOSDEM 2019 we presented the addition of high-performance virtual machines in oVirt. With this new VM type, parts of the VM configuration were changed to improve the performance of the workloads it runs. In particular, it was useful for CPU-intensive workloads, such as SAP HANA. However, better performance came at the expense of usability. Users were still expected to set various things manually, like CPU and NUMA pinning and hugepages. In this talk, I will guide you through our journey of simplifying and automating the settings of high-performance VMs in oVirt. We'll see the evolution of the changes, the challenges we faced, where we are today, and what's more to come in oVirt 4.5.
CLASS:PUBLIC STATUS:CONFIRMED CATEGORIES:Virtualization and IaaS URL:https://fosdem.org/2022/schedule/event/vai_automatic_cpu/ LOCATION:D.virtualization ATTENDEE;ROLE=REQ-PARTICIPANT;CUTYPE=INDIVIDUAL;CN="Liran Rotenberg":invalid:nomail END:VEVENT BEGIN:VEVENT METHOD:PUBLISH UID:12433@FOSDEM22@fosdem.org TZID:Europe/Brussels DTSTART:20220205T140000 DTEND:20220205T143000 SUMMARY:Network interface hotplug for Kubernetes DESCRIPTION:Design and implementation of dynamic network attachment for Kubernetes pods and KubeVirt VMs.
CLASS:PUBLIC STATUS:CONFIRMED CATEGORIES:Virtualization and IaaS URL:https://fosdem.org/2022/schedule/event/vai_network_interface/ LOCATION:D.virtualization ATTENDEE;ROLE=REQ-PARTICIPANT;CUTYPE=INDIVIDUAL;CN="Miguel Barroso":invalid:nomail END:VEVENT BEGIN:VEVENT METHOD:PUBLISH UID:12569@FOSDEM22@fosdem.org TZID:Europe/Brussels DTSTART:20220205T143000 DTEND:20220205T150000 SUMMARY:The story of adding TPM support to oVirt DESCRIPTION:oVirt is an open source virtualization solution based on KVM, QEMU, and libvirt. Trusted Platform Module (TPM) device support, which brings new security capabilities that modern operating systems utilize or even require, was recently added to oVirt.
In theory, adding TPM support should be as easy as adding a TPM device to the virtual machine's libvirt XML. But features built on top of a lower-level virtualization platform are not always as easy to implement as they first appear. This talk will present the challenges encountered when adding TPM support to oVirt.
CLASS:PUBLIC STATUS:CONFIRMED CATEGORIES:Virtualization and IaaS URL:https://fosdem.org/2022/schedule/event/vai_story_tpm/ LOCATION:D.virtualization ATTENDEE;ROLE=REQ-PARTICIPANT;CUTYPE=INDIVIDUAL;CN="Milan Zamazal":invalid:nomail END:VEVENT BEGIN:VEVENT METHOD:PUBLISH UID:12674@FOSDEM22@fosdem.org TZID:Europe/Brussels DTSTART:20220205T150000 DTEND:20220205T153000 SUMMARY:Deploying VMs and Containers across Infrastructure Providers DESCRIPTION:This talk presents OpenNebula's new distributed Edge Cloud Architecture, which is composed of Edge Clusters that can run any workload (both Virtual Machines and application containers), on any resource (bare-metal or virtualized), anywhere (on-prem or on a cloud/edge provider). An Edge Cluster, built on open source technologies that already exist in the Linux operating system, is a hyperconverged functional set of managed objects that includes storage, network, and host resources. An Edge Cluster provides all the resources needed to run virtualized or containerized applications. OpenNebula's management services, including scheduling, monitoring, and life-cycle management, run in the cloud Front-end and orchestrate the local or remote Edge Clusters from there. The Front-end also provides access to the administration tools, user interfaces, and API. Although the requirements may vary depending on the number and size of the clusters and the API load, the Front-end node only requires 8 GB of main memory and 4 cores. The Edge Cloud Architecture is able to provide a lightweight and easy-to-use storage platform for medium-sized clusters consisting of tens of nodes. OpenNebula's Edge Cloud Architecture is able to manage hundreds of these clusters, as they operate autonomously in terms of networking and storage, and to handle thousands of virtualized hosts and tens of thousands of virtualized applications.
In this presentation we will explain in detail the deployment model for Edge Clusters, the specialized storage solution they incorporate (OneStor), and the performance benefits of this multi-cloud architecture as confirmed by the latest benchmarks.
CLASS:PUBLIC STATUS:CONFIRMED CATEGORIES:Virtualization and IaaS URL:https://fosdem.org/2022/schedule/event/vai_deploying_vms/ LOCATION:D.virtualization ATTENDEE;ROLE=REQ-PARTICIPANT;CUTYPE=INDIVIDUAL;CN="Alejandro Huertas":invalid:nomail END:VEVENT BEGIN:VEVENT METHOD:PUBLISH UID:12586@FOSDEM22@fosdem.org TZID:Europe/Brussels DTSTART:20220205T153000 DTEND:20220205T161500 SUMMARY:Phyllome OS DESCRIPTION:Most Linux distributions are not designed to support desktop virtualization, and GPU vendors have failed to agree on a common way to let virtual machines access 3D capabilities (SR-IOV, vfio-pci, vfio-mdev, virtio-gpu...). The result is that it is still complicated to create fast and responsive virtual machines locally.
Phyllome OS is a Fedora Remix based on Fedora Server that attempts to make it easier to run virtual machines locally on computers that support hardware-assisted virtualization, mostly using paravirtualization (aka virtio devices). It currently relies on existing technologies (libvirt, KVM/QEMU, virt-manager, GNOME Shell, etc.), but will eventually implement its own virtual machine manager, package Cloud Hypervisor as an alternative to QEMU, and use filesystem-level encryption to protect virtual machine disks. The main idea behind this OS is to treat the host, Phyllome OS, as a read-only system, i.e. as a mere appliance to host virtual machines. The presentation will include a demonstration of Phyllome OS in its current state.
CLASS:PUBLIC STATUS:CONFIRMED CATEGORIES:Virtualization and IaaS URL:https://fosdem.org/2022/schedule/event/vai_phyllomeos/ LOCATION:D.virtualization ATTENDEE;ROLE=REQ-PARTICIPANT;CUTYPE=INDIVIDUAL;CN="Lukas Greve":invalid:nomail END:VEVENT BEGIN:VEVENT METHOD:PUBLISH UID:12691@FOSDEM22@fosdem.org TZID:Europe/Brussels DTSTART:20220205T161500 DTEND:20220205T171500 SUMMARY:Hardware-accelerated graphics in secure multi-tenant environments DESCRIPTION:Hardware-accelerated graphics is becoming an essential part of modern computing environments, yet it is currently difficult or impossible to use in secure environments such as Qubes OS. SR-IOV promises to solve this problem, but multiple obstacles have limited its adoption. This workshop is about these obstacles and what is needed to overcome them.
CLASS:PUBLIC STATUS:CONFIRMED CATEGORIES:Virtualization and IaaS URL:https://fosdem.org/2022/schedule/event/vai_hardware_accel/ LOCATION:D.virtualization ATTENDEE;ROLE=REQ-PARTICIPANT;CUTYPE=INDIVIDUAL;CN="Demi Obenour":invalid:nomail END:VEVENT BEGIN:VEVENT METHOD:PUBLISH UID:12335@FOSDEM22@fosdem.org TZID:Europe/Brussels DTSTART:20220205T171500 DTEND:20220205T173500 SUMMARY:Tracing KubeVirt traffic with Istio DESCRIPTION:Software development has been gradually shifting from monolithic to distributed containerized applications. Such applications are composed of components referred to as microservices. With the increasing number of microservices, it becomes increasingly difficult to understand how all the components communicate.
This is where the Istio service mesh comes into play. Istio allows developers to manage and monitor network traffic between microservices by providing features like mutual TLS, request retries, and request circuit breaking. Vendoring these features from Istio helps keep microservices focused on the actual application logic, as the features don't need to be implemented by the microservices themselves. The IT industry has broadly adopted this architecture, but there are still plenty of legacy workloads running in virtual machines, which can't easily take advantage of the features provided by a service mesh. At least, not until recently, when KubeVirt introduced support for the Istio service mesh.
Attendees of this talk gain insight into the concept of the Istio sidecar proxy. A short demonstration showing a typical use case of the Istio service mesh -- canary deployment -- is presented. Next, the talk explains the subtle differences in network traffic routing between regular Kubernetes pods and containerized KubeVirt virtual machines, leading to the challenges that these differences pose for traffic proxying. Finally, the changes necessary to support Istio for KubeVirt virtual machines are explained, and the resulting functionality is presented using the same scenario, but with the workload running in virtual machines instead of Kubernetes pods.
The takeaway of this talk is an understanding of the routing concepts behind the Istio sidecar proxy with regular Kubernetes pods as well as with containerized KubeVirt virtual machines. The audience will have a chance to observe a typical use case of Istio with both pods and virtual machines and get insight into the changes that made this possible.
CLASS:PUBLIC STATUS:CONFIRMED CATEGORIES:Virtualization and IaaS URL:https://fosdem.org/2022/schedule/event/vai_tracing_kubevirt/ LOCATION:D.virtualization ATTENDEE;ROLE=REQ-PARTICIPANT;CUTYPE=INDIVIDUAL;CN="Radim Hrazdil":invalid:nomail END:VEVENT END:VCALENDAR