Brussels / 31 January & 1 February 2015


All your cycles are belong to us

Volunteer computing in the age of the ubiquitous browser

We live in an age in which everyone owns two or three devices powerful enough to render a few seconds of a 3D movie, yet they are used mainly to send WhatsApp messages and take selfies for social network consumption. At the same time, all these devices carry the closest thing to a universal virtual machine that has ever existed: the JavaScript virtual machine, paired with a (roughly) standard object model that allows anybody with a bit of programming prowess to write programs that run anywhere and everywhere. There is great potential for massive distributed computing in this environment, but the ability to tap it for anything beyond basic computations is not there yet. In this presentation we will talk about the issues involved in doing so, from basic algorithmic problems to the twisted legal questions that arise when you use devices you don't own to perform computations their owners may be unaware of.
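As a minimal sketch of the kind of "basic computation" a page can farm out (the function names and the toy task are illustrative assumptions, not code from the talk): work is cut into small slices so the browser's single thread is never blocked for long, which is the basic trick that lets a page compute in the background while the visitor keeps browsing.

```javascript
// Time-sliced background computation, as a page could run it without
// freezing the UI. The toy task (summing squares up to n) and the
// names are illustrative assumptions.
function makeChunkedSum(n, chunkSize) {
  let i = 0, acc = 0;
  return function step() {            // run one small slice of work
    const end = Math.min(i + chunkSize, n);
    for (; i < end; i++) acc += i * i;
    return i < n ? null : acc;        // null = not finished yet
  };
}

// In a real page each slice would be scheduled with setTimeout or
// requestIdleCallback (or moved to a Web Worker); here we simply
// loop until the computation reports completion.
const step = makeChunkedSum(1000, 64);
let result = null;
while (result === null) result = step();
```

The same shape — small, resumable units of work — is what lets the page stay responsive, and lets the system throttle itself to only the cycles the visitor (or the operating system) is willing to give up.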

Volunteer computing was born out of a lack of funds and a surplus of computing power (and the connectivity to go with it) in the hands of labs and companies. Stealth computing was initially proposed by Barabási to take advantage of network-connected devices performing very basic computations. There is a very fine line between the two: a line that amounts to a button saying "Click here", or to the decision not to use more resources than those dutifully allotted by the operating system. There are also usability differences between volunteer and stealth computing, and perhaps ethical and legal ones. From the point of view of the computing model and the techniques involved, however, there is no difference.

In this talk we will explore the possibilities of stealth/volunteer computing using mainly the browser, and the issues involved in it. The main one is performance, both from the algorithmic point of view and in raw CPU cycles. How many CPU cycles can browsers actually contribute? Is that number predictable in any way? There is an intriguing edge to the performance model: it depends heavily on the web page itself, from its content, to its search-engine position, to the way it is announced on social networks and by whom. Even if the experiment as a whole shows some predictability, from the algorithmic point of view it is impossible to know in advance how long a particular user will keep contributing to the system, or when they will come and go. This is an issue the algorithm has to account for, so that the contributed cycles actually add something to the whole experiment and do not detract from it. We will talk especially about population-based algorithms, which are the ones we have some experience with, but any algorithm with dependencies (that is, anything beyond a brute-force approach) will face similar issues.
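One common way to make a population-based algorithm tolerate this churn is a pool model: volunteers borrow individuals, evaluate them, and report fitness back; work that is never returned is simply handed out again, so no single volunteer is essential. A toy sketch, with all names and the OneMax fitness chosen here for illustration (this is not the talk's actual system):

```javascript
// Pool-based evolutionary evaluation tolerant of volunteer churn:
// borrow() hands out unevaluated individuals, report() records a
// result, and abandoned work simply stays available for re-issue.
// All names are illustrative assumptions.
function makePool(size, genomeLength) {
  const rand = () => Array.from({ length: genomeLength },
                                () => Math.round(Math.random()));
  const pool = Array.from({ length: size },
                          () => ({ genome: rand(), fitness: null }));
  return {
    borrow() {               // a volunteer asks for work
      return pool.find(ind => ind.fitness === null) || null;
    },
    report(ind, fitness) {   // a volunteer returns a result (or never does)
      ind.fitness = fitness;
    },
    best() {                 // best evaluated individual so far
      return pool.filter(i => i.fitness !== null)
                 .reduce((a, b) => (a.fitness >= b.fitness ? a : b));
    }
  };
}

// OneMax fitness: count the ones. A volunteer that vanishes simply
// never calls report(), and its individual is borrowed again later.
const pool = makePool(10, 8);
for (let ind; (ind = pool.borrow()) !== null; ) {
  if (Math.random() < 0.3) continue;  // simulated churn: volunteer left
  pool.report(ind, ind.genome.reduce((a, g) => a + g, 0));
}
```

The design choice here is that the pool never waits on any particular client: unreturned evaluations cost only redundancy, not correctness, which is exactly the property an algorithm needs when contributors appear and disappear unpredictably.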
Finally, there are trust issues that must also be taken into account. Some are purely legal, but stealthy volunteer (or voluntarily stealth) computing should also follow open science best practices, since that is the only way to run a healthy and sustainable meta-computing device in which people, as citizens, play a central role.


Juan Julián Merelo