FOSDEM '08 is a free and non-commercial event organised by the community, for the community. Its goal is to provide Free and Open Source developers a place to meet.

   

Speakers

Interview: Kurt Pfeifle and Simon Peter

Kurt Pfeifle and Simon Peter are the authors of klik. At FOSDEM 2008 they will talk about the advantages of klik, and the architecture of the upcoming klik2.

You guys will dive into the technical details of klik at FOSDEM?

Kurt:
Yes, we'll discuss how klik makes applications 'virtual' so they can run from a single file on multiple distributions.

Once we have any application 'klik-virtualized', it is liberated from its normal confinement. That confinement makes it nearly inseparably super-glued to a single machine, where the many files that compose it must be spread across a specific directory hierarchy layout in order for it to work.

Without the help of sophisticated package manager software you cannot rip the original application off that system any more, and you will find it nearly impossible to transfer or copy the application to a different PC, especially if that PC runs a different Linux distro.

A single-file klik application image can be freely moved to a different location of the local file system and will happily run from there. Even USB thumbdrives or CD-RW media may be used to run klik-ified applications from.

The klik-ified application that is encapsulated in a single file can of course easily be deleted, backed up or copied to a different machine. We can pass it to a friend as an e-mail attachment, transfer it to another Linux distribution, or even host and use it side by side with many different (and even conflicting) versions of the same or similar applications.

In a way that single file archive of the application is also a confinement, a container, but one that is no longer inseparably superglued to a specific computer and Linux distro version.

BTW, in the Windows world this concept has also been gaining traction recently, where it is known as "portable applications" as well as "application virtualization".

What do you expect back from your FOSDEM presence?

Simon:
We'd like to showcase klik as a complement to traditional distribution-based package-manager-centric software packaging. Of course we hope to gain developers' interest and maybe even win new contributors.

We want to make absolutely clear that klik is not a package manager. It is not a competitor or a replacement for traditional RPM or Debian packages. Those are still needed and used in the base system, and also as ingredients for klik.

Kurt:
We'd like to show how well klik2 already works for different scenarios, even though we have not officially released anything and it is still in a pre-beta development stage. Even now, klik can be useful for developers, since it provides a safe sandbox for testing bleeding-edge, unstable applications without messing with the base system, and even for testing multiple versions of the same application on one and the same system without installation.

Simon:
We are still lacking developer skills in a few areas. By way of our demos and talks at FOSDEM we hope to attract new contributors, for example to complement our existing Gtk/Gnome-based client GUI with one that integrates better into Qt/KDE environments.

Another example where we need help: KDE is well known for its deep inter-integration of applications. One of the challenges with KDE that still puzzles us is how to make a klik bundle use its own embedded version of the manual when you click the "help" button, and not show the old version that may already be installed on the system (or nothing, if the system doesn't have the app installed). Another is how to deal with the sycoca cache. We hope to resolve these kinds of questions by meeting and talking with KDE people at FOSDEM.

Virtualization and Live CDs are often used as alternatives for 'installation' these days. Could klik be the solution that solves the traditional problems?

Simon:
What Knoppix and VMware have done on a system-wide level, klik is doing on a per-application level. Live CDs are a great way to test and use Linux distributions. But out of necessity they can offer only a limited selection of pre-installed applications. OS virtualization doesn't have this limitation but it comes at a performance cost, as well as with an overhead of resource consumption. In many cases, it is not necessary to virtualize an entire OS just because of one application.

On the other hand, once you run an OS from a Live CD, you can easily add more applications to it by attaching a USB thumbdrive that contains all the klik application bundles you want to use with it.

Kurt:
Here are just a few more examples of different use cases:

  • User Anna's favorite distro X doesn't have the new application version CoolApp-123 in its repository. But Anna is dying to test-drive version 123 of CoolApp, and she knows that distro Y does have it... but she also doesn't want to change distros just for one application. Now klik-ify Y's native package of CoolApp-123, and -- voila!, you've got an application bundle that runs for Anna on X as well, and she can even mail it to her boyfriend Aaron, who happens to run distro Z!
  • User Benny is a fan of testing Live CDs. He has a collection of more than 100 already, and he uses his 5 favourite Live CDs for his day-to-day work as well. But he can never be sure of having all his favourite applications available on the CD he has currently booted. So he puts all his favourite klik bundles on a USB thumbdrive, and he can always use the applications he likes and is used to.
  • Commercial Company CopyCatCo used to sell their proprietary software CopyCatApp on MS Windows and Apple Mac platforms only. But now they are exploring new waters with their first port to Linux. However, they don't like that users expect them to build and provide comfortable package repositories for each and every major distro. Business sense tells them to go for one package format only, so they decide on an LSB (Linux Standard Base, ed.)-compliant RPM. Debian users are left out, right? -- Ha!, no problem for klik: it takes the CopyCatApp.rpm, converts it to a single-file CopyCatApp.cmg, and that file can be run on practically all distros right away.
  • FOSS Developer Danny is still quite a few months away from a final release of his cool new FooBarBaz software version, which is a complete re-write of its very successful predecessor. FooBarBaz has a team of non-coding contributors: translators, art designers, usability experts, documentation writers. These folks don't like to compile software from source code, and they don't have time to do it every day either. However, they would like to start working on their own contributions for FooBarBaz, and it would be very helpful if they could see the code Danny has written so far in action. But Danny's resources are not sufficient for nightly builds of binary packages for each and every distro version that his team of helpers happens to run. A klik bundle to the rescue!, and everyone can test-drive the nightly build regardless of which distro they are on.
  • IT Magazine Publishing House ElElements likes to add gimmick CDs to some of their monthlies, containing test versions of new software releases, commercial as well as proprietary. While their Windows-using readers hardly ever have problems installing the .msi packages, the complaints from Linux users like "Why did you only provide a BlueHat 9 RPM?!?" are never-ending... So isn't it cool for them that they can now ship klik bundles that run directly from CD, on any LSB distro, without any need for installation at all? (They'll soon have to deal with complaints from their Windows users about that inconvenient installation process, but that's not our problem... :-)

The possibilities are endless. Does klik have disadvantages too?

Kurt:
klik is not a package manager. But that is by design. klik is intended for (add-on) applications (the stuff the end-user cares about), not for routine maintenance of the underlying core operating system (the stuff the distro vendor takes care of).

And since klik is not meant as a package management system, it also does not (yet) provide features such as automatic updates and upgrades, which are the typical domain of system and package management software. klik places the responsibility for using klik bundles in the users' hands, but makes this task extremely easy.

However, there is no iron limitation that prevents anyone with the skills and motivation from creating additional infrastructure and GUI front-ends for klik clients that make it easy to "upgrade" klik bundles. Maybe this is something for us to do at a later stage, but we currently have no plans for it. One day someone might come along and pick up klik's basic format to build a complete distro architecture around it. But this is not our own intention right now, as we believe the world doesn't need yet another distro.

Simon:
Some readers may ask: Doesn't the klik concept lead to a lot of duplication, with certain libraries provided multiple times inside different klik bundles? And doesn't that waste hard disk space?

The answer is: you'd be surprised how rarely this type of duplication occurs, if the definition of "the base system" is done well. To make klik bundles work, klik of course needs to assume a certain "fixed" base system at the OS level with a defined set of available system libraries and applications. LSB-3.2 is a good foundation for that. Everything else the application needs is packed into the klik bundle. Upon execution, some functions needed by the application are of course still provided by the base system.

And as for the "waste of hard disk space" -- let's look at some rough figures: a full, traditional installation of OpenOffice.org takes about 300 MB on the hard disk, with 4000 files spread across 400 sub-directories. A klik bundle of OOo takes less than 150 MB encapsulated in a single file, and that file lets OOo start and run without major noticeable performance loss...

Which brings us to the performance topic. Sure, klik adds a layer of overhead for the application virtualization. However, the performance hit is barely noticeable in most day-to-day use cases.

Kurt:
Also, if you discuss "disadvantages" of klik, please keep this in mind: klik is geared towards typical end-user applications with a GUI. While klik can handle CLI-only applications as well, it is not intended for packages that contain only libraries, for long-running daemons, or for applications that need to bind to a privileged TCP/IP port or need root privileges in other ways.

For me personally, the main disadvantage of klik is currently its still unfinished state. And the fact that the environment where it can come to full fruition is only now getting ready: namely a fully matured LSB release that gets widely adopted by the different Linux distros and provides a universally available platform, where the broad binary compatibility of traditional packages built by Debian, Red Hat or openSUSE Build Service contributors provides the tailwind that lets the klik2 ideas sail ahead at full speed.

Do you know of similar systems? Did you draw inspiration from Mac OS X?

Simon:
The very first Mac in 1984 basically followed the "one app = one file" approach. But the fundamental idea is even older. Heck, it is the most natural thing to do!

Kurt:
If you have an application that needs and uses many different files it is the most obvious idea to group those files into one common place, isn't it?

Against that background, we rather draw our inspiration from looking at, and getting challenged by, some of the old-school Unix gurus who pop up from time to time and try to teach us that the right way to do *everything* is to spread and sprinkle all the files needed for an application's execution across the entire system and hard disk.  :-)

Simon:
To be fair to the Unix veterans just mentioned: since the times of the first Mac, complexity has increased a great deal, and applications no longer consist of just one file. Complexity has increased to the point where the user is no longer in charge of managing software, making package managers a necessity.

Also, to handle that complexity, Unix systems came up with a concept that groups similar files from different apps into the same places: config files go to /etc/, libraries go to /usr/lib/, executables go to /usr/bin/ (unless they are meant for root, in which case they go to /usr/sbin/), etc. This system was designed for large machines with multiple hard disks, multiple users and a knowledgeable system administrator, not for the casual user with little clue and a personal notebook.

Kurt:
That concept works, and works well enough, for all applications that are part of the base system and as such are handled by the distro vendor tools and package managers. It would even work for all end-user packages forever if the Linux world consisted of only one distro and everybody developed and packaged for that one universal distro only.

However, in the real world this concept starts to get overly complex for end users who simply want to run new software that is *not* available for their own distro. Or who need, or just want, to run different versions of the same application on the same host system without the repetitive, time-consuming administrative cycle of "shut down version A, uninstall version A, install version B, start up version B".

klik tries to dramatically reduce that complexity for the user again, by giving him back the "one app = one file" simplicity. When Mac applications became more complex, they were distributed in "application folders" or "disk images" (which get unpacked and expanded into application folders upon installation) instead of "one file".

klik however takes this one step further by offering a way to use "one file images" even as the default way of storing and using applications, if the user likes that. If you don't want to go that far, you can use it just on those occasions where you find it convenient to test-drive some interesting new application in a fast and effortless way that does not mess with your base system, should the application in question turn out to be too buggy.

Simon:
If you look a bit more closely at the internal structure of a klik application image, you'll see that it does not overturn "the Unix way" of file system layout at all. On the contrary, it mirrors and re-uses that very same structure internally for all the application's own files, but it archives them into a compressed ISO image and hides that internal complexity from the end-user, who only ever needs to handle the single file.

So in a way, klik is not at all a denial of the Unix concept and of FHS (Filesystem Hierarchy Standard, ed.), klik just applies it on a different abstraction level with the individual application as the topmost entity.

Kurt:
One more comment about "inspirations", and who inspires whom....

If you care to look around in the IT world outside FOSS and Linux, you'll discover a slew of Windows-based companies that are currently working on proprietary software based on ideas similar to klik's. But not for Linux; they do it for the benefit of Microsoft operating systems.

I don't know any architectural or implementation details about any of them. I don't know if any of them has gone to the "1 app == 1 file" extreme yet, but they clearly speak of "application virtualization". They also want to tackle the "DLL Hell" that Windows is infamous for, much as we want to tackle the "packaging maze".

Oh, and for additional complication they can't limit themselves to an application's files only, they also have to virtualize the registry...  :->

Simon:
Windows application virtualizers (last I looked, Google found at least half a dozen of them) also want to put applications into movable containers which can be easily deployed to many different computers by means of simple file copy steps. Big companies are working on that, and they have started to take virtualization beyond the level of the complete computer or OS. Additionally, projects and companies that offer "PortableApps" or "U3" USB stick software have gained some popularity on Windows.

Kurt:
To us it rather looks like some of the activities we currently can observe in the proprietary world have drawn their inspiration from us, the Free and Open Source Software world.  :-)

My very personal opinion about virtualization on the application level is that, at least in the Windows world, we will see a similar effect as we are observing with virtualization on the OS or hardware level: it will open up a whole new family of use-cases. There will also be a time of this getting hyped, and it will be soon... Additionally, I'm sure we will see some scenarios where the benefits of concurrent virtualization on the OS as well as on application level will be exploited in a combined form.

Who is actually doing all the klik packaging? Are you maintaining your own community of packagers, or do you prefer to collaborate with the upstream projects?

Simon:
klik utilizes existing binary repositories, such as Debian's. XML files called "recipes" tell the klik client application where to fetch the binary "ingredients" for a klik bundle, and how to meld these ingredients into a single klik "app bundle" file.

The ingredients may be .deb packages from the official Debian repositories or from elsewhere, they may be .rpms from Fedora's yum repos or from openSUSE's Build Service, or .tgzs/.tbz2s from Slackware, or .packages from Autopackage. They may even consist of a mix of several of these package formats. Ideally they are built on an LSB reference platform.

The result of processing these ingredients according to the recipe is a portable klik application bundle, a single file.
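
To make that concrete, here is a minimal sketch (in Python, for illustration only) of the information such a recipe carries; the field names, package URLs and the "coolapp" example are invented and do not reflect klik's actual XML schema:

    # Hypothetical, simplified picture of a parsed klik recipe; the real
    # recipes are XML files and their exact schema may differ.
    recipe = {
        "name": "coolapp",
        "description": "CoolApp, an invented example application",
        "ingredients": [
            # original packages, fetched from their upstream repositories
            "http://ftp.debian.org/pool/main/c/coolapp/coolapp_1.2-3_i386.deb",
            "http://ftp.debian.org/pool/main/c/coolapp/libcool2_2.0-1_i386.deb",
        ],
    }

    for url in recipe["ingredients"]:
        print("would fetch:", url)   # the client downloads and unpacks these,
                                     # then packs the result into one .cmg file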

Kurt:
As you can see, typically klik itself doesn't compile from source, and it doesn't package anything from scratch. Therefore, klik does not make traditional packaging methods and skills obsolete. On the contrary: klik re-uses the work and the skills of existing packagers, and adds more value to their efforts by offering new use cases and wider adoption of certain packages.

So, in essence, klik is not "packaging" a .cmg (klik's file extension for a klik application bundle inside a compressed ISO filesystem) from scratch; rather, in most cases it is "re-packaging", using pre-existing .deb, .rpm and other packages.

Simon:
Currently, most of the klik recipes are automatically generated by the klik server, and they have a very standard, uniform format. However, some need to be hand-tuned in order to make them fully work, or to make them work on a particular distribution.

Kurt:
Here we are looking for volunteers to maintain the recipes, ideally from the respective packages' upstream projects. If readers want to help, they are very welcome. The work involved here is mainly quality assurance for a few specific bundles that you adopt: make sure they work flawlessly on your own favourite distro; if one doesn't, help us find a fix and test it...

Simon:
BTW, such klik QA would at the same time prove and ensure the quality of the Linux Standard Base itself. To the extent that klik bundles run flawlessly on each LSB-compliant distribution, they are proof not only of the klik recipe XML's correctness, but also that the binaries inside the recipe's ingredient (native) packages were built in an LSB-compliant way.

Kurt:
At this point, it may be of the foremost interest to our readers to get a "helicopter view" of how klik works under the hood.

klik screenshot: Download and run flock?

If a user executes "klik get coolapp", the klik client contacts the klik server. The klik server then checks whether it already has a ready-made coolapp.xml recipe in its database and, if so, sends that back. Otherwise, it checks whether the Debian repository has coolapp available.

It then computes all the direct dependencies for coolapp (additionally required .deb packages needed to run coolapp), locates the exact download URLs for each package and puts them into the recipe. It also includes the package description from Debian. As icing on the recipe cake, it searches the "Debtags" repository and adds all the software category information it finds about coolapp to the XML file.

The recipe is sent back to the klik client. The klik client then simply "executes" the recipe, which means it fetches all the indicated "ingredients" from their original repository locations and builds the single-file .cmg bundle from them.
This way the klik server does not need to cope with the huge bandwidth demands of distributing pre-fabricated .cmg files. And we don't need to re-build packaged files on the server each time a recipe or an ingredient gets updated.
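
As a rough illustration of that flow (this is not klik's actual client code; the server URL, the recipe tag names and the final packing step are assumptions made for the sketch), the client side of "klik get" boils down to something like:

    # Sketch of the client side of "klik get coolapp"; URLs and XML tag
    # names are invented, and the final .cmg packing step is only hinted at.
    import urllib.request
    import xml.etree.ElementTree as ET

    def klik_get(app_name, server="http://example.org/klik"):
        # 1. ask the klik server for the (mostly auto-generated) recipe
        url = "%s/recipes/%s.xml" % (server, app_name)
        recipe = ET.fromstring(urllib.request.urlopen(url).read())

        # 2. fetch the "ingredients" directly from their original
        #    repository locations, not from the klik server
        payloads = [urllib.request.urlopen(ing.get("href")).read()
                    for ing in recipe.findall(".//ingredient")]

        # 3. a real client now unpacks the ingredients and packs the result
        #    into one compressed ISO image: the single-file .cmg bundle
        return payloads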

Simon:
This architecture of course offers a convenient way for upstream software developers and maintainers to provide their bleeding edge versions to beta testers and other end users without needing to find a packager for each and every distribution out there.

Developers can simply build *one* traditional package for a distro of their liking, and maintain the simple klik XML recipe (which can be hosted on the klik server) that lets the klik clients on all other distros convert that original ingredient package into a portable klik bundle that runs everywhere.

Kurt:
Since the download of the ingredients can still happen from their original location, they even keep full control over their download figures and statistics if they want. This also allows proprietary applications to be used with klik, because we don't have to legally "distribute" them.

On top of our server-provided (and mostly auto-generated) recipes, anyone can of course easily write their own hand-tuned recipes from scratch and use the klik2 client to run them in order to create portable applications. We know that one of the two main developers of Beagle did just that last X-mas... And he did not even have to ask us for directions in advance. We just stumbled by accident upon his blog, where he described how to use his recipe with the klik2 client.

Uhmm... maybe I should re-evaluate my earlier statement about "klik not being really suitable for long-running daemons" now? :-)

Simon:
We hope to win over many upstream projects to test, tune and update the respective recipes for their applications, especially where their own development versions are concerned. That would be a win not only for klik but also for the upstream projects, because they can simplify their packaging and consolidate it on the one distro they like most. klik then takes care of converting it into a format that runs in userspace on all other distros, even without an installation as we know it.

A packager or developer who happens to like niche distro ABC a lot, uses it, and creates good, working, LSB-compliant packages of some bleeding-edge, unreleased software for this favourite distro may see his work used and tested by only a few users of that niche distro. With klik, he can effortlessly open his work up to a much wider audience, and double, quadruple or even multiply his user base a thousandfold in no time.

So yes, we are seeking intense cooperation with upstream projects as well as with creators of traditional packages, since we need the fruits of their work to make klik tick, and klik in turn makes their work more valuable and more widely available.

How does klik2 fit into this grand vision?

Simon:
klik2 is a rewrite from the ground up. Let's see if we can summarize all the changes in a few bullet points...

klik1 (which started in 2004, BTW) was more of a proof-of-concept. klik2 will be the "real thing", now that klik1 has demonstrated the viability of the fundamental concepts.

The klik1 client was written in Bash. klik2 uses Python.

klik1 used zisofs and cramfs for image format and compression. klik2 uses zisofs-compressed ISO files.

klik1 used Xdialog, kdialog and zenity dialogs for user-facing GUI elements. klik2 will be able to provide fully-fledged GUIs based on Gtk/Gnome and Qt/KDE (or even Tcl/Tk, PyGtk, PyQt, FLTK, or whatever else anyone who wants to write a klik GUI cares to use).

Kurt:
klik1 relied on loop-mounting the application images, and was restricted to a maximum of 7 concurrent mounts at a time. klik2 takes advantage of FUSE (since FUSE is now widespread enough) and can therefore run a practically unlimited number of klik applications concurrently.
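
As a rough sketch of what the FUSE approach means in practice (the image path, mount point and the choice of the fuseiso tool are assumptions for illustration; klik2's real client does considerably more), mounting a compressed-ISO bundle needs no root privileges and no kernel loop device:

    # Illustrative only: mount an application image with a userspace FUSE
    # ISO tool instead of a kernel loop device; all paths are invented.
    import os
    import subprocess

    image = os.path.expanduser("~/klik/coolapp.cmg")
    mount_point = os.path.expanduser("~/.klik/mnt/coolapp")
    os.makedirs(mount_point, exist_ok=True)

    subprocess.run(["fuseiso", image, mount_point], check=True)    # no root needed
    # ... the client would now run the application out of mount_point
    #     (see the union mount / fakechroot discussion further below) ...
    subprocess.run(["fusermount", "-u", mount_point], check=True)  # userspace unmount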

Screenshot: Fuse filesystem on Ubuntu

klik1 based itself on Debian Stable, and could only guarantee to work on Debian. klik2 relies on LSB-3.2/4.x, will fully work on all LSB-compliant distros, and will even work for many applications on most other distros.

Simon:
One new thing klik2 will gain is the option to "sandbox" an application:

  • While klik1 apps didn't interfere with the base system, they could still mess with the user's "dot files" and personal data if he was careless enough to run an unstable, bleeding-edge application inside his standard environment.
  • klik2, however, will be able to activate a sandboxing feature on demand, which forces any write access by the application into a separate directory, so that even user data can be safeguarded against corruption or unintentional overwriting, for example when a bleeding-edge Thunderbird is test-driven with the help of klik. This will also allow one to carry around applications along with their settings, e.g. on a USB stick. Think "Portable Firefox" with bookmarks.

Kurt:
klik1 had to use an ugly binary-patching technique when creating the .cmg in order to get rid of absolute paths that were hardcoded into the binaries. Had we kept these hardcoded paths, we would not have been able to make the image relocatable for loop-mounting. klik2 uses a union file system and libfakechroot to make applications portable and relocatable, even if they have hardcoded paths inside.
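
A very rough sketch of the relocation idea (not klik2's actual code; it assumes the image is already mounted somewhere, uses invented paths, and omits the union mount that merges in the base system):

    # fakechroot preloads a library that intercepts chroot() and path
    # lookups, so an unprivileged user can make hardcoded absolute paths
    # resolve inside the mounted image tree. Paths here are hypothetical.
    import subprocess

    mounted_image = "/home/anna/.klik/mnt/coolapp"   # the mounted .cmg contents
    subprocess.run(["fakechroot", "chroot", mounted_image, "/usr/bin/coolapp"])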

Simon:
klik1 recipes were simple shell scripts without verification. klik2 will sport a new recipe format that is based on 0install XML and embeds digital signatures for verification. In fact, we worked with Thomas Leonard, the 0install developer, to use a common syntax for 0install and klik XML files. Maybe one day you will be able to just use klik2 XML recipes to drive a 0install-ation and vice versa...

Kurt:
klik1 was never packaged as a client application by the major distros. With klik2 we hope to enter an era where the klik2 client is shipped and pre-installed by default.

What will stay the same? The "one app = one file" idea. The ease of use for the end user. The degree of automation for packagers. Our quest for adding more features by re-using the work of others in the FOSS community.

Klik is technically very attractive, and it does wonders for usability. Do you feel it could become even more popular than it is now?

Kurt:
Definitely, since it has barely gotten started yet. Not too many regular Linux users have heard about it, let alone used it.

So far, klik has only been proof-of-concept, experimental software. We hope to get the full, proper implementation of the idea in place with klik2, and we hope to see widespread adoption and embracing of klik2 by the FOSS community.

Then we'll be able to enjoy features on any Linux distro that are still far away for users of proprietary OS platforms.

Simon:
klik1 remained a proof-of-concept, known only to a few. With klik2 we hope to become more mainstream in the not too distant future.

Also, klik1 only worked reliably for 80% of users and 80% of applications. Had we had more manpower, and had we invested more manual tweaking in our automatically created server-side recipes (basically, separate #ifdefs for each major distro), we could nevertheless have achieved 95% figures even with klik1.

But now, with klik2 and LSB we hope to achieve 98% with much less effort in a very short time.

The first time you see it in action, klik seems to do magic... That makes you magicians :-)

Simon:
More like Hogwarts students in FOSS wonderland... ;-)

Kurt:
Seriously, if you look a little bit closer at it, you'll quickly discover that the magic of klik is not so much in klik itself, but in the components it drags into its machinery and orchestrates to produce the end result: FUSE and union mounting; server-side APT and dependency resolution; decentralized client-side execution of klik's recipes with re-packaging of ingredient .rpm, .deb, .tgz or .tbz2 packages; the expert skills and the hard, industrious work of many hundreds of Debian and RPM packagers, whose products klik recipes use as "ingredients"; the people who create, maintain and continuously expand the marvelous "Debtags" database from which klik automatically draws package descriptions.

So a lot of people will soon find their own work re-used by klik2 at some stage of klik's execution, perhaps without having been aware of it before. All of them should regard themselves as part of the "klik magician posse".  :-)

FOSDEM Live Streaming

Linux Magazine

Thanks to our main sponsor Linux Magazine, the FOSDEM 2008 main track talks held in Janson will be available as a live stream on Saturday and Sunday.

Speaker interviews

One month to go until the event seems like a nice time to start reading up about the topics that will be covered at FOSDEM 2008.

Enjoy the first speaker interviews, and expect some more in the coming weeks.

Interview: Steven Knight

Steven Knight will present his project SCons at FOSDEM 2008.

What do you expect from your presentation at FOSDEM?

I mainly want to get the word out about how to use SCons effectively for Open Source development, and to get direct input on how to help improve SCons so it can be made even more effective for that purpose. The biggest expectation I have is to get a lot of good, in-depth feedback from people with software build issues (both potential and actual SCons users, and others). Finding out what problems people have or what doesn't work is more valuable than hearing whether they like something I work on.

SCons is cross-platform. Is that a major feature for many users?

It seems to be. For example, I've heard of a fair number of gaming projects (both commercial and Open Source) that use SCons for development, reportedly because it makes it easier to manage development across the multiple platforms they want a game engine to support (multiple gaming consoles and a Windows version and...). The cross-platform support's also a big attractor for many of the larger enterprise software vendors that use SCons to maintain software on multiple platforms.

Is it hard to remain cross-platform, or does Python do most of the heavy lifting?

Leveraging Python does give our cross-platform support a significant headstart, in that we don't have to wrestle directly with a lot of the lower-level implementation portability issues that projects that use C/C++ directly often have to worry about. That having been said, remaining truly cross-platform is a constant struggle.

The most consistent problem area is Windows portability, especially path name manipulation. The underlying Python os.path module only takes you so far because it only abstracts out path name syntax, and you still have semantic issues with how the path names interact with the rest of your code and the outside environment.
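
A tiny example of that syntax-only abstraction (generic Python, not SCons code): the platform-specific path modules will happily join names for you, but whether the resulting string means anything to the tools and shells involved is still your problem.

    # posixpath and ntpath are the platform-specific backends behind os.path.
    import ntpath
    import posixpath

    print(posixpath.join("build", "obj", "foo.o"))   # build/obj/foo.o
    print(ntpath.join("build", "obj", "foo.o"))      # build\obj\foo.o
    # The syntax is handled; how the string interacts with compilers,
    # environment variables and the shell is the semantic part left to you.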

We also have a lot of the usual issues with trying to keep up to speed with different toolchains, and behavioral differences on various platforms.

Which advanced features of Python does SCons rely on?

Actually, SCons relies on *no* advanced features of Python, if by "advanced features" we mean things only available in newer versions of the language. SCons is written so that it will work if all you have installed is the ancient Python 1.5.2 (which was still the default Python in Red Hat 7.3, for example). We've started to make use of more modern Python modules (like subprocess) and features (like sets) but only if we have some mechanism, like an emulation layer, to keep the functional code compatible with 1.5.2.
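
The emulation-layer pattern meant here can be as small as the following (a generic sketch of the idea, not SCons's actual compat module):

    # Prefer the modern built-in when it exists, otherwise fall back to an
    # emulation, so the rest of the code uses one spelling everywhere.
    try:
        set                              # built-in type on newer Pythons
    except NameError:
        from sets import Set as set      # emulation module on older Pythons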

That said, we do use Python's introspective capabilities for certain features, mostly things like inserting layers dynamically to display tracing or debugging information without slowing down the normal code path. Also, our parallel build (-j) support is based entirely on Python's threads, with a simple but effective worker-pool architecture.
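
The worker-pool shape described here is roughly the following (a generic thread-pool sketch, not the actual SCons job scheduler, which also has to respect dependency order):

    # Generic -j style worker pool: N threads drain a shared queue of tasks.
    import queue
    import threading

    def run_parallel(tasks, num_workers=4):
        work = queue.Queue()
        for task in tasks:
            work.put(task)

        def worker():
            while True:
                try:
                    task = work.get_nowait()
                except queue.Empty:
                    return
                try:
                    task()               # e.g. run one compile command
                finally:
                    work.task_done()

        for _ in range(num_workers):
            threading.Thread(target=worker).start()
        work.join()                      # block until every task is done

    run_parallel([lambda i=i: print("building target", i) for i in range(8)])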

Would those be possible in other languages too?

The underlying ideas in SCons -- that is, managing a full-tree build by using a scripting language API to program the dependency graph -- can certainly be implemented in other languages. But since SCons configuration files actually *are* Python scripts, the result wouldn't be SCons, of course. (Backwards compatibility might be a bit of a problem...  :-))

How is SCons tested?

We use a very strict testing methodology, adopted from Peter Miller's Aegis change management system, which requires that every change has at least one new or modified test (by default), and that those tests must not only pass when run against the new, modified code, but must also *fail* when run against the currently checked-in, unmodified code.

We started with this methodology from day one of development, and have over time built up a really strong regression test suite. The tests are every bit as much a part of the "product", in that we treat any reported bug not just as a problem in the SCons code itself, but also as a hole in our test coverage that must be fixed.

We use Buildbot to make sure every change is tested on multiple platforms and against every major version of Python from 1.5.2 to 2.5.

We just had a user report that he was pretty impressed by how little hassle it was to upgrade a complicated code base from a three-and-a-half-year-old version of SCons. That speaks well for our development methodology and our emphasis on backwards compatibility.

A flexible system is a system that can be 'abused' for many purposes. Have you seen crazy things being done with SCons?

Sure, but usually within the realm of transforming software source files into target files. People have written some pretty comprehensive code to generate lists of target files dynamically from various arbitrary input. Our Wiki has a pretty extensive set of Builder modules that people have contributed for everything from CORBA to C# to Haskell. We've even had someone write a wrapper in Lua around the underlying SCons build engine.
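
For readers who haven't seen one, a contributed Builder is often just a few lines in an SConstruct. This toy example assumes SCons is installed; the GzipFile name, the gzip rule and the README.txt input are made up for illustration:

    # Toy SConstruct: register a custom Builder and use it like a built-in.
    env = Environment()
    gzip_builder = Builder(action="gzip -c $SOURCE > $TARGET", suffix=".gz")
    env.Append(BUILDERS={"GzipFile": gzip_builder})
    env.GzipFile("README.txt.gz", "README.txt")   # run with the "scons" command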

How is the progress towards version 1.0 going?

Given how much attention we pay to backwards compatibility, a lot of people have suggested that we should just call what we have 1.0 and be done with it. There's a good case to be made for that.

Nevertheless, we've always wanted 1.0 to be really ready in as many details as possible, mainly so that people who look at SCons for the first time when 1.0 is announced have as positive an experience as we can reasonably make it. The main outstanding issues here are:

  • We're still shaking out issues from a big refactoring of our signature mechanism. That's been in our checkpoint releases since September, and it's very functionally stable at this point, but I'm still looking at some performance ramifications.
  • Our Autotools-like functionality still leaves too much to the individual SCons user. It's kind of like having the underlying Autoconf without the higher Automake layer that made things really useful by giving every package the same targets and build behavior, so that everyone can just "./configure; make; make install" (the existing configure support is sketched just after this list). Maciej Pasternacki's Google Summer of Code project last year was targeted at this, and we're trying to finish that so we can roll it out before FOSDEM.
  • We'd like a way for people to configure our implicit dependency scanner (used to derive dependencies automatically from #include lines) to more accurately reflect the symbols defined by the C preprocessor.
  • The User's Guide has been lagging behind the features that are present in the man page.
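
For comparison, the Autoconf-like layer that already exists looks roughly like this inside an SConstruct (the library and header names are just examples); what is still missing is the Automake-style layer of standard targets on top of it:

    # Sketch of SCons's existing configure-style checks in an SConstruct.
    env = Environment()
    conf = Configure(env)
    if not conf.CheckLib("m"):               # is the math library available?
        print("libm not found")
        Exit(1)
    have_zlib = conf.CheckHeader("zlib.h")   # record an optional dependency
    env = conf.Finish()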

What was your personal reason to start working on SCons?

I started working with Cons, the Perl-based predecessor to SCons, in 1998 when I found that it basically solved the problem of building multiple side-by-side variants in a single dependency graph with minimal work. I got Cons to do what I wanted in one hour of work from a standing start, instead of weeks of monkeying with Makefiles and still not getting them to build variants in any extensible way. I ended up doing the majority of ongoing work on the Cons code base (including contributing an extensive test suite) before that project basically withered on the vine.

Despite the fact that Cons made me a fairly knowledgeable Perl hacker, I had always been leery of Perl's esoteric syntax and TMTOWTDI-derived readability issues. The Software Carpentry competition provided a concrete reason to see if I couldn't take the things I knew needed to be improved in the Cons architecture and make them work better in a friendlier language. So I made the switch to Python and have never looked back.

Since then, I've simply found that I really like working on this problem of trying to give people a framework to make it easier to manage really hard software build problems. One of the things I like most about it is that it's so unglamorous -- most programmers would rather be working on "real software" than the internal build infrastructure. So management typically assigns the Makefiles to be maintained by junior people or summer interns, and then we scratch our heads at why the builds are unreliable and we have to "make clean; make" all the time to make sure the dependencies are correct...

Where does the name SCons come from? Especially the 'S' :-)

The real evolution of the name is pretty mundane. Most of the underlying inspiration does come from the old Perl-based Cons tool. The Cons-inspired design that won the Software Carpentry competition in 2000 was actually named SCCons, for "Software Carpentry Cons". In practice, repeating the two 'c' characters when typing "sccons" looked and felt too much like a typographic error, so I dropped one, and the shortened name was retconned to stand for "Software CONStruction."

The notion that "SCons" is short for "Steven's Cons" is purely an ugly rumor...  :-)

Interview: Robin Rowe

Gabrielle Pantera and Robin Rowe will present Tux with Shades, Linux in Hollywood at FOSDEM 2008.

Do you have a main goal for your talk?

To entertain and inform. We expect to have fun!

How does CinePaint differ from plain GIMP, apart from higher fidelity?

CinePaint is software for painting and retouching sequences of high fidelity (more bits per pixel) high dynamic range (can go brighter than a white sheet of paper) images. If you're a film studio or pro photographer you have to choose CinePaint over GIMP because GIMP can't open your high fidelity images without crushing them. High fidelity and HDR images are not normally encountered outside of the film, pro photography and print industries. You won't see these images on the Web because the files are very large and monitors lack the color fidelity to reproduce them. Film has more dynamic range.

Because you asked, and since I'm the project leader, I'll go into more detail on CinePaint here. We're not planning for CinePaint to be the focus of our talk at FOSDEM, which covers the much broader topic of Linux in the film industry.

The core feature GIMP and CinePaint have in common is the clone brush. That's a tool that copies pixels from one area to another to retouch an image. GIMP and CinePaint, despite outward similarities, are different internally. CinePaint has a high fidelity image core. CinePaint handles 8-bit, 16-bit and 32-bit per channel HDR (high dynamic range) images. GIMP has only an 8-bit core, but has more features and bells-and-whistles. GIMP is typically used on JPEG, PNG, and 8-bit TIFF images. CinePaint is typically used on DPX, EXR, and 16-bit TIFF images (and also supports JPEG, PNG, etc.).

The CinePaint project adopted software that was a forgotten fork of GIMP. That fork was created by some GIMP developers in 1998-1999 with funding from the film industry. I was only slightly aware of it. It wasn't until 2002, while writing an article for Linux Journal, that I noticed Film Gimp in use at a studio. I got a copy of the source code and played with it. When my article came out, readers wrote asking me for the code. Then people started sending me patches. I made the code available to everyone through SourceForge. The GIMP clique became quite upset with me, an outsider, releasing software they thought they'd buried in 2000 when they proposed creating GEGL instead. Some are still angry that I'm giving away free open source software that they wish forgotten.

Can you compare CinePaint's Glasgow architecture to GEGL?

I've never been a member of the GIMP or GEGL projects, so I can only comment on those as an outsider. The GEGL architecture is a node-based image processing library. Its implementation details have changed significantly since the original GEGL proposal in 2000, with each new GEGL technical lead, but the concept remains the same. GEGL is quite different architecturally from GIMP. The GIMP architecture is a tile-based framebuffer with executable plug-ins that communicate over a library-based "wire" protocol to manipulate frames.

The Glasgow architecture is an evolution of the GIMP architecture. Glasgow takes into account several design premises that were not true when GIMP was designed and that go beyond GIMP's mandate:

  1. We care about high fidelity images (more bits per pixel) and high dynamic range images (brighter than white). We accept the more complex core architecture necessary to deal with multiple bit-depth images. HDR images are becoming less exotic all the time because digital pro photography uses them now, too.
  2. We care about exotic features specific to the film industry and pro photographers, such as movie flipbooks, HDR, bracketed composite images, 16-bit gallery-quality printing, CMS, Z-depth, and exotic image types such as RAW, CMYK, and XYZ.
  3. We care about maintainability and debugging. The plug-in wire protocol needs to be transparent. Rather than use varargs (which is what GIMP uses to marshal data to plug-ins) we like ASCII strings and direct memory access.
  4. We care a lot about performance. We have bigger images to process. One 2k resolution DPX image is 12MB. A 90-minute film at 24 frames per second has about 130,000 DPX frames, a total of roughly 1.6TB of data. We have more to gain from running faster. One way to go faster is to load more into memory at once. (In the nineties when GIMP was designed, memory was precious and had to be conserved no matter what the performance cost.) Another way to go faster is to make plug-ins DLLs. (GIMP runs each plug-in in its own process space.)
  5. We care about automation and interoperability features such as macro-recording, scripting, networked and shared-memory operations with multiple tools.
  6. We care about renderfarm grid support to perform operations on many images simultaneously in a headless environment, not just one user modifying one image at a time in a GUI.

Sounds pretty impressive... In one of your Linux Journal articles, you described the Linux platform in use at DreamWorks Animation studio. However, from the application software point of view, it appears that more of the software is being kept proprietary. What are the reasons these companies don't embrace open source for their applications as well?

First, studios no longer have to write proprietary Linux tools. DreamWorks Animation was a Linux pioneer. Reasons for studios to build their own tools today are the competitive advantage in having better tools than your competitors or that you simply don't believe others can make the best tools for your needs.

The studios attract the best software designer talent on the planet. The work is very sophisticated. Open source studio application projects don't have the resources to compete with the internal and commercial tools that studios have. More on that is explained in one of the questions below.

Studios have tried to support open source. CinePaint exists because the film industry funded some GIMP development in 1998 and 1999. That GIMP never released what the film industry funded didn't help the open source cause. If CinePaint could have been released as GIMP 2.0 in 1999, things might be different today.

As the popularity of studio Linux demonstrates, Hollywood is a Darwinian system where nothing succeeds like success. There could be a lot of film industry open source application development happening today if it was proven as "better, faster, cheaper" than keeping a hundred expensive Linux application programmers on staff.

Companies often want to protect their custom algorithms and adaptations. Does this sometimes conflict with the licenses of open-source software packages?

Not really. Most studio code is internal secret stuff that's not for release. Where open source is modified only for your own use, typical open source licenses don't require you to give your changes back.

Studios are reluctant to touch anything GPL. It's hard to justify the legal expense of checking for GPL compliance. LGPL or BSD-licensed is much easier because lawyers don't need to become involved in the software development process.

CinePaint is the only significant instance where studios took an open source GPL program and brought it in house to enhance it and release it back to the open source community. The FLTK GUI library was developed at a studio internally and released LGPL, but that's different because it's their code. They can change the license as they wish.

The one who the GPL disrupts most is me, the open source project leader who adopted unloved GPL/LGPL code. I can't take GIMP GPL code and move it into an LGPL library or vice versa. Where GIMP made bad design choices about whether code is GPL or LGPL, I can't fix that without rewriting the code. When I write open source code I license it BSD.

Are you satisfied with the rate of improvement of open source applications?

Are you kidding? The lack of resources in money and expert developers is totally frustrating. It's almost impossible to get anything done. Because I don't employ CinePaint developers I can't really direct them. Everyone delivers what they want, when they want. Because I have to do other things to earn a living, I can only moonlight on CinePaint as time permits.

DreamWorks Animation employs over a hundred Linux programmers. Not students. Not amateurs. Not moonlighters. A studio can and will put a dozen professionally managed highly paid full-time expert Linux programmers on a project. Almost no open source project can do that.

Can you actually see in some films with which tools they're generated? Do filmmakers include visual or audible Easter eggs?

Have you noticed there have been a lot of penguin movies lately? ;-)

I've heard that sometimes artists sneak tiny penguins into the background of a scene where they don't make any sense as an insider Linux joke, but there isn't much time for pranks. People are working long hours to finish the film to the highest standard they can. The goal is to make the tools transparent, to make sure the audience doesn't look at the movie and think, that's not real.

Interview: Stéphane Magnenat

Stéphane Magnenat will tell us all about Globulation2 at FOSDEM 2008.

What would you like to achieve with your talk at FOSDEM?

I would like to explain why and how Globulation 2 differs from the common mould of real-time strategy games (RTS). I also hope to present and clarify some key elements of its software architecture. My wish is to welcome new contributors, by showing that Globulation 2 is both original and accessible. If this presentation starts an in-depth and lasting discussion about open source RTS games, I would be equally pleased.

Was there a Globulation 'one' before '2'?

Yes. It was our first attempt at a reduced-micro-management RTS. It was not really fun to play, but it provided us with invaluable experience for starting Globulation 2. I will briefly talk about it in the presentation. There is some more information at [1].

Which games inspired Globulation2?

Globulation 1, Settlers, Warcraft, and Caesar.

Globulation2 is a cross-platform game. Is that all handled by SDL, or does it require more effort?

Mostly. Only some minor elements, such as the location of user preferences, are not handled by SDL. One exception is Voice over IP, which is platform-dependent, as SDL does not provide an abstraction for audio input.

Are you generally satisfied by SDL as a library?

Yes, although it is not perfect. For instance, the sound subsystem is weak. I am also concerned about its future, as its development stopped years ago.

Traditional games are usually released 'when they're done', while Globulation 2 is released incrementally. How is that handled?

Not always very well. A recurrent problem we face is the lack of lasting developers. Really brilliant people come, implement great things, and then disappear for a while. They are not to blame, such is life, but it makes monotonic progress towards a pre-defined goal rather difficult. Furthermore, it sometimes results in partially broken releases, such as 0.9.1, in which LAN games do not work. Yet overall, the situation is getting better.

In summary, Globulation 2 is released when someone has sufficient motivation to do so. Unexpected broken features excepted, Globulation 2 is rather stable, so we are clearly heading in the direction of 1.0 right now. As Aaron Seigo said a few days ago about KDE 4: "When one perpetually releases alphas/betas a few things happen: people don't test it aggressively enough, third party developers don't get involved, core developers continue doing blue sky development rather than focusing on release qualities."

Right. So what's missing for a 1.0 release?

We need a stable code base and a balanced gameplay. Furthermore, additional gaming content such as well tuned campaigns and maps would be welcome.

We also have a long list of improvements we would like to add, but we can postpone them all until after 1.0.

You're using SCons as build system, and they're present at FOSDEM as well. What will you tell them if you meet?

Well, I'm not the person who wrote the SCons scripts, so I'm not in a good position to answer. Personally, I would like better integration with Debian (or other) packaging tools.

Have you ever experienced game addiction?

No, but I know some people who did, and it is not really nice...

Interview: Patrick Michaud

At FOSDEM 2008, Patrick Michaud will update us about Perl 6.

What do you hope to accomplish by giving this talk?

I'm hoping that the talk will be useful to a wide audience.
For existing Perl programmers, I want them to get a taste of the terrific feature and syntax improvements that Perl 6 has.
For people who program in other languages, I want to provide a glimpse of how Parrot is promoting interoperability among multiple programming languages. And I hope to share the sense of awe and amazement I have at the far-reaching vision that the Perl 6 design team has created.

How would you describe the role of 'pumpking'?

For the Perl 6 compiler, the pumpking role is a mix of lead developer, project manager, recruiter, and sanity check.
Essentially, I see my role as making sure that we are making progress towards a working implementation of Perl 6, and that the project continues to grow and thrive and does not wither away.

Perl 6 development started in 2000. Do you think it was a good decision to use this long an iteration?

Well, it certainly wasn't a conscious decision, but I think that the "do it right" and "it's ready when it's ready" philosophies that are behind Perl 6 development are the correct ones. Certainly the people that have worked on Perl 6 over the years didn't expect things to take as long as they have.
On the other hand, the Perl 6 specification has had some truly radical and far-reaching improvements over the past couple of years, and I fear that had we committed to an early implementation of Perl 6, we might have cut ourselves off from those improvements.

We've been ensured that "Perl 6 will still be Perl". What does that statement mean to you?

To me, programming languages and the people who use them define a culture and a shared model and value system of looking at the world (well, the computing world, anyway). It's the nature of languages, programming or otherwise, to shape the way we view the world and what we can express about it.
Languages also constantly evolve and adapt based on shared experiences and history, and those that are unable to adapt tend to be discarded.

So, I see language as more than syntax and libraries. To that extent, "being Perl" is really more about the fundamental philosophies behind the language -- things like "There's More Than One Way To Do It", the virtues of "laziness, impatience, and hubris", liberally copying the good ideas and memes from other languages, etc.
To me, Perl 6 and previous Perls share that philosophical underpinning, but Perl 6 does it in a way that is clearer, more direct, more expressive, and without many of the false leads and rough edges that have accumulated over time into the previous Perls.

So, Perl 6 is still Perl in that a programmer looking at a Perl 6 program will instantly recognize that it is "Perl", even if some of the details are different. And my experience matches that of others: once I start writing code in Perl 6, I'm reluctant to go back to Perl 5.

You maintain a very successful PHP project as well, PmWiki. Knowing the power of Perl, how does it feel to express code in this other language?

PHP often seems to have a bad reputation among Perl programmers; some of that reputation is deserved, but much isn't. While there have been a few times where I've thought "Gee, it would be a lot easier to do XYZ in Perl instead of PHP", there have also been about as many times where I've been happy that PmWiki is written in PHP instead of Perl.

In a project like PmWiki, I think success really has far more to do with design philosophy and community development than the language used for the software. I chose PHP because I thought it would be a better fit for the community I was targeting, which tended to have a lot of non-programmers in it.
As far as the mechanics of writing code in PHP versus Perl, I never really notice that I'm switching language contexts. When I'm working with PmWiki I just write in terms of PHP sentences, and when working with Perl 6 I think in Perl ones.

So you don't see PmWiki ever becoming a Perl app?

Not in the near future. My experiences tend to agree with the idea that it's often better to migrate/adapt an existing codebase than it is to do a rewrite from scratch. In PmWiki's case, it's not just the core PmWiki code, but also the substantial set of extensions ("Cookbook recipes") that the community has developed and refined over time. Turning PmWiki into a Perl application would probably feel more like developing a new application than migrating an existing one.

However, there's a part of me that fantasizes that I might not actually ever have to make a choice.  :-) One of the fundamental goals of the Parrot project is to provide a common runtime for multiple languages, and a PHP compiler is being actively developed for Parrot.
So, someday PmWiki could be running on Parrot, communicating with modules written in Perl, PHP, Python, Ruby, or whatever happens to be most suitable for the task at hand. And yes, there's a part of me that thinks that ideas like this are far removed from reality, but I've thought the same about several other aspects of Perl 6 and Parrot that are now reality. So, I'll wait and see what the future holds.

Professionally, do you work more on Perl or on PmWiki?

Over time I think it works out to be about equal for each project. I go through some periods where I have more of a Perl focus and others where I focus on PmWiki, but over time they tend to equal out.
However, I just received grants from the Mozilla Foundation and The Perl Foundation for working on Perl 6, so for the next few months I expect to put more focus on that.

What's the current status of Perl 6, its compiler and Parrot?

Currently there are three Perl 6 compilers: Pugs, KindaPerl6, and perl6. Pugs is written in Haskell, is based on somewhat earlier versions of the language specification, and is the most complete of the available compilers. Many of the recent language changes have been due to experience obtained via Pugs. KindaPerl6 is aiming to be a self-hosted implementation of Perl 6.

The perl6 compiler is the one being built for Parrot and that I'm primarily focused on. As of mid-December 2007, we've just gotten a new object subsystem for Parrot, redesigned the compiler toolchain, and converted the perl6 and other compilers to the new toolchain.
One of the benefits of this is that the bulk of the perl6 compiler is now written in Perl 6. We're currently able to run a test harness (written in Perl 6) and pass quite a few basic tests.

Over the next month or two I expect that we will greatly expand our feature coverage, including the ability to create classes and objects, multimethod dispatch, builtin libraries, regular expression matching and grammars, and so on. So, by the time FOSDEM rolls around we should have a lot of Perl 6 implemented in the perl6 compiler.

Interview: Bill Hoffman

Bill Hoffman will give a talk about CMake at FOSDEM 2008.

What do you hope will be the result of your talk?

I hope to gain more visibility for CMake, to answer any questions people may have, and to clear up any misconceptions. I would also like to make people aware of the software process that can be achieved with CMake. And I hope to meet other developers and drink some beer!

How did the various sponsors influence CMake's development?

CMake's sponsors have made CMake possible, starting with the National Library of Medicine (NLM), which provided the initial funding for CMake's development; as time went on, other customers of Kitware (Sandia National Labs, Los Alamos National Labs, and the National Institutes of Health's NAMIC project) provided further needed funding. A great deal of effort has been spent, and continues to be spent, on the development of CMake, and without sponsors it would not be possible. So I would like to thank all of them for their past and continued support.

What are the pros and cons of interfacing with native build systems like Make?

The cons are that you cannot control everything you would like to, and that you are subject to bugs in those other build systems. Make is an interesting one, in that it provides very little: it is a simple command-line tool. That did allow us to focus on higher-level build operations without having to worry about the lower-level details of a make-like system. At some point there may be a CBuild that could potentially replace make on some systems.

The more interesting argument on the pro side for interfacing with native build systems is the IDEs: Visual Studio, Xcode, KDevelop, and Eclipse. The big pro is that developers can use the tool they are most efficient with, in a very native way. This gives teams of developers the freedom to choose the build tool that works best for them. At Kitware there are 40-plus developers collaborating with many outside groups, and no single development tool is forced on anyone in the company. I use Emacs, GNU make, and the Visual Studio compiler. Other developers use Visual Studio projects, and some are not even using the same versions of Visual Studio. Still others at Kitware use Linux or Mac OS X. This keeps the developers happy and productive, and avoids forcing people to use tools they are not good at.
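
To make the generator idea concrete, here is a minimal sketch (the project and file names are invented for illustration; only the CMake commands and generator names are standard). A single CMakeLists.txt describes the build once, and the generator chosen on the command line decides which native build files CMake emits:

    # CMakeLists.txt -- hypothetical example project
    cmake_minimum_required(VERSION 2.4)
    project(Hello C)
    add_executable(hello hello.c)

    # From a build directory, the same description can drive different
    # native build systems, for example:
    #   cmake -G "Unix Makefiles" /path/to/source    # GNU make
    #   cmake -G "Xcode"          /path/to/source    # Apple Xcode project
    #   cmake -G "KDevelop3"      /path/to/source    # KDevelop project files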

Being cross-platform, and staying cross-platform: is that a difficult goal to reach?

It would be impossible without the software development process that we use. By using CMake/CTest/CPack, each night we regression test CMake on every platform that we support. The results are posted to web pages called dashboards. If I break something on a platform other than the one I am developing on, I will usually receive an email within 15 minutes with a link to a page showing me exactly what broke and why.

CMake uses CTest to populate nightly, continuous, and experimental dashboards for itself. They are publicly available and can be found here.

Each night, the same snapshot of CMake is built on more than 100 computers covering various platforms and configurations. In addition, after almost every commit to the repository, CMake is built and tested on 5 or more machines.

We have a saying: "If it's not tested, it doesn't work!" CMake has about 70% code coverage with a large suite of regression tests. When a bug is fixed, we add a test for it.
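
As a rough illustration of what that looks like from a project's point of view (the test name and executable below are hypothetical, not taken from CMake's own suite), CTest support is declared in the CMakeLists.txt and driven from the command line:

    # Hypothetical testing setup in a CMakeLists.txt
    include(CTest)                        # sets up testing and dashboard submission
    add_executable(hello_test hello_test.c)
    add_test(hello_runs hello_test)       # classic add_test(<name> <command>) form

    # Developers run the tests locally with:
    #   ctest
    # and a build/test/submit cycle for a dashboard is as simple as:
    #   ctest -D Experimental             # or Nightly / Continuous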

So testing is covered. What are the other common problems when building large projects?

Performance would be the hardest problem that you face with large projects. The edit/compile/run loop needs to be very fast or developers just can't get things done. Some performance issues just don't show up on smaller projects.

Did CMake benefit from having KDE as a user?

It has benefited greatly from KDE. During KDE's transition to CMake, many features were added to support it: the installation process became more standard, many performance issues were fixed, progress and color output was added to Make-based builds, and a good deal more. KDE has also helped solve CMake's chicken-and-egg problem. A project that uses CMake requires CMake to be installed on the machine that is building the software, and this requirement has caused some folks not to use CMake. However, if CMake is a standard tool available natively on systems, like the compiler and make, that argument goes away. KDE's use of CMake has prompted many Linux distributions to provide good CMake packages.

How is Kitware doing currently?

Kitware is doing very well. Want a job? Kitware is always looking for talented developers to join the team. :) It has been very exciting to watch the company grow from one person in 1998 to over 40 in 2008. This March we will be celebrating our 10th year in business.

The impact we have been able to have on the world with our open source tools like CMake, VTK, ITK, ParaView, the Insight Journal and GCC_XML is really amazing given the size of the company. At the end of 2007 we started a computer vision group within Kitware. I think 2008 is going to be a big growth year for the company, and I am looking forward to the coming year.

Interview: Kohsuke Kawaguchi

Kohsuke Kawaguchi will give a presentation about Hudson.

What's your goal for this talk?

I hope to get more adoption of Hudson, as well as perhaps solicit more contributions to the project, as I'm always looking for more committers.

Can you describe the benefits of continuous integration?

In my experience deploying Hudson inside Sun for our group, the primary benefit is moving mundane, repetitive work from people to machines. For example, now that Hudson is running tests for our team, we don't need to run through the tests ourselves before making a commit, which makes people more productive.

The fact that builds and tests run continuously also makes the turn-around time on regressions much shorter, and it makes it easier to narrow a regression down to a small set of changes, reducing the effort it takes to find its cause.

Having a central server do all of this means there's a single place people can go to see all kinds of information, and this brings more transparency to projects, which is also a good thing.

Finally, Hudson has a lot of inter-project features, and when your big project consists of a set of smaller teams, these features reduce the communication needed between teams and help keep the overall project stable.

There are a lot more, but I need to keep some of this for my talk :-)

Of course. How does Hudson test itself? Is that actually easy to do?

Unfortunately, testing Hudson is very hard.

For one thing, testing web apps is hard. You can do some scripted tests, but they won't find broken layouts or typos, for example. Hudson also interfaces with a lot of native applications on different OSes with different setups (imagine all the different svn connection modes).
The developer/tester resources that we have are limited, too, as with any open-source project, so our only hope is automated testing. But for all the reasons I mentioned above, it's hard.

Are there other continuous integration systems for Java, and how do they compare?

Yes. Since Java pioneered this whole field of CI tools, there is plenty of competition. There's CruiseControl, which is still very well known, although I think it shows its age -- for example, its design center is not around a web UI, and it doesn't have much inter-project handling at all.

There's also Apache Continuum. It's got the Apache brand, which is very strong, but I don't think it has been very active for the last few years.

Then there are commercial offerings like Atlassian Bamboo and JetBrains TeamCity. They are both from very reputable companies, but both charge a hefty license fee and, more importantly, they are not free as in speech, and as a result they lack extensibility and community.

There are probably a dozen or more smaller players in the field, and I think it's great that Java has such a vital community in this area.

How practical is it to integrate Hudson with existing version/release management systems?

This is one area where community contributions really shine.
Initially Hudson only had CVS and Subversion support, but since then people from all over the world have developed plugins for Perforce, AccuRev, ClearCase, and StarTeam, and I added one for Mercurial myself.
So this is one of the most tried and tested areas of Hudson plugin development, and I think it should be easy for anyone else to join us and write one for their favorite SCM.

What about test frameworks and issue tracking tools?

Hudson has integration with many different test frameworks. This is particularly easy for Java-based ones, as there's a de-facto report format established by Ant, and everyone seems to follow it. But internally Hudson is also flexible enough to accommodate different reporting formats, or even different object models for tests. The community has developed a few plugins to bridge Hudson with test frameworks in other languages and platforms, like .NET.

There's issue tracker integration as well. Perhaps the most advanced is the JIRA integration plugin, which provides two-way links between JIRA and Hudson. There is a similar plugin for Trac, too. I have more ideas for better integration with issue trackers, so I'm hoping to find some time to work on them.

Flexibility and extensibility usually allow a system to be adapted to different needs. Which applications of Hudson have surprised you personally?

When I design an extensibility point, I usually have some idea about how it could be used, so in that sense what people are doing with it is usually within what I had imagined.

Perhaps one really pleasant surprise is a contribution from JBoss that implements a feature for "pushing" build records from one Hudson to another, so that you can run builds inside a firewall but publish the results outside. I tried to think about how I would have done it myself and couldn't come up with a good approach, so it was really impressive that they pulled it off.

The name Hudson, is it inspired by the Hudson river?

I think of this program as my personal butler or secretary -- someone who's very organized and handles my administrative work. Since we lowly engineers, unlike managers, don't get one, I started writing one. I wanted a name that sounds like a butler, and since Jeeves was already taken, I chose Hudson.

We're looking forward to meeting him!

Announcing 2008 Keynote and Main Track Speakers

The list of Main Track speakers for FOSDEM 2008 is almost complete and is officially announced today, even though the website doesn't yet contain all the speaker bios and abstracts.

The keynotes will be highly interesting and entertaining, as always:

Keeping with our tradition of high-quality technical talks, the main tracks for 2008 will be organized around six topics and will feature a wealth of project leads and core developers from all around the FOSS horizon:
