Speakers
FOSDEM 2009 Speaker Interviews
We're a bit later than last year in announcing them, but just like in previous editions we have collected a series of interesting interviews with our main track speakers.
To get up to speed with the various topics discussed in the main track talks, you can start with the following articles:
Interview: H. Peter Anvin
H. Peter Anvin will give a talk about Syslinux and the dynamic x86 boot process at FOSDEM 2009.
Could you briefly introduce yourself?
Hello, I'm H. Peter Anvin. Back in 1992 I was in college, and had finally scraped together enough money to buy my dream computer - what was then a super-fast 486/33 with 8 MB RAM. Of course, the thought of using this monster to run DOS or Windows 3 was downright revolting, so I figured I'd end up running OS/2 on it. However, while waiting for OS/2 or 386BSD to come out, I heard about this toy Unix clone some college student in Finland had cobbled together, and the rest, as they say, is history.
I recently became one of the co-maintainers of the x86 architecture in the Linux kernel, and work for Intel's Open Source Technology Center.
What will your talk be about, exactly?
I'm going to talk about Syslinux, a bootloader I originally wrote on an overnight hacking binge in 1994 after a particularly frustrating experience installing SLS 2.0, one of the very first Linux distributions on CD-ROM. Back then, PCs couldn't boot from CDs, and so you needed a floppy with the kernel. Well, the floppy that came with SLS didn't support my SCSI card, and so I had to rebuild the kernel on it with the right drivers. However, getting the right kernel on that floppy was an excruciating experience, and so I resolved to write a bootloader that could boot off a FAT filesystem, so that one could manipulate it by just copying files around using any operating system.
Since then Syslinux has grown into a suite of bootloaders supporting a large variety of media, including PXE networking, CD-ROMs, and hard disks. It seems to be popular in applications where ease and flexibility of configuration is paramount.
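That ease of configuration comes from the fact that a Syslinux boot medium needs little more than the bootloader, the files to boot, and a plain-text configuration file. A minimal illustrative syslinux.cfg might look like this (the kernel and initrd file names are examples, not taken from the interview):

```
# syslinux.cfg -- a minimal illustrative configuration
DEFAULT linux
PROMPT 1
TIMEOUT 50          # timeout is given in tenths of a second

LABEL linux
    KERNEL vmlinuz
    APPEND initrd=initrd.img root=/dev/sda1
```

Changing the kernel or its command line is then just a matter of copying a new file onto the FAT filesystem and editing this text file from any operating system.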
What do you hope to accomplish by giving this talk? What do you expect?
I am hoping to attract both users and developers to the Syslinux project. On the user side, there are a lot of new features in Syslinux which don't seem to be widely known, in particular its flexible module API. On the developer side Syslinux is getting to be too big for me to maintain as a sole developer's side project. Fortunately, over the past few years there has been a whole new influx of active developers; I am hoping to get to the point where I mostly maintain the core infrastructure and have new user features implemented by other people.
Which new features can we expect in the near future in the Syslinux project?
The most dramatic new feature coming up is a scripting engine based on the Lua language.
Will we see other components in the future, next to the current SYSLINUX, PXELINUX, etcetera?
I definitely hope so. The next thing to be supported is probably going to be either btrfs or NTFS; both have gotten quite a few requests. I am also looking at ELF support, which unfortunately will require a fairly radical restructuring of the code.
How does Syslinux compare to GRUB as a bootloader, e.g. with respect to file systems?
Syslinux doesn't support as many filesystems as GRUB, mostly because the Syslinux filesystem support is written in assembly language. This is a historic accident back from when Syslinux had to fit on a floppy together with the entire kernel and initrd system. There is work underway to remedy this, at which point adding more filesystems to Syslinux should be a lot simpler.
The biggest difference between GRUB and Syslinux is probably a matter of trading off features for reliability. GRUB has largely taken the approach that one should be able to do anything that is even remotely possible; for example you can use a GRUB installation on disk 0 to boot a kernel stored in a different filesystem on disk 1. Syslinux is more conservative in that way, mostly because it was designed from the beginning to be a bootloader for removable media, which implies a dynamic context. As such, I have chosen to play by a stricter set of rules, which means tighter constraints on the user but, hopefully, easier configuration and a boot system which will work even if the system changes underneath in radical ways.
This interview is licensed under a Creative Commons Attribution 2.0 Belgium License.
Interview: Mark Surman
Mark Surman will give a talk about freedom, openness and participation at FOSDEM 2009.
Could you briefly introduce yourself?
Hmmm. What to say? I've been playing around with the ideas of freedom and openness in a serious way for about 10 years. That's what makes me tick. First it was running a small open source content management system project (google: 'ActionApps'). Then it was promoting open, participatory technologies to non-profit organizations. And most recently I worked at Mark Shuttleworth's foundation applying open source thinking to the re-invention of education. Now I am at Mozilla as executive director. I just arrived in September.
Of course, that's the professional me. I recently did a blog post as part of the 7 Things meme running around Mozilla. That might give you a better idea of who I am.
What will your talk be about, exactly?
My plan is to talk about two very simple things -- the incredible success that free and open source software has had, and the fact that we're going to need to think much more broadly and creatively if we want to drive this success into the future, or even preserve the gains we've already made.
What do you hope to accomplish by giving this talk? What do you expect?
I want to celebrate a little. The people who come to FOSDEM really have changed the world, and in a big way. Just think about 2008. Linux landed on the desktops of millions of mainstream netbook users. Firefox passed the 200 million user mark. Free software started to create major cracks in the mobile platform space. The ideas and technologies that have grown from places like FOSDEM are having a massive impact out there in the world. This is something to be proud about, even optimistic.
But I also want to get people asking: what do we need to do if we really want that freedom and openness to play a central role in the societies we live in 50 years from now?
Personally, I believe we need to be very broad and creative when asking this question, not just looking at software. Take mobile as one example: progress is being made on the software side, but hardware, spectrum, regulations and everything down to the contracts people sign when they get a phone number are incredibly closed. We need to be asking questions like: what action can we take to make this whole mobile ecosystem free and open? Of course, mobile is just one example. We need to ask questions like these in web services, online content and so on.
Many people don't know that the Mozilla project is about more than creating an open source web browser: it really is about keeping the web open. Now that the Mozilla Firefox browser has become so successful, will the Mozilla project focus on the more general aspects of the open web mission?
Yes, that's right. When Mozilla Foundation was set up, it defined its mission as 'guarding the open nature of the internet'. It says this right in the original incorporation charter. The Mozilla Manifesto goes even further, committing to promoting the continued innovation and opportunity on the internet commons, and calling others to do the same.
By taking 20% of the browser market, Firefox has made huge strides towards this mission, making user choice, standards and security mainstream. Mainstream not just for the people who use Firefox, but also for users of other browsers, as we've made these values things that the market has to pay attention to. This helps keep the internet open in some pretty tangible ways, and helps steer us away from the closed world that was emerging under single-vendor dominance.
Mozilla will certainly continue to use Firefox as a central tool to drive our open internet mission. This is critical. But we're also asking ourselves: what does it mean to be an organization that guards the open nature of the internet for the next 50 or 100 years? While we don't have the answer yet, this is clearly about more than just Firefox. It's also about more than just Mozilla. Much more.
What specific things will Mozilla do to spread their open web mission? Of course you can tap into the success of the Firefox browser, but how will you deal with the majority of the users who only use Firefox for pragmatic reasons and are thus not interested in the philosophy?
Well, the first answer is that Mozilla will continue to work on ways to make the open web a reality for people whether or not they care about the philosophy. If you look at Mozilla's 2010 goals, you'll see that we focus a lot on things like creating a unified open web in mobile and promoting security and autonomy in online data. These are things that matter to every one of the billion people on the internet. We'll focus on making progress that helps all of these people, whether they know about the importance of the open web or not.
But, I also think there is a huge group of people who do care about the values of freedom and openness, although they may not say it that way. Just think of everyone creating a mash up on YouTube or maintaining an article on Wikipedia or even writing a blog. Tens of millions of people are doing things like this, and they can only really do these things because of the ideas and technologies that have come from the free and open source world. Personally, I think we need to be talking about the open internet more clearly and loudly to these people. What would happen if millions of people suddenly saw themselves as part of building a very special, important thing called the open web? I don't know the answer, but it would be something good, I suspect.
What do you want to accomplish in 2009 as executive director of the Mozilla Foundation?
Mozilla as a whole has some pretty ambitious goals for the next year or two, which range from promoting the idea of participation on the internet to working with others to open up the mobile web. We'll also release new versions of Firefox (3.1) and Thunderbird (3.0).
The Foundation team itself is pretty small, and our goals are a little more modest. Certainly, we want to find ways to better support the whole of the Mozilla Project, which is more than just Firefox and more than just the formal organizations we've set up. We also want to experiment a little with new programs in areas like education that help us promote our mission through activities beyond producing software.
As we do all this, we're also asking what it means to stand up for the open web in 50 or 100 years. That's a conversation we want to be having constantly over the coming year.
As a Mozilla person and open web evangelist, how do you look at the recent attention for 'rich internet applications' built on Adobe Flex, Microsoft Silverlight and Sun JavaFX?
Mostly, I am not worried about these things. We already have a great tool for building rich internet applications: it's called the open web. Just look around at the most widely used applications on the internet. These applications are built using AJAX and other open technologies. I think the open web can keep things like Silverlight at bay.
That said, there is a lot of critical work that needs to happen here. We need to talk more loudly about the open web as the right answer to creating rich internet applications. We need to push open video technologies like Theora into the web mainstream. And maybe we even need to create tools that make it easier to develop application-oriented sites using open web technologies. Mozilla and others need to be pushing in these areas. In many cases, it's already happening (e.g. Theora in Firefox).
With more and more people accessing the internet on a mobile phone, what are Mozilla's plans on the mobile platform?
As mentioned above, one of our major goals for the next couple of years is to make sure that there is one unified mobile web. Practically, that means creating a world where people develop just for the web using open web technologies, and that what they develop works well on whatever device they are using.
Open source software and standards-based mobile browsers that people actually want to use will be a big part of this. Lots of people are already trying to make sure these things emerge. Mozilla is stepping in to do its part with a mobile version of Firefox, currently codenamed Fennec. It's in alpha right now, running on the Nokia N810 Linux platform. It should be out in beta in the next few months, and will then also be ported to other platforms.
But really making the mobile space open will take a lot more than the right software. There is a hardware piece. A carrier piece. A regulator piece. My personal feeling is that Mozilla and others need to be looking hard at how to create a fully open mobile ecosystem. I don't know how you do this, but it feels pretty important.
Your cv mentions that your previous job was 'Open Philanthropy Fellow' at the Shuttleworth Foundation. What is open philanthropy? And you're even a member of a group called Open Everything. Can the open source philosophy really be generalized to other aspects of our society?
It's pretty clear to me that the ideas and practices behind open source can be useful in many other parts of society. Probably not all, but many. Things like Wikipedia and the huge collection of Creative Commons pictures on Flickr prove this. These are real and mainstream parts of society that are built on many of the same principles as free and open source software. We're now seeing this thing starting to happen with educational content, academic publishing, science and many other endeavours. These things are still in early days, but it's happening.
This interview is licensed under a Creative Commons Attribution 2.0 Belgium License.
Interview: Scott James Remnant
Scott James Remnant will give a talk about Upstart at FOSDEM 2009.
Could you briefly introduce yourself?
Well, my name is Scott James Remnant. I work for Canonical Ltd, the commercial sponsor of Ubuntu, and I am a lead developer on the Ubuntu Foundations team.
My particular area is the boot process and the plumbing layer. I care for those pieces of infrastructure between the kernel and the X server. Notably this includes the init system and udev.
I've been involved with Ubuntu since the very beginning, and was one of the Debian developers that Mark hired when originally forming Canonical. I've done a number of different jobs at the company, including finding my way into, and back out of, management.
Prior to working on Ubuntu, I was a member of the Debian project, where I maintained dpkg and Libtool - the latter, I did enough work on to become one of the upstream maintainers as well. I've also written a couple of small projects such as the popular Planet aggregator and dircproxy IRC bouncer.
My first real contribution to the Linux community was running a humour site called Segfault.org, but I think that's been forgotten in the mists of time ;-)
What will your talk be about, exactly?
I'll be talking a little about the history of Upstart and the problems with the current releases, and then will introduce the planned Upstart 1.0 and talk a little about the changes that are going to happen and the roadmap to get there.
What do you hope to accomplish by giving this talk? What do you expect?
I'm hoping to answer many people's questions about what's next for Upstart (of which I spy a few below :p), and what I'm planning to do with it.
Hopefully I'll get a lot of feedback as a result, and will know what people think will work and what people think won't. Maybe I'll even get ideas for new features or better ways of doing things.
Also I'm hoping simply to educate.
What did you do as Ubuntu Development Manager?
Canonical originally had only a single team working on Ubuntu, which grew to fifteen people reporting to one person (Matt Zimmerman, the CTO). We realised that it just wasn't scaling, and had to split the team into two.
I successfully interviewed to manage one of the resulting two teams, and thus roughly half of the Canonical Ubuntu developers reported to me. My job was pretty much the same as any other team lead or manager job at any other company.
I managed the day-to-day work of the team, planned work for each six-monthly cycle, handled performance reviews and so on.
Canonical didn't stop growing, and we quickly added further teams. Along the way we rebranded my team to be the Ubuntu Desktop team, and thus my job title was actually Ubuntu Desktop Team Manager for most of the time.
However I stepped down last year, wanting to return to full time development work.
What were your reasons to start developing the Upstart system? Which problems is it trying to solve?
A number of different reasons really.
Firstly the simple realisation that one of the most core pieces of software in Linux was the least understood and maintained. Nobody actually uses sysvinit's features, and instead works around them with "init scripts" and other similar things.
Secondly work on hotplug and then udev, and increasing support in the kernel for userspace being able to react to changes, made me realise just how adaptive the boot sequence really needed to be.
That work also led to a lot of race conditions, and a lot of busy loops; being able to eliminate those was a big reason.
Why did you decide that Upstart should be backwards compatible with SysV init?
That was easy ;)
There's a long-running joke that the reason Ubuntu works as well as it does is a side-effect of the battle between Colin Watson and me.
Colin firmly believes that all changes should be made in as small steps as possible, always preserving compatibility both backwards and sideways, and that patching and improving an existing system is better than writing a new one.
I firmly believe that sometimes you've just got to ditch the past and start over from scratch. (The standard library inside Upstart is called libnih for a reason :p) To steal a phrase from a favourite author of mine, I am to backwards compatibility what King Herod was to the Bethlehem Playgroup Association.
Since the right course of action is always somewhere in between these two extreme points of view, Ubuntu steers well down the middle.
The only way I was ever going to get Colin to agree that writing a new init system was a good idea was by promising to make it backwards compatible with the old one.
And as ever, that compromise between us works out absolutely for the best.
It's easy for any distribution to adopt Upstart: they only need to throw away the old /etc/inittab file, which hardly anybody uses anyway. All the old init scripts still work just as before.
It's also meant that we don't need any kind of Upstart flag day: since init scripts can be supported forever, we don't need to convert things over to Upstart jobs in a rush.
Instead we've been able to get the system right first!
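For a sense of scale, a native Upstart job of that era (the pre-1.0 /etc/event.d format) was only a handful of lines. The daemon name below is made up for illustration:

```
# /etc/event.d/mydaemon -- illustrative pre-1.0 Upstart job
start on runlevel 2
stop on runlevel 0
stop on runlevel 6

respawn                      # restart the daemon if it dies
exec /usr/sbin/mydaemon
```

Unlike an init script, the job describes *when* the service should run and *what* to run, and leaves starting, stopping and supervising the process to init itself.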
When will all services in Ubuntu have their SysV init scripts converted to Upstart? What are the difficulties in converting them?
The difficulty has been that Upstart's method of describing when jobs should be run is not exactly the kindest thing, and gets really complicated for trivial use cases.
It exposes too many internals, basically.
One of the big changes I'll be presenting is the solution to this problem, which will hopefully begin the avalanche of jobs moving over.
Another side reason has been the desire to have at least one other distribution using Upstart; now that Fedora ships with it by default, we can actually try to standardise on job definitions. My hope is that an upstream should feel confident shipping an Upstart job in their own releases, and expect it to just work on all distributions.
Which new features can we expect in Upstart and when?
You'll have to come see my talk for that one ;-)
How will Upstart replace cron and atd?
A goal is to try and centralise all service management facilities in one place. Now, cron might seem a little unrelated to init scripts, but if you actually look at what tends to be in /etc/cron.* they're not as distant as you might think.
For example, on my system here, sysklogd has a cron.daily script; this calls savelog and then actually calls the sysklogd init script to restart the logging daemon.
I believe that such things should just be a part of the sysklogd service definition. You should be able to see that sysklogd has a "rotate-logs" action defined, and run that manually if you want. Since that also requires automatic activation, the init daemon would have to support the notion of such things as "daily".
Once you do that, it's not a big push just to merge the whole thing into init anyway. After all, did you know that cron supports @reboot for user jobs?
Obviously we'd extend it as well, you wouldn't just be limited to standard time definitions, but times related to other events as well. Who has never wanted to define a service as being run from "45s after startup" or similar?
Merging cron's other features has benefits too. cron mails you the output of its jobs, why doesn't init? If apache fails, having the errors mailed to you would be rather handy, don't you think?
So I think they're natural fits.
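To make the idea concrete, here is a purely hypothetical sketch of what such a merged definition might look like. Neither the "rotate-logs" action nor the "every day" stanza is real Upstart syntax; this only illustrates folding the sysklogd cron.daily script into the service definition:

```
# HYPOTHETICAL sketch only -- this syntax never existed in Upstart;
# it illustrates merging a cron.daily script into the service itself
exec /usr/sbin/syslogd

action rotate-logs
    every day                     # the cron.daily equivalent
    exec savelog /var/log/syslog
```

With something like this, "rotate-logs" could also be invoked manually, and init would know that finishing it may require restarting the daemon it belongs to.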
atd is just a specialised version of crond, from this point of view.
inetd is another question people ask, and that's a much more difficult answer ;) Apple's launchd integrates these two _very_ well.
You were a Debian developer for years. What made you decide to become a Ubuntu developer?
Ubuntu is more fun.
I could never have done something like Upstart in Debian, and certainly couldn't look after something like the boot sequence!
For example, this week in Ubuntu we switched to using /lib/udev/rules.d for the install location of udev rules (matching upstream). Such an action in Debian would require explaining the reasoning, and probably arguing, with dozens of different maintainers - and maybe even not getting agreement.
In Ubuntu, I just did the ~80 uploads myself.
Likewise for simple changes to boot reordering, in Debian, that could be a complete nightmare. In Ubuntu, I have the freedom to do it all myself (you can still persuade others too, of course ;p)
And I really didn't like the way that Debian reacted to Ubuntu. I had thought that it would embrace its offspring, and see that Ubuntu was attempting to reach a completely different audience from Debian's.
I even thought that Debian might not mind that Ubuntu patched software to make it work differently, especially together.
After all, how many patches does Debian carry against its upstreams where they've failed to get agreement with the maintainer - or are just dealing with a different policy?
The whole ethos of Open Source is that anyone can take your work, and build something different or better out of it. And that you can then see what they've done, and if you like, integrate it back in.
Debian's reaction baffles me to this day.
Compare it to Ubuntu's embracing of all of the different derivatives of it, even those that change fundamental things.
It was the Debian event when somebody turned up in a "Fuck Ubuntu" t-shirt that made me decide to leave.
I realised there were too many people in Debian who believed in Debian more than they believed in free software; and too many people who were so religious about "their package" that they didn't want to see the bigger picture.
This interview is licensed under a Creative Commons Attribution 2.0 Belgium License.
Interview: Lenz Grimmer
Lenz Grimmer will give a talk about MySQL High Availability Solutions at FOSDEM 2009.
Could you briefly introduce yourself?
I live in Hamburg, Germany, where I work for the Database Group at Sun Microsystems (which was formed after MySQL was acquired by Sun almost a year ago). My current job title is "MySQL Community Relations Manager EMEA", but I joined MySQL as a Release Engineer in 2002. I inherited my first tasks directly from Monty himself: building, publishing and announcing the MySQL server releases, improving the build environment and working with the Linux distributions that ship MySQL as part of their products.
Prior to joining MySQL, I worked as a distribution developer and package maintainer at SuSE Linux in Nuremberg, Germany. I've been using Linux and Open Source Software as my main desktop operating system since 1996.
I blog on www.lenzg.org and contribute to and participate in various OSS projects and activities. I also maintain a small project of my own - mylvmbackup, a tool to perform MySQL backups using file system snapshots. Additionally, I maintain a number of RPM packages on the openSUSE Build Service.
On the personal side I am married (and my wife uses Linux, too!) and have one daughter, who is 2.5 years old now. In my spare time I enjoy reading, watching movies, hacking on random stuff and gaming. I recently started running, to get some more physical exercise.
What will your talk be about, exactly?
My talk "MySQL High Availability Solutions" is an attempt to provide the audience with an overview and introduction to the tools and techniques that can be used to make a MySQL Server setup highly available. I will be talking about different technologies and tools that can be used, with a focus on Open Source solutions. I will also cover MySQL Cluster, how it relates to the MySQL Server and what features and limitations it provides. In my presentation I will talk about established "best practices", solutions that have proven to be useful and reliable for a large number of our users.
What do you hope to accomplish by giving this talk? What do you expect?
I hope I can encourage users to implement mission-critical high-availability solutions using MySQL with confidence, and to inspire them to improve the availability of their existing MySQL installations. I'd like to provide some insights into what works well for other users and what our consultants recommend to customers. I also hope to make MySQL Cluster more popular.
I don't really know what to expect, but I am looking forward to good feedback and a great audience!
What does your job as MySQL Community Relations Manager at Sun look like?
Establishing and maintaining a good relationship with the community has always had a very high priority at MySQL. We are in charge of maintaining and improving the community infrastructure like Planet MySQL, the mailing lists and forums, and the MySQL Forge and Developer Zone. In addition to that, we attend events and conferences (like FOSDEM) to talk about MySQL and related products and to keep in touch with the community.
We communicate a great deal with key community individuals and disseminate news and information to the public, e.g. by blogging or writing articles. But we also gather "business intelligence" that we feed back into the organization, to make sure we don't lose track.
Do you see already benefits of the acquisition of MySQL by Sun? Does MySQL get additional support or resources now? How is the technical side influenced by Sun?
I personally am very happy about the change. Sun is a very active contributor to many Open Source projects, and the overall work atmosphere and spirit is very compatible with the MySQL culture. We immediately felt at home and were very warmly welcomed. It did not really feel like an acquisition at all!
Of course there are some pains, and many things work differently in large corporations. So for some of us it has been a bit of a culture shock: lots of new processes and rules to learn and follow, some of them way more complex than they used to be.
But in some ways Sun still feels like a startup company, I really enjoy the attitude and enthusiasm of the new colleagues I am working with now.
Sun is putting an enormous amount of resources behind MySQL, so we are finally able to tackle many projects and improvements that we previously were not able to work on, due to resource constraints. And it's really amazing to be able to work with all these bright people from across different teams! We receive a lot of helpful input and advice that will allow us to improve both our products and our established processes. But overall, Sun is very careful to not completely disrupt or overturn our organization - most of the teams are still intact and our work environment has not changed dramatically.
Knowing that Sun is a Global Partner of Oracle, will this affect MySQL?
No, not really. MySQL had an ongoing partnership with Oracle before the acquisition: Oracle maintains and develops the InnoDB storage engine, which is a key component of the MySQL Server. And MySQL's primary goal is not to compete with or replace Oracle; we fill a niche where Oracle simply is not the best fit.
What options does MySQL offer for organizations that want a high availability solution? Why should they choose MySQL High Availability Solutions?
There really isn't one single "MySQL HA solution" - it depends on the user's workload and particular environment. But here is an oversimplified answer: For three nines, use MySQL Replication. For four nines, use Heartbeat and DRBD from Linbit. For five nines, go with MySQL Cluster. The nines here are of course the availability rates (99.9%, 99.99% and 99.999%). But for more details, do come to my presentation!
The nice part about these solutions is that they are usually pretty simple to set up and maintain, and have already been put into production use by many people. And they provide enough functionality at low cost with high flexibility (due to their Open Source nature).
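As a taste of the simplest of those, basic MySQL replication needs little more than a binary log on the master and a distinct server ID on each machine. A minimal illustrative my.cnf sketch (the server IDs and log name are examples, not a production setup):

```
# master my.cnf -- enable the binary log and assign a unique server ID
[mysqld]
server-id = 1
log-bin   = mysql-bin

# slave my.cnf -- only needs its own unique server ID
[mysqld]
server-id = 2
```

The slave is then pointed at the master with CHANGE MASTER TO and started with START SLAVE; the Heartbeat/DRBD and MySQL Cluster setups involve considerably more configuration.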
This interview is licensed under a Creative Commons Attribution 2.0 Belgium License.
Interview: Brion Vibber
Brion Vibber will give a talk about MediaWiki at FOSDEM 2009.
What does the 'Chief Technical Officer' of the Wikimedia Foundation do?
A little programming, a lot of code review, some capacity planning, and more paperwork than I used to do as a volunteer programmer!
Basically I oversee our staff, contract, and volunteer developers and sysadmins working on MediaWiki and Wikimedia's tech infrastructure. Since it's not a huge team, this sometimes means poking at code myself, but I'm mainly sticking to patch review, integration testing, and prioritizing new tasks for the team to attack.
I also get to do the fun stuff like hiring new folks and adding up our budget to make sure we can afford those servers to run Wikipedia!
And what do you do on 'Brion Vibber Day'?
"On Brion Vibber day, Wikipedians everywhere greet each other in Esperanto."
I originally got into MediaWiki development through the localization end -- such as adding Unicode support -- after discovering the Esperanto edition of Wikipedia. If you like learning languages, it's wacky fun. :D
Can you give us an impression of the scaling architecture of Wikipedia?
Basically we've got several of the levels of the LAMP stack fanned out:
- Geographically distributed HTTP caches (Amsterdam and Seoul)
- Local HTTP caches (Florida)
- Apache+PHP machines hold the actual MediaWiki web app -- PHP scales out nicely and linearly for request processing! Some manual coding is needed, though, to send the proper HTTP caching headers (for caching on the output end) and to handle internal caching and database sharding/balancing...
- Memcached holds lots of pre-rendered data, stashed in memory to avoid duplicate processing.
- MySQL: Databases are sharded between sites, split between data with different access patterns, and replicated for failover and load balancing.
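The memcached layer here is the classic cache-aside pattern: look up a pre-rendered value first, and only do the expensive work on a miss. A minimal sketch in Python, with a plain dict standing in for a real memcached client (the key scheme and function names are illustrative, not MediaWiki's actual code):

```python
# Cache-aside pattern as used with memcached: reuse pre-rendered data,
# and only re-render (and store) when the cache has no entry.
cache = {}  # stand-in for a memcached client


def render_page(title):
    # Expensive in real life: parse wikitext, query the database...
    return "<html>%s</html>" % title


def get_page(title):
    key = "rendered:%s" % title
    value = cache.get(key)
    if value is None:              # cache miss: do the work once
        value = render_page(title)
        cache[key] = value         # later requests skip the rendering
    return value
```

The same shape applies at every tier of the stack above: each cache layer answers what it can and passes only misses down to the slower layer beneath it.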
How do you balance Wikipedia's needs versus the needs of other MediaWiki users?
Wikipedia always comes first. ;)
Actually, MediaWiki is usually pretty expandable; as long as it's a fairly clean break, many things can get done as extensions or optional features. Sometimes we actually put in cool features for third-party users that we have to disable on our own sites because they don't scale enough... *sniff*
Where is MediaWiki heading? Will it become more 'semantic web' buzzword-proof?
Our big push this year is going to be on usability -- cleaning up things in the user interface and workflow that make it hard for folks to just get in and do what they mean to do.
The 'semantic web' is fundamentally about making it easier to get "meta"-things done by throwing machine-readable data around from one part of the web to another. The hard part traditionally is figuring out how to get people to make the data in the first place without it being horribly difficult... certainly this is something we're interested in. :)
Are any changes planned in the MediaWiki wiki-syntax?
For compatibility reasons, probably not much. But we are hoping to introduce more syntax-aware editing features which can help smooth out the editing experience.
This interview is licensed under a Creative Commons Attribution 2.0 Belgium License.
Interview: Leslie Hawthorn
Leslie Hawthorn will give a talk about Google Summer of Code at FOSDEM 2009.
Could you briefly introduce yourself?
I'm a Program Manager for Google's Open Source Team, managing the communities for our two student programs, Summer of Code and the Highly Open Participation Contest. I've been with Google for more than five years now and am a true child of Silicon Valley, having been born and raised there and having worked in high tech since graduating from university. I'm an English Literature geek by training and a FLOSS geek by choice. I'm an avid reader, particularly of Science Fiction and Fantasy, and a huge fan of James Bond, both the novels and the films.
What will your talk be about, exactly?
My talk will be a behind the scenes look at how the Summer of Code program is organized and run.
What do you hope to accomplish by giving this talk? What do you expect?
I'm hoping to provide the audience with a better understanding of community management at large scales and to give folks some ideas for running their own initiatives to effectively bring in newcomers. Additionally, I'll be sharing quite a few fun stories from the program, so I'm hoping folks will leave amused, empowered and inspired.
Managing the Google Summer of Code program seems like a really big task. Can you explain in a few sentences what your work for it looks like? And what are your other tasks in the Open Source Programs Office at Google?
My job involves a great deal of cross-functional project management, including interfacing with our Finance, Legal and Public Relations teams. I'm responsible for setting up all aspects of the program and keeping an eye on everyone's progress, providing guidance when needed. I spend a lot of time helping newbies - both mentors and student contributors - feel more confident when approaching problems and helping people communicate more effectively.
When I'm not focused on Summer of Code, I manage the Highly Open Participation Contest, coordinate development of Melange, and spend a good deal of time writing for the Google Open Source Blog. I also facilitate a number of large community conferences - last year we held more than a dozen at the Googleplex in Mountain View, California, USA - each year as part of our efforts to give back to the community.
With four years of Summer of Code, Google has provided over 10 million US dollars in funding to open source projects, generating over 6 million lines of code. Why does Google do this? What are the advantages for Google?
It's easiest to think of the program as a Research and Development partnership with the FLOSS community. A strong FLOSS development ecosystem is essential to Google's business - it's no secret that we use a lot of FLOSS code - and Summer of Code is an excellent way to ensure an influx of new blood and creative ideas in the FLOSS arena. We also see the program as investment in the future of Computer Science by helping future innovators gain skills much more quickly through participation in real world development scenarios.
We use some of the source code developed by students through the Summer of Code, but that's not the primary motivation for the program. Google gets much the same benefits as the rest of the world: more source code available for everyone's benefit.
Which finished GSoC projects do you consider the biggest success stories?
There are so many great stories. I don't want to touch on just one here - we've seen more than 2500 students successfully complete the program and they are all great successes in their own right. I'm also a big believer that our students and mentors often learn more from their failures than their successes.
How many participating students has Google recruited?
Very few. Less than 2% of our students and mentors have ever interviewed with the company, and even fewer have accepted an offer of employment.
Why was the Google Highly Open Participation Contest for high school students initiated? Was it a success and will it be organized each year like GSoC?
We were hoping to take the general Summer of Code model and use it to engage even younger students in FLOSS development. Rather than just focusing on code, GHOP students were also invited to do user experience research, write documentation, create marketing materials, etc. By providing all these additional avenues of participation, we hoped to help students who might otherwise never even hear about the concept of Open Source software learn more about it and how they can be involved.
The contest was a great success - more than 350 students worldwide completed over 1,000 tasks to help out 10 FLOSS projects - and I was particularly excited that we were able to help so many different kinds of new contributors get introduced to FLOSS. We even saw two of our mentoring organizations, Drupal and Joomla!, create their own GHOP-like programs for community contributions, also with great success. I can't say that GHOP will be an annual offering, but we are definitely planning to reprise the contest this year.
How did you get the nickname 'Google's geek herder'?
A gentleman asked me what I do for Google during one of the conferences I was hosting - IIRC it was MySQL Camp back in 2006 - and I responded with, "You're looking at it; I herd geeks professionally." It was meant to be a funny quip but the nickname sort of stuck with me after that. Besides, it is much more succinct than Open Source Den Mother.
This interview is licensed under a Creative Commons Attribution 2.0 Belgium License.
Interview: Max Spevack
Max Spevack will give a talk about the Fedora Project at FOSDEM 2009.
Could you briefly introduce yourself? What does your day look like as a community manager for Red Hat?
I'm 29 years old. My university degree is in Computer Science. I've been working at Red Hat for about 4 1/2 years. From February 2006 to February 2008, I was the Fedora Project Leader, meaning that I was ultimately accountable for everything that happened in Fedora. Last February, I transitioned out of that role and, together with Greg DeKoenigsberg, formed the Community Architecture team within Red Hat. The purpose of the Community Architecture team is to ensure that Red Hat is the best possible citizen in the open source world, that the community building lessons that have made Fedora successful continue to be honed, and that other parts of Red Hat's business properly engage with open source communities as well. Fedora may be the best example of Red Hat working with the communities, but it shouldn't be the only one. Look at OLPC for example, and the work that Red Hat continues to do in that community.
What will your talk be about, exactly?
I'm planning on taking a look at Fedora 10 and the currently-in-development Fedora 11, examining some of the parts of those releases that are the most interesting, and talking about those features from the perspective of how they are developed in an open, community-friendly way.
What do you hope to accomplish by giving this talk? What do you expect?
I'm hopeful that the talk will show people the places where Fedora is being an innovative leader in open source work, and also encourage people to participate in the Fedora community, once they see how easy it is to make an impact, and the large number of opportunities that exist. I'm also hoping to show people that the Fedora community is a wonderful place to be if you are an open source developer, and to clearly demonstrate some of the innovations that have come out of Fedora in the past year or so.
The Fedora project has attended previous FOSDEM events and you were a speaker in 2007. How do you look back at it? Did you get a lot of feedback? What are the Fedora project's reasons to be at an event such as FOSDEM?
I was at FOSDEM in 2007, but not in 2008. However, the Fedora Project has been at FOSDEM for several years now. We really like this show, because it has a very developer and community feel to it. In the European event calendar, FOSDEM is one of the focus points that we plan around.
Can Fedora be valuable in an enterprise environment? In which corporate circumstances is it perhaps better suited than Red Hat Enterprise Linux and CentOS?
I think that when people look at the distributions that are in the Red Hat "family", they need to realize that the purpose of Fedora and the purpose of Red Hat Enterprise Linux are quite different from one another. Fedora's mission is to deliver to its users the absolute best of what exists in the open source world today, while Red Hat Enterprise Linux is meant to snapshot that best-available technology at a given point in time and then guarantee its subscribers maintenance and support for seven years.
I've heard of a few cases where an "enterprise" wants to be using Linux on the desktop, and wants to have the absolute latest GNOME, so they use Fedora. That's a perfectly valid use case for Fedora. But when people want the support guarantees and long lifecycles, they need to go with an enterprise distribution.
Does Fedora in your opinion play a broader role than being the upstream distribution of Red Hat Enterprise Linux?
Sure. I think I touched on it above, but Fedora is obviously a standalone distro in its own right. It just so happens to be the upstream of Red Hat Enterprise Linux -- and that is very important to Red Hat -- but there are millions of computers across the world that are using Fedora as an independent distro, and that is of course a wonderful thing.
With the majority of the Fedora packages maintained by community developers, does this generate conflicts between Red Hat and community people, for example when deciding on what features to implement in the next Fedora version?
So far, there haven't been any conflicts. John Poelstra manages the Fedora feature process, and he has done a wonderful job of implementing a transparent and consistent set of guidelines for getting a particular feature into Fedora. Whether the feature owner is a Red Hat engineer or a student in a dorm room, the process is the same. If you follow the process and the work is of proper quality, it gets in. If you don't, then you have to wait until the next release (which is only 6 months away).
What do you consider the biggest Fedora success stories? And looking back at the evolution of Fedora, what were the biggest breakthroughs?
When I started as the Fedora Project Leader in February of 2006, I had a few specific goals in mind that I wanted to achieve, which I felt would put Fedora on a successful course for the long-term future.
The first goal was to merge Fedora Core and Extras into one repository, which followed the Fedora Extras model (because that model had proved itself to be the right one).
The second was to get Fedora's infrastructure sorted out -- this involved hiring Mike McGrath as the Fedora Infrastructure Leader and getting Koji (the build system that turns source RPMs into binary RPMs) and Pungi (the compose tool that takes a bunch of binary RPMs and creates a distro) written. Jesse Keating (Fedora's Release Engineer) and Dennis Gilmore deserve a lot of credit for those tools. These tools also allowed for the idea of Fedora respins and Fedora remixes to take form, which Jeroen van Meeuwen and the "Fedora Unity" team had been working on for quite a while also.
Finally, I wanted to get Fedora's "Live" technology up to par (and surpassing) other distros. Jeremy Katz and David Zeuthen got the LiveCD technology to a suitable place, and along with Luke Macken, Fedora led the way in getting LiveUSB working. The LiveUSB Creator is now being used by other distributions than just Fedora, so this is a great success story.
Once all of that was done, I felt like it was time to bring in the next Fedora Project Leader, and that happened about a year ago when Paul Frields took over and I moved into my current role.
Where do you think the Fedora project will be in 2 or 3 years?
Well, I hope that Fedora continues to thrive, and that the community building techniques that we use in Fedora will continue to get better, and spread around the world. I hope that Fedora contributors take the processes and transparent methods that we use in our day to day work to whatever other walks of life they enter, and that over time the things that make open source communities successful will transform other parts of the world, especially government and finance.
Do you enjoy Amsterdam? Why did you relocate to the Netherlands?
Part of my job is to help grow and facilitate the Red Hat communities in Europe -- this includes Fedora, but also assisting other Red Hat folks with whatever they need from an open source and community evangelism point of view. It's been a wonderful opportunity to live and work outside of the United States for a while.
Amsterdam was chosen in part because it is very centrally located and it is easy to get to anywhere both in the city and in Europe without a car. It was partially chosen because Red Hat has an office nearby. It was partially chosen because there is a very low language barrier (I only speak English and Spanish), and it was partially chosen because I thought it would be a cool place to live!
This interview is licensed under a Creative Commons Attribution 2.0 Belgium License.
Interview: Simo Sorce
Simo Sorce will give a talk about FreeIPA identity management at FOSDEM 2009.
Could you briefly introduce yourself?
My name is Simo Sorce. I have been an active free software developer for many years now. My main contributions have historically been within the Samba project, of which I have been a Team Member since 2001, and I am currently its GPL compliance officer. I am currently working mostly on FreeIPA, where I am one of the lead software architects. FreeIPA is a project that aims to provide an Identity Management system that is easy to set up and use, built on an LDAP directory and Kerberos along with other related technologies. Most of the software we are using or writing to make IdM easy has been available for a long time, but it has always been too complex to set up to see widespread use. We hope that building a coherent set of tools and standards around known components, and making the final product easy to use, can help the adoption of modern, and most importantly Free Software based, tools, so that people don't become dependent on proprietary, lock-in heavy solutions.
What will your talk be about, exactly?
I am going to give a brief introduction of FreeIPA. I will describe what challenges we faced in v1 and what we face for v2, and dive in technical details about the architecture and future developments.
What do you hope to accomplish by giving this talk? What do you expect?
My objective is to make this project better known. We see a community of enthusiasts already forming around our project and we are eager to see more. We also hope to see help in porting the software to other distributions. It's not as huge a task as building it from scratch, but so far we have concentrated only on Fedora and, by extension (with minor adjustments), Red Hat Enterprise Linux. We would love to see other distributions help modify the install scripts and port the missing components needed to build the full server and client bits.
How does the FreeIPA project compare with Novell Identity Manager?
For a start this is a Free Software project, not a proprietary technology of a single company. We think that is important because we certainly believe that something as important as an Identity Management solution that is at the core of the security of any organization should be free. In my opinion organizations must be empowered to audit, and manage their own risk. They must be able to support such a core infrastructure even if the developers that made it up suddenly disappear or change business for any reason. Only Free Software gives you both the technical and legal means to do so.
On the technical side Novell IdM has certainly had a longer history and on some specific points they may be currently technically superior, but we have an aggressive roadmap and we are re-using existing proven reliable tools as much as possible, and we think the 'P' (Policy) and 'A' (Audit) parts will soon be extremely interesting differentiators. We are also actively working with the Fedora distribution to make FreeIPA integration even better (IPA is currently already distributed in Fedora), and we will continue to do so for all components.
One of the goals of FreeIPA v2 is to address the barriers to v1 usage. Which barriers are these?
v1 has many limitations we want to address; it needs better integration with the clients to be effective in managing an organization's security needs: better centralized access control, offline capabilities, policy distribution, a naming system, etc. We are expanding v2 in all areas previously touched by v1, and more.
What does v2 add to the Identity functionality compared to v1?
One key piece of v2 is Machine Identity. In v1 we didn't have time to properly address the machine identity piece, but it is one of the key features of v2. To be able to manage machines and trust them, you need to provide each machine with an identity, so that it can have a Kerberos principal and use it to identify itself to IPA. This allows us to easily encrypt communication and provide policies to controlled hosts, so that domain-wide security configurations can be easily controlled from the IPA console. We are also planning on adding a minimal CA to IPA, and tools to automatically deploy and renew x509 service certificates to machines. This will make it much easier to keep track of your deployed certificates, and to obtain, renew or revoke them at will.
Which initial Policy and Audit functionality will v2 have?
We are working hard on providing a core policy engine and console for v2. We are concentrating on distributing security policies within the IPA framework, but the policy engine is built to be able to touch just about any configuration file you want it to. For audit, we are concentrating on basic functionality to easily collect, safely transmit, and store audit logs for managed clients.
The target date of FreeIPA v2 is April/May 2009. Is this realistic? The goals of v2 are really ambitious.
We have indeed very ambitious goals, I am not sure we will have the final v2 version ready by May, but I hope we will have at least a pre-release we can start to show off by that date.
How many developers are working on FreeIPA? Are these all Red Hat employees?
At the moment most developers are indeed on Red Hat payroll, although we have contributors from outside Red Hat and we definitely encourage people to participate if they are interested.
What's the difference between Red Hat Enterprise IPA and FreeIPA?
Red Hat Enterprise IPA is the supported version from Red Hat: we do thorough QA testing before releasing it (and in the process we often fix bugs that we then commit to FreeIPA), and we provide related services to our paying customers. It is more or less what Red Hat Enterprise Linux is with regard to Fedora. FreeIPA is our upstream; Red Hat Enterprise IPA is our branded and supported product.
This interview is licensed under a Creative Commons Attribution-No Derivative Works 2.0 Belgium License.
Interview: Victor Stinner
Victor Stinner will give a talk about Fusil at FOSDEM 2009.
Could you briefly introduce yourself?
I'm a 25-year-old developer paid to write free software (GPL), but I also hack on free software in my free time. I help free software projects improve their security by fixing known bugs or finding new ones.
What will your talk be about, exactly?
It's about fuzzing, my fuzzer Fusil, and the status of security in free software. Finding bugs is easy, but the problem is fixing them upstream.
What do you hope to accomplish by giving this talk? What do you expect?
I will try to sensitize developers to security, because few developers are aware of security bugs. I also hope that some hackers will try Fusil (or any other fuzzer) on their programs!
What problem is Fusil trying to solve?
Fusil makes it easy to write a fuzzer, and it has many features for collecting crashes overnight without human interaction. It stores all the information about a crash, generates a script to replay the crash, and names the crash directory with a very short description (e.g. "invalid_read-0x8fa0b4ff"). So it's easy to spot duplicates and to reproduce a crash in gdb.
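The deduplication trick behind those directory names is simple enough to sketch. The snippet below is a hypothetical illustration of the idea, not Fusil's actual code: deriving a short, stable name from the fault type and address makes duplicate crashes collapse onto the same directory name.

```python
def crash_dir_name(fault_type, address):
    # Stable short name: duplicates of the same fault get the same name.
    return "%s-0x%08x" % (fault_type, address)

# Three crashes collected overnight; the first two are the same bug.
crashes = [("invalid_read", 0x8FA0B4FF),
           ("invalid_read", 0x8FA0B4FF),
           ("timeout", 0x0)]

seen, unique = set(), []
for fault, addr in crashes:
    name = crash_dir_name(fault, addr)
    if name not in seen:          # a duplicate maps to an existing name
        seen.add(name)
        unique.append(name)
# unique == ["invalid_read-0x8fa0b4ff", "timeout-0x00000000"]
```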
How difficult is it to write fuzzing programs?
Most fuzzers included in the Fusil project inject random bytes into files and try to open these files in the target program. Such fuzzers are simple and can be written in one hour. To improve the quality of such fuzzers (that is, to generate fewer false positives), you can add more rules to the existing probes (e.g. add a text pattern specific to the problem for the standard output).
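The core of such a mutation fuzzer is just a byte-flipping loop. Here is a minimal sketch of that loop alone, not Fusil's framework; a real fuzzer would write the mutated bytes to disk, open the file in the target program, and watch for crashes.

```python
import random

def mutate(data, nflips=4, seed=None):
    """Return a copy of the input with a few bytes replaced at random."""
    rng = random.Random(seed)         # seeded, so a run can be replayed
    buf = bytearray(data)
    for _ in range(nflips):
        buf[rng.randrange(len(buf))] = rng.randrange(256)
    return bytes(buf)

# Pretend input file: a valid-looking header followed by padding.
original = b"GIF89a" + b"\x00" * 64
fuzzed = mutate(original, seed=1)
```

Seeding the generator is what makes the "generate a script to replay the crash" feature possible: the same seed reproduces the same mutated file.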
Better fuzzers generate data using the specification of the format. There is for example a Python fuzzer generating random function calls with random arguments. Writing such fuzzers takes more time because you have to learn the format and implement an algorithm to generate the data. If you already know the format, it takes between one and four hours for a simple format.
But when the fuzzer is written, most of the time you will find bugs in less than one hour! And sometimes in less than one minute...
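The "random function calls with random arguments" approach can likewise be sketched in a few lines. This hypothetical driver (not the actual Python fuzzer mentioned above) feeds a few classes of interesting values to a target callable and records any exception as a crash candidate.

```python
import random

def random_arg(rng):
    # Draw an argument from a few "interesting" value classes.
    return rng.choice([0, -1, 2**31, "", "A" * 100, None, [], 3.14])

def fuzz_call(func, rng, max_args=3):
    """Call func with 0..max_args random arguments; report any exception."""
    args = [random_arg(rng) for _ in range(rng.randrange(max_args + 1))]
    try:
        func(*args)
        return None                   # no crash candidate this time
    except Exception as exc:          # worth logging for later triage
        return "%s%r -> %s" % (func.__name__, tuple(args), type(exc).__name__)

rng = random.Random(42)
findings = [r for r in (fuzz_call(ord, rng) for _ in range(50)) if r]
```

For a format-aware fuzzer you would replace `random_arg` with a generator that knows the specification, which is exactly where the extra hours of work go.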
What kinds of faults can you find in programs with Fusil?
Don't expect automatic exploit generation :-) Fusil is dumb and just finds *bugs*: invalid memory read/write, timeout, deadlock, assertion/exception, etc. On a segfault (or other fatal error), the builtin debugger tries to analyze the error: for example, it displays "invalid read from NULL (4 bytes)" instead of just "segfault".
You will have to read the source code of the target program to understand the problem and check the severity of the bug. Remember, on a server any denial of service is important because it slows down all applications!
Which bugs did you already find with Fusil?
The funniest was a bug in printf() in the GNU libc. Funny, because printf() is the most common function in the C language and the code is very old! Another nice bug was a denial of service in the ClamAV antivirus program: it's possible to create a loop in a block chain in the file system. Most programs detect such a loop, but not the old version of ClamAV. With a single small document (20 KB), ClamAV ate all memory and CPU time! See the crash list for a more complete list.
How active is the development of Fusil?
The last stable version was released two months ago. I'm working alone on the project in my free time, so development is slow and depends on my motivation; e.g. in recent weeks I've worked on other projects (hacking Python!).
How does Fusil compare with other fuzzing tools, e.g. Peach?
Most frameworks are specific to a program category or environment. PROTOS is for example dedicated to network stuff, Sulley targets closed source programs running on Windows, etc.
Fusil's typical target is a Linux command line program. It doesn't mean that it's impossible to write other fuzzers, but just that it will take more time :-) Fusil should work on any UNIX/BSD system, and maybe also on Windows. There are also fuzzers for the Linux kernel, MySQL server and Firefox.
This interview is licensed under a Creative Commons Attribution 2.0 Belgium License.