Interview: David Recordon and Scott MacVicar
David Recordon and Scott MacVicar will give a talk about scaling Facebook with open source tools at FOSDEM 2010.
Could you briefly introduce yourself?
David Recordon: Over the past few years I've had a large interest in the evolution of the web to become social. I helped to create both OpenID and OAuth and got into web programming via open source. My first real job was hacking on LiveJournal, which was entirely open source, and through it I learned quite a bit about open source from Brad Fitzpatrick. I joined Facebook about six months ago and lead our open source and web standards team. We're responsible for making it easy for engineers to use, contribute to, and release open source code as well as web standards, and for making sure the company does an amazing job at it.
Scott MacVicar: I'm a core developer of the PHP language, working on the internals to improve the performance of the engine and to add new features. I also run the TestFest and Google Summer of Code programs for the group. I recently started at Facebook and currently own anything related to PHP and open source that we have. Previously I worked on the forum software vBulletin for seven years, seeing it through several major iterations.
What will your talk be about, exactly?
We're planning to talk about how Facebook has been able to scale to over 350 million monthly active users via open source software. Some of this infrastructure was developed outside of Facebook, but we've also released about a half-dozen core pieces of infrastructure we've developed. If it wasn't for the LAMP stack, Mark Zuckerberg never could have built Facebook from his Harvard dorm room. This is a common story for many sites that we all use every day.
What do you hope to accomplish by giving this talk? What do you expect?
In general, we're hoping that open source developers see just how far the software we all create can go. Today anyone can create the next Facebook or Twitter, not just giant companies that can spend a lot of money on expensive hardware. Over one million developers have built Facebook Platform applications on top of our infrastructure; we want to make this possible for anyone, anywhere.
Can you give some numbers to put Facebook's scaling issues into perspective? How much data is stored, how many servers are used, how much bandwidth, ...?
In general we don't talk a lot about specific numbers when it comes to servers and bandwidth. We store over two petabytes of data within our Hadoop cluster alone, log over thirty terabytes of data per day via Scribe, and at peak our memcached cluster serves over three hundred million objects per second.
What's the bottleneck in Facebook's architecture? How did the architecture evolve?
We're constantly identifying and removing bottlenecks in our systems, and generally aim to balance them so that no single resource is significantly more constrained than the others. For example, there's an optimal amount of memory per CPU on a memcached server for any particular workload: the point where you run out of space at the same time you run out of operations per second.
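That balancing point can be sketched with back-of-envelope arithmetic. A minimal Python illustration; the object size, request rate, and per-core throughput below are invented assumptions, not figures from this interview:

```python
# Rough capacity-planning sketch for a cache server: pick memory per core
# so that memory and request capacity run out at the same point.
# All numbers are illustrative assumptions, not Facebook's figures.

def balanced_memory_per_core(avg_object_bytes, reqs_per_object_per_sec,
                             ops_per_core_per_sec):
    """Memory (bytes) one core can 'serve': the number of objects it can
    handle per second at the given per-object request rate, times the
    average object size."""
    objects_served = ops_per_core_per_sec / reqs_per_object_per_sec
    return objects_served * avg_object_bytes

# Example: 1 KB objects, each read twice per second on average,
# 100k cache operations per second per core.
mem = balanced_memory_per_core(1024, 2.0, 100_000)
print(mem)  # 51200000.0 bytes (~49 MiB) per core for this workload
```

With these made-up numbers, a box with far more RAM per core than that would exhaust its CPUs while leaving memory idle, and vice versa, which is the imbalance the answer describes avoiding.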
Our hardest scaling problems tend to be around our data, which grows as new users join, as existing users interact with each other on the site, and as we add features to the site. The request rate to this data is also very high, because each page served depends on hundreds or even thousands of pieces of data. The data is hard to cluster or partition because users don't come to the site to look at their own data, they come to look at and interact with their friends' data. This means that even a small percentage of users online are interacting with a large percentage of the data set.
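The access pattern described above can be modeled crudely. A hedged Python sketch, treating friendships as uniform and independent (a deliberately naive model; the user counts, friend counts, and online fraction are invented):

```python
# Back-of-envelope model for the access pattern described above: even a
# small fraction of users online touches a large share of all profiles,
# because each viewer reads their *friends'* data. Illustrative only.

def fraction_of_data_touched(n_users, friends_per_user, fraction_online):
    online = n_users * fraction_online
    # Probability that a given profile is a friend of none of the online
    # users, assuming uniform, independent friendships (a crude model).
    p_not_read = (1 - friends_per_user / n_users) ** online
    return 1 - p_not_read

# 1% of 10 million users online, 130 friends each:
print(fraction_of_data_touched(10_000_000, 130, 0.01))  # ~0.73
```

Even under this toy model, 1% of users online reach roughly three quarters of the data set, which is why the data is so hard to cluster or partition.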
Can you give an overview of the free and open source software that Facebook is built upon?
The architecture has been based on open source from the start: Apache, PHP, MySQL, memcached. At the top level it looks very similar to how it did five years ago, but each of these systems has been heavily modified over the years.
For example, we built systems in memcached and MySQL to handle geographically remote datacenters, and we've significantly improved the performance of memcached. In some areas we've built custom solutions where there was no open source alternative (for example Scribe, which we've since open sourced) or where the specifics of our application could give us a large performance boost. As an example, our photo storage system Haystack gets many times the throughput of a traditional file server with small files, but depends on the fact that our photo files never change.
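That never-changes property is what an append-only design exploits. A toy Python sketch of the general idea only (in-memory, with an invented API; not Facebook's actual Haystack implementation):

```python
# Toy append-only blob store illustrating the idea behind Haystack:
# because stored blobs never change, one large file plus an in-memory
# offset index replaces per-file filesystem metadata, so a read costs
# one seek and one read. This is NOT the real Haystack code.
import io

class AppendOnlyStore:
    def __init__(self):
        self._log = io.BytesIO()   # stands in for one big on-disk file
        self._index = {}           # key -> (offset, length), kept in RAM

    def put(self, key, blob):
        offset = self._log.seek(0, io.SEEK_END)  # always append at the end
        self._log.write(blob)
        self._index[key] = (offset, len(blob))

    def get(self, key):
        offset, length = self._index[key]        # no directory lookups
        self._log.seek(offset)
        return self._log.read(length)

store = AppendOnlyStore()
store.put("photo1.jpg", b"\xff\xd8...jpeg bytes...")
print(store.get("photo1.jpg") == b"\xff\xd8...jpeg bytes...")  # True
```

The trade-off is that blobs can never be updated in place, which is exactly the constraint the answer mentions photos satisfying.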
In March 2009, before you joined Facebook, you predicted that Facebook would become the most open social network. Now that you have been Senior Open Programs Manager at Facebook for a few months, has this prediction come true already?
In a lot of ways Facebook has been pushing the edge of what's possible with social networking for the past few years. This means that we've often developed new technologies before there were standards to interoperate with. Part of my team's role is taking a step back and looking at where there are standards that we can help evolve as well as help to adopt or develop new technologies with others from the start.
Have you enjoyed previous FOSDEM editions?
Honestly, Scott and I have never attended FOSDEM previously. Our VP of Engineering Mike Schroepfer spoke a few years ago and absolutely loved it!
This interview is licensed under a Creative Commons Attribution 2.0 Belgium License.