I recently extracted some data from the Drupal project's CVS and Git logs to see how the number of code contributors and total contributions have changed over time. If there was any doubt about our continued growth, the resulting charts demolish it.
Aggregated results from core and contributed modules.
As can be seen from the graphs, there is a pretty big spike in commit activity post-Git migration.
We opened the Drupal 7 development branch in February 2008 and released Drupal 7.0 in January 2011. This graph shows the stacked commit history from beginning to end. I appointed Angie as my Drupal 7 co-maintainer in August 2008, after having been the sole committer for 7 months. The highest peak, around August 2009, corresponds to the first attempted Drupal 7 code freeze; momentum steadily built up towards that initial code freeze date. Interestingly, we remained most productive during the extended code freeze period ... maybe several code freezes are better than one? ;-)
I averaged 3.6 commits per day, while Angie averaged 2.6 commits per day (including weekends and holidays).
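The post doesn't include the scripts used to pull these numbers, but as a rough sketch (the log format, field separator, and function name here are my own assumptions, not the actual tooling), per-month commit counts can be tallied from `git log --pretty='%ad|%an' --date=short` output like this:

```python
from collections import Counter
from datetime import datetime

def commits_per_month(log_text):
    """Tally commits per (year, month) from lines of 'YYYY-MM-DD|Author Name'."""
    counts = Counter()
    for line in log_text.strip().splitlines():
        date_str, _author = line.split("|", 1)
        d = datetime.strptime(date_str, "%Y-%m-%d")
        counts[(d.year, d.month)] += 1
    return counts

# Hypothetical sample of what `git log --pretty='%ad|%an' --date=short` emits:
sample = """\
2009-08-01|Dries Buytaert
2009-08-15|Angie Byron
2009-08-20|Dries Buytaert
2009-09-02|Angie Byron
"""
print(commits_per_month(sample))  # 3 commits in 2009-08, 1 in 2009-09
```

Feeding the full core and contrib logs through something like this, and charting the monthly totals per author, is enough to reproduce stacked graphs of this kind.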
Roughly 8 months ago at DrupalCon Paris, we launched Acquia Hosting. In this blog post, I wanted to give a quick update on where we are after 8 months.
For those who don't know, Acquia Hosting is a highly-available cloud-based hosting platform tuned for Drupal performance and scalability. From a technology point of view, we've built tools to automatically launch multi-server hosting environments optimized for Drupal. It is built on Amazon Web Services (AWS) -- e.g. Amazon EC2 and Amazon S3 -- using Open Source components such as Varnish, Puppet, GlusterFS, Nginx and more. If you are interested in the technical details, I highly recommend watching Barry Jaspan's DrupalCon San Francisco presentation on the challenges of hosting Drupal on AWS -- I'm biased, but it is the best technical presentation that I've seen on hosting websites on AWS. The presentation is the result of 2.5 years of experience building products exclusively on Amazon Web Services and having to maintain close to 200 EC2 instances.
At Acquia, we're all very proud of what we've built. For example, we were recently able to have a new, enterprise-scale Acquia Hosting customer online only a few days after they first contacted us. It takes most hosting companies weeks or months to roll out, configure and tweak all the servers required to host a high-traffic site like this one. In just a few days, we scaled past the limits of their previous hosting provider and flawlessly served 3 million page views per hour (i.e. 830+ page views per second or 5000+ HTTP requests per second -- yes, Drupal scales). I hope the customer will allow us to write up a detailed case study at some point. It is a real success story for Drupal, Acquia Hosting, Amazon Web Services and cloud computing in general: incredible time to market, great performance and scalability. We've come a long way since we started working on a Drupal hosting product about a year ago.
The way we started work on Acquia Hosting is the way we have continued: with a very strong focus on engineering. Our first area of focus was reliability. The results included multiple, redundant web nodes; real-time database replication; backups; monitoring infrastructure (we track 25+ system parameters); customer isolation; and so on. Next, we focused our efforts on improving Acquia Hosting's performance by adding tools like Varnish for page caching; reorganizing parts of our underlying architecture; lots of tweaking to Apache, PHP and MySQL; and repeated rounds of realistic load testing. Along the way, we developed deployment tools to make it easy to roll out and automatically configure our customers' EC2 instances -- it takes just a matter of minutes to upgrade a site's capacity.
Considering our costs and other metrics eight months into the hosting business adventure, the real value of our hosting offering comes not from the technology alone, but rather from our support team's work while getting customers' sites online and helping them day in, day out. Once servers are provisioned for a new site, getting customers up and running involves detailed site audits (making sure they don't have core hacks, analyzing their site architecture, etc.), teaching them how the Acquia Hosting environment works, helping them learn to best leverage clusters of servers, doing load testing, and helping them get over performance bottlenecks (slow or excessive SQL queries, expensive uncached Views or blocks, etc.). At the end of the day, our team's deep knowledge of Drupal and of our technology stack is the essence and ultimate value proposition of our Drupal hosting offering.
Going forward, a top priority is to make the process of getting new customers online easier for us and better for them. Among other things, that means developing more "self-service"-style systems, improved customer dashboards and documentation, and streamlined, focused support operations to make sure our customers are getting their questions answered and their problems fixed in the shortest time possible so they can worry about their business and not their websites.
Free Acquia Hosting program
We also announced a free Acquia Hosting program. To help support the Drupal community, we give free Acquia Hosting to sites for non-profit groups that promote Drupal use and adoption. We're now hosting 25 community websites including Drupal Edu, SpreadDrupal, Drupal Dojo, Drupal Catalan, Design 4 Drupal Boston and more. There are about 50 more Drupal community sites in the backlog waiting to be set up with an Acquia Hosting account. Yet another reason to make it easier to get new users and customers up and running!
Two weeks ago at DrupalCon San Francisco I gave my traditional state of Drupal presentation. A total of 6000 people watched my keynote live: 3000 were present at DrupalCon, and another 3000 watched the live video stream. Still, a lot of people asked me for my slides. So in good tradition, you can download a copy of my slides (PDF, 48 MB) or you can watch a video recording of my keynote on archive.org.
More proof that speed as perceived by the end user matters. This time from a Google Research paper (PDF). Google's experiments demonstrate that increasing web search latency by 100 to 400 ms reduces the daily number of searches per user by 0.2% to 0.6%. Furthermore -- and this is where it gets really interesting -- the longer users are exposed to the delay, the fewer searches they do. In other words, the cost of slower performance increases over time and persists.
To use Peter Van Dijck’s words:
In other words, if your website is a little slower, users will use it less (we knew that), but they'll also use it less and less over time, and when it speeds up again, they’ll still use it less than before the slowdown.
Based on their observations, Google suggests that site builders think twice about adding a feature that hurts performance if the benefit of the feature is unproven.