Making Drupal 8 fly

In my travels to talk about Drupal, everyone asks me about Drupal 8's performance and scalability. Modern websites are much more dynamic and interactive than they were 10 years ago, which makes it harder to build sites that are both rich and fast. It made me realize that I should write up a summary of some of the most exciting performance and scalability improvements in Drupal 8. After all, Drupal 8 will leapfrog many of its competitors in how modern web applications are architected and scaled. Many of these improvements benefit both small and large websites, but they also allow us to build even bigger websites with Drupal.

More precise cache invalidation

One of the strategies we employ in making Drupal fast is "caching". This means we try to generate pages or page elements one time and then store them so future requests for those pages or page elements can be served faster. If an item is already cached, we can simply grab it without going through the building process again (known as a "cache hit"). Drupal stores each cache item in a "cache bin" (a database table, Memcache object, or whatever else is appropriate for the cache backend in use).
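The basic pattern looks roughly like this; the cache ID and the mymodule_build_sidebar_stats() function are hypothetical, but \Drupal::cache() is the Drupal 8 API for talking to a cache bin.

// Hypothetical cache ID for an expensive-to-build page element.
$cid = 'mymodule:sidebar_stats';

if ($cache = \Drupal::cache()->get($cid)) {
  // Cache hit: the item was built on an earlier request, so reuse it.
  $data = $cache->data;
}
else {
  // Cache miss: build the item once and store it in the cache bin.
  $data = mymodule_build_sidebar_stats();
  \Drupal::cache()->set($cid, $data);
}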

In Drupal 7 and before, when one of these cache items changes and needs to be re-generated and re-stored (the cache gets "invalidated"), you can only delete a specific cache item, clear an entire cache bin, or use prefix-based invalidation. None of these three methods lets you invalidate all cache items that contain data of, say, user 200. In practice, the only method guaranteed to catch them all is clearing the entire cache bin, which means we usually invalidate far too much, resulting in poor cache hit ratios and wasted effort rebuilding cache items that haven't actually changed.

This problem is solved in Drupal 8 thanks to the concept of "cache tags": each cache item can have any number of cache tags. A cache tag is a compact string that describes the object being cached. Thanks to this extra metadata, we can now delete all cache items that use the user:200 cache tag, for example. This means we've deleted all the cache items we must delete, but not a single one more: optimal cache invalidation!

Example cache tags for different cache IDs.
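As a small sketch of what this looks like in code, a module could tag a cache item with the objects whose data it contains and later invalidate by tag; the cache ID and data are hypothetical, but Cache::invalidateTags() is the Drupal 8 API doing the work.

// Store a cache item and tag it with the objects whose data it contains.
\Drupal::cache()->set(
  'mymodule:user_profile_block:200',   // Hypothetical cache ID.
  $build,                              // The data being cached.
  \Drupal\Core\Cache\Cache::PERMANENT,
  ['user:200']                         // Cache tags describing the data.
);

// Later, when user 200 is updated or deleted, invalidate every cache item
// carrying the user:200 tag -- and nothing more.
\Drupal\Core\Cache\Cache::invalidateTags(['user:200']);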

And don't worry, we also made sure to expose the cache tags to reverse proxies, so that efficient and accurate invalidation can happen throughout a site's entire delivery architecture.

More precise cache variation

While accurate cache invalidation makes caching more efficient, there is more we did to improve Drupal's caching. We also make sure that cached items are optimally varied. If you vary too much, duplicate cache entries will exist with the exact same content, resulting in inefficient usage of caches (low cache hit ratios). For example, we don't want a piece of content to be cached per user if it is the same for many users. If you vary too little, users might see incorrect content as two different cache entries might collide. In other words, you don't want to vary too much nor too little.

In Drupal 7 and before, it was easy to make a cached item vary by user, by user role, and/or by page, and for blocks this could even be configured through the UI. However, more targeted variations (such as by language, by country, or by content access permissions) were more difficult to program and not typically exposed in a configuration UI.

In Drupal 8, we introduced a Cache Context API to allow developers and site builders to express these variations and to make them automatically available in the configuration UI.

Drupal cache contexts
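In code, a developer declares the variations a render array needs through its #cache keys; the markup here is hypothetical, but cache contexts such as user.roles and languages:language_interface ship with Drupal 8.

// A render array whose cached output varies by the user's roles and by the
// interface language -- no more and no less.
$build = [
  '#markup' => $content,
  '#cache' => [
    'contexts' => [
      'user.roles',
      'languages:language_interface',
    ],
  ],
];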

Server-side dynamic content substitution

Usually a page can be cached almost entirely except for a few dynamic elements. Often a page served to two different authenticated users looks identical except for a small "Welcome $name!" and perhaps their profile picture. In Drupal 7, this small personalization breaks the cacheability of the entire page (or rather, requires a cache context that's way too granular). Most parts of the page, like the header, the footer and certain blocks in the sidebars, don't change often and don't vary for each user, so why should you regenerate all those parts on every request?

In Drupal 8, thanks to the addition of #post_render_cache, that is no longer the case. Drupal 8 can render the entire page with some placeholder HTML for the name and profile picture. That page can then be cached. When Drupal has to serve that page to an authenticated user, it will retrieve it from the cache, and just before sending the HTML response to the client, it will substitute the placeholders with the dynamically rendered bits. This means we can avoid having to render the page over and over again, which is the expensive part, and only render those bits that need to be generated dynamically!
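As an illustration, a module could render a cacheable placeholder and register a callback that swaps it out just before the response is sent. This is a rough sketch only: the callback name and placeholder markup are made up, and the exact #post_render_cache structure changed during the Drupal 8 development cycle.

// The cacheable version of the page contains a placeholder instead of the
// user's name, so the same cached copy can be served to everyone.
$build['welcome'] = [
  '#markup' => '<p>Welcome <span data-placeholder="username"></span>!</p>',
  '#post_render_cache' => [
    'mymodule_substitute_username' => [[]],
  ],
];

// Called just before the HTML response is sent: replace the placeholder with
// the dynamically rendered, per-user bit.
function mymodule_substitute_username(array $element, array $context) {
  $name = \Drupal::currentUser()->getDisplayName();
  $element['#markup'] = str_replace(
    '<span data-placeholder="username"></span>',
    htmlspecialchars($name, ENT_QUOTES, 'UTF-8'),
    $element['#markup']
  );
  return $element;
}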

Client-side dynamic content substitution

Some elements that Drupal has rendered for the better part of a decade, such as the "new" and "updated" markers on comments, have always been generated on the server. That is not ideal, because these markers are different for every visitor and, as a result, they make caching pages with comments difficult.

The just-in-time substitution of placeholders with dynamic elements that #post_render_cache provides can help address this. In some cases, as with the comment markers, we can do even better and offload more work from the server to the client. A comment is posted at a certain time, and that timestamp doesn't vary per user. By embedding the comment timestamps as metadata in the DOM with a data-comment-timestamp="1424286665" attribute, we enable client-side JavaScript to render the comment markers: it fetches (and caches on the client side) the "last read" timestamp for the current user and simply compares the two numbers. Drupal 8 provides some framework code and an API to make this easy.
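A minimal server-side sketch (the render array below is hypothetical) shows the idea: the markup only carries the timestamp, and the per-user comparison happens in the browser.

// Hypothetical render array: the server only emits the comment's creation
// time as a data attribute plus an empty marker element. Client-side
// JavaScript (not shown) compares that value with the user's "last read"
// timestamp and fills in the "new" / "updated" marker, so the cached HTML
// stays identical for every visitor.
$build['new_marker'] = [
  '#markup' => '<span class="comment-new-marker" data-comment-timestamp="'
    . $comment->getCreatedTime() . '"></span>',
];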

A "Facebook BigPipe" render pipeline

With Drupal 8, we're very close to taking the client-side dynamic content substitution a step further, just like some of the world's largest dynamic websites do. Facebook has 1.35 billion monthly active users all requesting dynamic content, so why not learn from them?

The traditional page serving model has not kept up with the rise of highly personalized websites where different content is served to different users. In the traditional model, such as in Drupal 7, the entire page is generated before it is sent to the browser: while Drupal is generating a page, the browser is idle and wasting its cycles doing nothing. When Drupal finishes generating the page and sends it to the browser, the browser kicks into action while the web server sits idle. Facebook solves this with BigPipe. BigPipe delivers pages asynchronously instead; it parallelizes browser rendering and server processing. Instead of waiting for the entire page to be generated, BigPipe immediately sends a page skeleton to the client so it can start rendering that. Then the remaining content elements are requested and injected into their correct place. From the user's perspective the page is rendered progressively: the initial page content becomes visible much earlier, which improves the perceived speed of the site.

We've made significant improvements to the way Drupal 8 renders pages (presentation). By default, Drupal 8 core still implements the traditional model of assembling these pieces into a complete page in a single server-side request, but the independence of each piece and the architecture of the new rendering pipeline enable different "render strategies" to be experimented with: different methods for dynamic content assembly, such as BigPipe, Edge Side Includes, or other ideas for making the most optimal use of client, server, content delivery networks and reverse proxies. In all those examples, the idea is that we can send the primary content first so the client can start rendering that. Then we send the remaining Drupal blocks, such as the navigation menu or a 'Related articles' block, and have the browser, content delivery network or reverse proxy assemble or combine these blocks into a page.

A snapshot of the Drupal 8 render pipeline diagram that highlights where alternative render strategies can be implemented.

Some early experiments by Wim Leers in Acquia's OCTO show that we can improve performance by a factor of about 2 compared to a recent Drupal 8 development snapshot. These breakthroughs are enabled by leveraging the various improvements we made to Drupal 8.

And much more

But that is not all. The Drupal community has done much more, including: complete asset dependency information (which allowed us to ensure zero JavaScript is loaded by default for anonymous users and to send less data on AJAX requests), pluggable CSS/JS aggregation and minification (to support better optimization algorithms), and more. We've also made sure Drupal 8 is fast by default by shipping with better defaults: CSS/JS aggregation enabled, JavaScript loaded at the bottom of the page, block caching enabled, and so on.

All in all, there is a lot to look forward to in Drupal 8!

Special thanks to Acquia's Wim Leers, Alex Bronstein and Angie Byron for their contributions to this blog post.

Consumer Search running Drupal

ConsumerSearch.com has been redesigned and is now running Drupal. The site is a part of the About.com Group, a subsidiary of The New York Times Company.

ConsumerSearch.com gets about 5.5M unique visitors each month (and growing). I don't know what server infrastructure they run on, but with help from Jeremy at Tag1 Consulting, they configured Drupal to rely heavily on memcached and Drupal's built-in aggressive caching mode. Knowing Jeremy, they are probably trying to serve cached pages from disk rather than from the database.

Consumer search

Drupal in the cloud

It is not always easy to scale Drupal -- not because Drupal sucks, but simply because scaling the LAMP stack (including Drupal) takes no small amount of skill. You need to buy the right hardware, install load balancers, set up MySQL servers in master-slave mode, set up static file servers, set up web servers, get PHP working with an opcode cache, tie in a distributed memory object caching system like memcached, integrate with a content delivery network, watch security advisories for every component in your system, and configure and tune the hell out of everything.

Either you can do all of the above yourself, or you outsource it to a company that knows how to do this for you. Both are non-trivial and I can count the number of truly qualified companies on one hand. Tag1 Consulting is one of the few Drupal companies that excel at this, in case you're wondering.

My experience is that MySQL takes the most skill and effort to scale. While proxy-based solutions like MySQL Proxy look promising, I don't see strong signals that it will become fundamentally easier for mere mortals to scale MySQL.

It is not unlikely that in the future, scaling a Drupal site will be done using a radically different model. Amazon EC2, Google App Engine and even Sun Caroline are examples of the hosting revolution that is ahead of us. What is interesting is how these systems are already evolving: Amazon EC2 allows you to launch any number of servers, but you are pretty much on your own to take advantage of them. You still have to pick the operating system, and install and configure MySQL, Apache, PHP and Drupal. Not to mention the fact that you don't have access to a good persistent storage mechanism. No, Amazon S3 doesn't qualify, and yes, they are working to fix this by adding Elastic IP addresses and Availability Zones. Either way, Amazon doesn't make it easier to scale Drupal. Frankly, all it does is make capacity planning a bit easier ...

Then along come Amazon SimpleDB, Google App Engine and Sun Caroline. Just like Amazon EC2/S3, they provide instant scalability, only they move things up the stack a level. They provide a managed application environment on top of a managed hosting environment. Google App Engine provides APIs that allow you to do user management, e-mail communication, persistent storage, etc. You no longer have to worry about server management or all of the scale-out configuration. Sun Caroline seems to be positioned somewhere in the middle -- they provide APIs to provision lower-level concepts such as processes, disk, network, etc.

Unfortunately for Drupal, Google App Engine is Python-only, but more importantly, a lot of the concepts and APIs don't map onto Drupal. Also, the more I dabble with tools like Hadoop (MapReduce) and CouchDB, the more excited I get, but the more it feels like everything that we do to scale the LAMP stack is suddenly wrong. I'm trying hard to think beyond the relational database model, but I can't figure out how to map Drupal onto this completely different paradigm.

So while the center of gravity may be shifting, I've decided to keep an eye on Amazon's EC2/S3 and Sun's Caroline as they are "relational database friendly". Tools like Elastra are showing a lot of promise. Elastra claims to be the world's first infinitely scalable solution for running standard relational databases in an on-demand computing cloud. If they deliver what they promise, we can instantly scale Drupal without having to embrace a different computing model and without having to do all of the heavy lifting. Especially exciting is the fact that Elastra has teamed up with EnterpriseDB to make their version of PostgreSQL virtually expand across multiple Amazon EC2 nodes. I've already reached out to Elastra, EnterpriseDB and Sun to keep tabs on what is happening.

Hopefully, companies like Elastra, EnterpriseDB, Amazon and Sun will move fast because I can't wait to see relational databases live in the cloud ...

Database replication lag

As explained in an earlier blog post, we recently started using MySQL master-slave replication on drupal.org in order to provide the scalability necessary to accommodate our growing demands. With one or more replicas of our database, we can instruct Drupal to distribute or load balance the SQL workload among different database servers.

MySQL's master-slave replication is an asynchronous replication model. Typically, all the mutator queries (like INSERT, UPDATE, DELETE) go to a single master, and the master propagates all updates to the slave servers without synchronization or communication. While the asynchronous nature has its advantages, it also means that the slaves might be (slightly) out of sync.

Consider the following pseudo-code:

$nid = node_save($data);
$node = node_load($nid);

Because node_save() executes a mutator query (an INSERT or UPDATE statement), it has to be executed on the master so the master can propagate the changes to the slaves. Because node_load() uses read-only queries, it can go to the master or to any of the available slaves. Because of the lack of synchronization between master and slaves, there is one obvious caveat: when we execute node_load(), the slaves might not have been updated yet. In other words, unless we force node_load() to query the master, we risk not being able to present visitors with the data they just saved. In other cases, we risk introducing data inconsistencies due to race conditions.

So what is the best way to fix this?

  1. Our current solution on drupal.org is to execute all queries on the master, except for those that we know can't introduce race conditions. In our running example, this means that we'd choose to execute all node_load()s on the master, even in the absence of a node_save(). This limits our scalability, so it is nothing more than a temporary measure until we have a better solution in place.
  2. One way to fix this is to switch to a synchronous replication model. In such a model, all database changes will be synchronized across all servers to ensure that all replicas are in a consistent state. MySQL provides a synchronous replication model through the NDB cluster storage engine. Stability issues aside, MySQL's cluster technology works best when you avoid JOINs and sub-queries. Because Drupal is highly relational, we might have to rewrite large parts of our code base to get the most out of it.
  3. Replication and load balancing can be left to some of the available proxy layers, most notably Continuent's Sequoia and MySQL Proxy. Drupal connects to the proxy as if it were the actual database, and the proxy talks to the underlying databases. The proxy parses all the queries and propagates mutator queries to all the underlying databases to make sure they remain in a consistent state. Reads are only distributed among the servers that are up-to-date. This solution is transparent to Drupal, and should work with older versions of Drupal. The only downside is that it is not trivial to set up, and based on my testing, it requires quite a bit of memory.
  4. We could use database partitioning, and assign data to different shards in one way or another. This would reduce the replica lag to zero but as we don't have that much data or database tables with millions of rows, I don't think partitioning will buy drupal.org much.
  5. Another solution is to rewrite large parts of Drupal so it is "replication lag"-aware. In its most naive form, the node_load() function in our running example would get a second parameter that specifies whether the query should be executed on the master or not. The call would then be changed to node_load($nid, TRUE) when preceded by a node_save().
    I already concluded through research that this is not commonly done, probably because such a solution still doesn't provide any guarantees across page requests.
  6. A notable exception is MediaWiki, the software behind Wikipedia, which has documented best practices for dealing with replication lag. Specifically, they recommend querying the master (using a very fast query) to determine which version of the data has to be retrieved from the slave. If that version is not yet available on the slave due to replication lag, they simply wait for it to become available. In our running example, each node would get a version number, and node_load() would first retrieve the latest version number from the master and then use that version number to make sure it gets an up-to-date copy from the slave. If the right version isn't available yet, node_load() retries until it becomes available. A rough sketch of this approach follows below this list.
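Here is a minimal sketch of that last approach. The db_query_master() and db_query_slave() helpers are hypothetical (Drupal doesn't ship with them); they simply make explicit which server a query is routed to and are assumed to return the fetched result.

// Sketch of a "replication lag"-aware loader, in the spirit of MediaWiki's
// approach: check the master for the latest revision, then wait for a slave
// to catch up before reading from it.
function mymodule_node_load_fresh($nid) {
  // Cheap query on the master: what is the latest revision of this node?
  $vid = db_query_master('SELECT vid FROM {node} WHERE nid = %d', $nid);

  // Poll a slave until it has replicated that revision.
  for ($attempt = 0; $attempt < 20; $attempt++) {
    $node = db_query_slave('SELECT * FROM {node} WHERE nid = %d', $nid);
    if ($node && $node->vid >= $vid) {
      return $node;
    }
    usleep(50000); // Wait 50 ms for replication to catch up.
  }

  // Give up waiting and fall back to the master.
  return db_query_master('SELECT * FROM {node} WHERE nid = %d', $nid);
}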

YSlow

Yahoo! released YSlow, a Firefox extension that integrates with the popular Firebug tool. YSlow was originally developed as an internal tool at Yahoo! with the help of Steve Souders, Chief Performance at Yahoo! and author of O'Reilly's High Performance Web Sites book.

YSlow analyzes the front-end performance of your website and tells you why it might be slow. For each component of a page (images, scripts, stylesheets) it checks its size, whether it was gzipped, the Expires header, the ETag header, and so on. YSlow takes all this information into account and computes a performance grade for the page you are analyzing.

YSlow

The current YSlow score for the drupal.org front page is 74 (C). YSlow suggests that we reduce the number of CSS background images using CSS sprites, that we use a Content Delivery Network (CDN) like Akamai for delivering static files, and identifies an Apache configuration issue that affects the Entity Tags or ETags of static files. The problem is that, by default, Apache constructs ETags using attributes that make them unique to a specific server. A stock Apache embeds inode numbers in the ETag, which dramatically reduces the odds of the validity test succeeding on web sites with multiple servers; the ETags won't match when a browser gets the original component from server A and later tries to validate that component on server B.
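One common remedy on multi-server setups is to tell Apache to leave the inode out of the ETag, or to drop ETags entirely and rely on Expires/Last-Modified headers instead. A minimal httpd.conf sketch (adapt to your own configuration):

# Build ETags from modification time and size only, so the same file gets the
# same ETag on every web server behind the load balancer.
FileETag MTime Size

# Alternatively, disable ETags altogether (requires mod_headers for "Header"):
# FileETag None
# Header unset ETag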

Here are some other YSlow scores (higher is better):

From what I have seen, Apache configuration issues, and not CMS implementation issues, are the main source of low YSlow scores. Be careful not to draw incorrect conclusions from these numbers; they are often not representative of the CMS software itself.

And it doesn't change the fact that drupal.org is currently a lot slower than most of these other sites. That is explained by drupal.org's poor back-end performance, and not by the front-end performance as measured by YSlow. (We're working on adding a second database server to drupal.org.)
