A history of JavaScript across the stack

Did you know that JavaScript was created in just 10 days? In May 1995, Brendan Eich wrote the first version of JavaScript while working at Netscape.

For the first 10 years of JavaScript's life, professional programmers denigrated JavaScript because its target audience consisted of "amateurs". That changed in 2004 with the launch of Gmail. Gmail was the first popular web application that really showed off what was possible with client-side JavaScript. Competing e-mail services such as Yahoo! Mail and Hotmail featured extremely slow interfaces that used server-side rendering almost exclusively, with almost every action by the user requiring the server to reload the entire web page. Gmail began to work around these limitations by using XMLHttpRequest for asynchronous data retrieval from the server. Gmail's use of JavaScript caught the attention of developers around the world. Today, Gmail is the classic example of a single-page JavaScript app; it can respond immediately to user interactions and no longer needs to make roundtrips to the server just to render a new page.

A year later, in 2005, Google launched Google Maps, which used the same technology as Gmail to transform online maps into an interactive experience. With Google Maps, Google was also the first large company to offer a JavaScript API for one of its services, allowing developers to integrate Google Maps into their websites.

Google's XMLHttpRequest approach in Gmail and Google Maps ultimately came to be called Ajax (originally "Asynchronous JavaScript and XML"). Ajax described a set of technologies, of which JavaScript was the backbone, used to create web applications where data can be loaded in the background, avoiding the need for full page refreshes. This resulted in a renaissance period of JavaScript usage spearheaded by open source libraries and the communities that formed around them, with libraries such as Prototype, jQuery, Dojo and MooTools. (We added jQuery to Drupal core as early as 2006.)
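
The pattern is easy to illustrate. Below is a minimal sketch of the Ajax technique in the idiom of the era; the /api/messages endpoint and the "inbox" element are hypothetical, not Gmail's actual code, and JSON here stands in for the XML payloads common at the time:

```javascript
// Fetch data in the background with XMLHttpRequest and update only part of
// the page -- no full page refresh. Endpoint and element ID are made up.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/messages');
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    var messages = JSON.parse(xhr.responseText);
    // Update just the inbox counter in place.
    document.getElementById('inbox').textContent =
      messages.length + ' messages';
  }
};
xhr.send();
```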

In 2008, Google launched Chrome with a faster JavaScript engine called V8. The release announcement read: "We also built a more powerful JavaScript engine, V8, to power the next generation of web applications that aren't even possible in today's browsers." At launch, V8 improved JavaScript performance by 10x over Internet Explorer by compiling JavaScript code to native machine code before executing it. This caught my attention because I had recently finished my PhD thesis on the topic of JIT compilation. More importantly, this marked the beginning of different browsers competing on JavaScript performance, which helped drive JavaScript's adoption.

In 2010, Twitter made a move unprecedented in JavaScript's history. For its Twitter.com redesign that year, the company began implementing a new architecture in which both the server-side and client-side code were written almost entirely in JavaScript. On the server side, they built an API server that offered a single set of endpoints for their desktop website, their mobile website, their native apps for iPhone, iPad, and Android, and every third-party application. As a result, much of the UI rendering and corresponding logic moved to the user's browser: a JavaScript-based client fetches the data from the API server and renders the Twitter.com experience.

Unfortunately, the redesign caused severe performance problems, particularly on mobile devices. Lots of JavaScript had to be downloaded, parsed and executed by the user's browser before anything of substance was visible. The "time to first interaction" was poor. Twitter's new architecture broke new ground by offering a number of advantages over a more traditional approach, but it lacked support for various optimizations available only on the server.

Twitter suffered from these performance problems for almost two years. Finally, in 2012, Twitter reversed course by passing more of the rendering back to the server. The revised architecture renders the initial pages on the server, but asynchronously bootstraps a new modular JavaScript application to provide the fully featured interactive experience their users expect. The user's browser runs no JavaScript at all until after the initial content, rendered on the server, is visible. By using server-side rendering, the client-side JavaScript could be minimized; fewer lines of code meant a smaller payload to send over the wire and less code to parse and execute. This new hybrid architecture reduced Twitter's page load time by 80%!
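
To make the hybrid idea concrete, here is a rough sketch of the deferred-bootstrap pattern under my own assumptions (the bundle path is hypothetical, and this is not Twitter's actual code):

```javascript
// The server-rendered HTML above this script is already visible to the user.
// Only after the page has loaded do we pull in the JavaScript application,
// keeping its download, parse and execution off the critical rendering path.
window.addEventListener('load', function () {
  var script = document.createElement('script');
  script.src = '/assets/app.js'; // hypothetical application bundle
  script.async = true;
  document.body.appendChild(script);
});
```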

In 2013, Airbnb was the first to use Node.js to provide isomorphic (also called universal or simply shared) JavaScript. In the Node.js approach, the same framework is executed identically on the server side and the client side. On the server side, it provides an initial render of the page, with data supplied either directly through Node.js or through REST API calls. On the client side, the framework binds to DOM elements, "rehydrates" the HTML (updates the initial server-side render provided by Node.js), and makes asynchronous REST API calls whenever updated data is needed.

The biggest advantage Airbnb's isomorphic JavaScript has over Twitter's approach is a completely reusable rendering system. Because the client-side framework executes the same way on both server and client, rendering becomes much more manageable and debuggable: the primary distinction between the server-side and client-side renders is not the language or templating system used, but rather what data is provisioned by the server and how.

Universal JavaScript
In a universal JavaScript approach utilizing shared rendering, Node.js executes a framework (in this case Angular), which then renders an initial application state in HTML. This initial state is passed to the client side, which also loads the framework to provide whatever further client-side rendering is necessary, particularly to "rehydrate" or update the server-side render.
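
To make shared rendering concrete, here is a minimal, framework-free sketch under my own assumptions (no Angular, hypothetical file names); in a real universal app the framework would do this work:

```javascript
// server.js -- render() is the piece shared verbatim between server and client.
const http = require('http');

// Shared rendering function: application state in, markup out. In a real
// universal app this same module would be bundled for the browser as well.
function render(state) {
  return '<h1>' + state.title + '</h1><p>' + state.likes + ' likes</p>';
}

http.createServer(function (req, res) {
  const state = { title: 'Hello from the server', likes: 42 };
  res.writeHead(200, { 'Content-Type': 'text/html' });
  // Ship the server-side render plus the serialized state the client
  // rehydrates from.
  res.end(
    '<div id="app">' + render(state) + '</div>' +
    '<script>window.__STATE__ = ' + JSON.stringify(state) + ';</script>' +
    '<script src="/client.js"></script>'
  );
}).listen(3000);

// client.js (served separately) reuses the very same render() function:
//   document.getElementById('app').innerHTML = render(window.__STATE__);
//   // ...then bind event handlers and fetch fresh data over REST as needed.
```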

JavaScript has come a long way: from a prototype written in 10 days to being used across the stack by some of the largest websites in the world. Long gone are the days of clunky browser implementations whose APIs changed depending on whether you were using Netscape or Internet Explorer. It took JavaScript 20 years, but it is finally considered an equal partner to traditional, well-established server-side languages.

When traffic skyrockets, your site shouldn't go down

This week's Grammy Awards is one of the best examples of the high-traffic event websites that Acquia is so well known for. This marks the fourth time we have hosted the Grammys' website. We saw close to 5 million unique visitors requesting nearly 20 million pages on the day of the awards and the day after. From television's Emmys to Super Bowl advertisers' sites, Acquia has earned its reputation for keeping customers' Drupal sites humming during the most crushing peaks of traffic.

These "super spikes" aren't always fun. For the developers building these sites to the producers updating each site during the event, nothing compares to the sinking feeling when a site fails when it is needed the most. During the recent Superbowl, one half-time performer lost her website (not on Drupal), giving fans the dreaded 503 Service Unavailable error message. According to CMSWire: "Her website was down well past midnight for those who wanted to try and score tickets for her tour, announced just after her halftime show performance". Yet for Bruno Mars' fans, his Acquia-based Drupal site kept rolling even as millions flooded his site during the half-time performance.

For the Grammys, we can plan ahead and expand their infrastructure prior to the event. This is easy thanks to Acquia Cloud's elastic platform capacity. Our technical account managers and support teams work with the producers at the Grammys to make sure the right infrastructure and configuration are in place. Specifically, we simulate award-night traffic as best we can and use load testing to prepare the infrastructure accordingly. If needed, we add server capacity during the event itself. Just prior to the event, Acquia takes a 360-degree look at the site to ensure that all of the stakeholders are aligned, whether internal to Acquia or external at a partner. We have technical staff on site during the event, and remote teams that provide around-the-clock coverage before and after.

Few people know what goes on behind the scenes during these super spikes, but that is our biggest source of pride: our work is often invisible, and a job well done means that our customer's best day didn't turn into their worst.

In memoriam: Richard Burford

It is with great sadness that we learned last week that Richard Burford had passed away. This is a tragic loss for his family, for Acquia, for the Drupal community, and for the broader open source world. Richard was a Sr. Software Engineer at Acquia for three and a half years (I still remember him interviewing with me), known as psynaptic in the Drupal community. Richard had been a member of the Drupal community for more than nine years. During that time, he contributed hundreds of patches across multiple projects, started a Drupal user group in his area, and helped drive the Drupal community in the UK, where he lived. Richard was a great person, a dedicated and hard-working colleague, a generous contributor to Drupal, and a friend. He was 36 years young, with a wife and three children. He was the sole income earner for his family, so a fundraising campaign has been started to support them during these difficult times; please consider contributing.

Turning Drupal outside-in

There has been a lot of discussion around the future of the Drupal front end both on Drupal.org (#2645250, #2645666, #2651660, #2655556) and on my blog posts about the future of decoupled Drupal, why a standard framework in core is a good idea, and the process of evaluating frameworks. These all relate to my concept of "progressive decoupling", in which some portions of the page are handed over to client-side logic after Drupal renders the initial page (not to be confused with "full decoupling").

My blog posts have drawn a variety of reactions. Members of the Drupal community, including Lewis Nyman, Théodore Biadala and Campbell Vertesi, have written blog posts with their opinions, as has Ed Faulkner of the Ember community. Last but not least, in response to my last blog post, Google changed Angular 2's license from Apache to MIT for better compatibility with Drupal. I read all the posts and comments with great interest and want to thank everyone for the feedback; the open discussion around this is nothing short of amazing. This is exactly what I hoped for: community members from around the world brainstorming about the proposal based on their experience, because only with combined constructive criticism will we arrive at the best possible solution.

Based on the discussion, rather than selecting a client-side JavaScript framework for progressive decoupling right now, I believe the overarching question the community wants to answer first is: How do we keep Drupal relevant and widen Drupal's adoption by improving the user experience (UX)?

Improving Drupal's user experience is a topic near and dear to my heart. Drupal's user experience challenges led me to invite Mark Boulton to redesign Drupal 7, to create the Spark initiative to improve the authoring experience for Drupal 8, and to continue supporting usability-related initiatives. In fact, the impetus behind progressive decoupling and adopting a client-side framework is the need to improve Drupal's user experience.

It took me a bit longer than planned, but I wanted to take the time to address some of the concerns and share more of my thoughts about improving Drupal's UX (and JavaScript frameworks).

To iterate or to disrupt?

In his post, Lewis writes that the issues facing Drupal's UX "go far deeper than code" and that many of the biggest problems found during last year's Drupal 8 usability study would not be resolved by a JavaScript framework. This is true; the results of the study show that Drupal can confuse users with its complex mental models and terminology, but they also show how modern features like real-time previews and in-page block insertion are increasingly assumed to be available.

To date, most of our UX improvements have come from an iterative process, one that converges on a more refined end state by removing problems from the current state. True innovation, however, also requires disruptive thinking: introducing entirely new ideas by removing all constraints and imagining what an ideal result would look like.

I think we need to recognize that while some of the documented usability problems coming out of the Drupal 8 usability study can be addressed by making incremental changes to Drupal's user experience (e.g. our terminology), other well-known usability problems most likely require a more disruptive approach (e.g. our complex mental model). I also believe that we must acknowledge that disruptive improvements are possibly more impactful in keeping Drupal relevant and widening Drupal's adoption.

At this point, to get ahead and lead, I believe we have to do both. We have to iterate and disrupt.

From inside-out to outside-in

Let's forget about Drupal for a second and observe the world around us. Think of all the web applications you use on a regular basis, and consider the interaction patterns you find in them. In popular applications like Slack, the user can perform any number of operations to edit preferences (such as color scheme) and modify content (such as in-place editing) without incurring a single full page refresh. Many elements of the page can be changed without the user's flow being interrupted. Another example is Trello, in which users can create new lists on the fly and then add cards to them without ever having to wait for a server response.

Contrast this with Drupal's approach, where any complex operation requires the user to have detailed prior knowledge about the system. In our current mental model, everything begins in the administration layer at the most granular level and requires an unmapped process of bottom-up assembly. A user has to make a content type, add fields, create some content, configure a view mode, build a view, and possibly make the view the front page. If each individual step is already this involved, consider how much more difficult it becomes to traverse them in the right order to finally see an end result. Drupal's current model is very powerful, but the problem is that it is "inside-out". This is why it would be disruptive to move Drupal towards an "outside-in" mental model. In this model, I should be able to start entering content, click anything on the page, seamlessly edit any aspect of its configuration in place, and see the change take effect immediately.

Drupal 8's in-place editing feature is a good start at this: it lets users edit what they see without interrupting their workflow, with faster previews and without having to figure out what kind of element something is before they can start editing.

Making it real with content modeling

Eight years ago, in 2007, I wrote about a database product called DabbleDB. I shared my belief that it was important to move CCK and Views into Drupal core and to learn from DabbleDB's integrated approach. DabbleDB was acquired by Twitter in 2010, but you can still find an eight-year-old demo video on YouTube. While DabbleDB's focus was different, and its UX is now obsolete, there is still a lot we can learn from it today: (1) it shows a more integrated experience between content creation, content modeling, and creating views of content; (2) it takes more of an outside-in approach; (3) it uses far less intimidating terminology while offering very powerful capabilities; and (4) it uses a lot of in-place editing. At a minimum, DabbleDB could give us some inspiration for what a better, integrated content modeling experience could look like, with the caveat that the UX should be as effortless as possible to match modern standards.

Other new data modeling approaches with compelling user experiences have recently entered the landscape. These include back end-as-a-service (BEaaS) solutions such as Backand, which provides a visually clean drag-and-drop interface for data modeling and helpful features for JavaScript application developers. Our use cases are not quite the same, but Drupal would benefit immensely from a holistic experience for content modeling and content views that incorporates both the rich feature set of DabbleDB and the intuitive UX of Backand.

This sort of vision was not possible in 2007, when CCK was a contributed module for Drupal 6. It still wasn't possible in Drupal 7, when Views existed as a separate contributed module. But now that both CCK and Views are in Drupal 8 core, we can finally start to think about how to integrate the two more deeply. This kind of integration would be nontrivial, but it could dramatically simplify Drupal's UX. That is exciting, because so many people are attracted to Drupal precisely because of features like CCK and Views. Taking an integrated approach like DabbleDB's, paired with a seamless and easy-to-use experience like Slack's, Trello's and Backand's, is exactly the kind of disruptive thinking we should do.

What most of the examples above have in common are in-place editing, immediate previews, no page refreshes, and non-blocking workflows. The implications for our form and render systems of providing configuration changes directly on the rendered page are significant. Achieving this requires robust state management and rendering on the client side as well as the server side. In my vision, Twig will provide structure for the overall page and its non-interactive portions, but more JavaScript will very likely be necessary for certain parts of the page in order to achieve the UX that all users of the web have come to expect.

We shouldn't limit ourselves to this one example, as there are a multitude of Drupal interfaces that could all benefit from both big and small changes. We all want to improve Drupal's user experience — and we have to. To do so, we have to constantly iterate and disrupt. I hope we can all collaborate on figuring out what that looks like.

Special thanks to Preston So and Kevin O'Leary for contributions to this blog post and to Wim Leers for feedback.

Drupal: 15 years old and still gaining momentum

On December 29, 2000, I made a code commit that would change my life; it is in this commit that I called my project "Drupal" and added the GPL license to it.

Drupal name and license
The commit where I dubbed my website project "Drupal" and added the GPL license.

A couple of weeks later, on January 15, 2001, exactly 15 years ago today, I released Drupal 1.0.0 into the world. The early decisions to open-source Drupal and use the GPL license set the cornerstone principles for how our community shares with one another and builds upon each other's achievements to this day.

Drupal is now 15 years old. In internet terms, that is an eternity. In 2001, only 7 percent of the world's population had internet access. The mobile internet had not entered the picture, less than half of the people in the United States had a mobile phone, and AT&T had just introduced text messaging. People searched the web with Lycos, Infoseek, AltaVista and HotBot. Google -- launched in 1998 as a Stanford University research project -- was still a small, private company just beginning its rise to prominence. Google AdWords, now a $65 billion business, had fewer than 500 customers when Drupal launched. Chrome, Firefox, and Safari didn't exist yet; most people used Netscape, Opera or Internet Explorer. New ideas for sharing and exchanging content, such as "public diaries" and RSS, had yet to gain widespread acceptance, and Drupal was among the first to support them. Wikipedia was launched on the same day as Drupal and sparked the rise of user-generated content. Facebook and Twitter didn't exist until 4-5 years later. Proprietary software vendors started to feel threatened by open source; most didn't understand how a world-class operating system could coalesce out of part-time hacking by several thousand developers around the world.

Looking back, Drupal has not only survived massive changes in our industry; it has also helped drive them. Over the past decade and a half, I've seen many content management systems emerge and become obsolete: Vignette, Interwoven, PHP-Nuke, and Scoop were all popular at some point, but Drupal has outlived them all. A big reason is that, from the very beginning, we have been about constant evolution and reinvention, painful as it is.

Keeping up with the pace of the web is a funny thing. Sometimes you'll look back on choices made years ago and think, "Well, I'm glad that was the right decision!" For example, Drupal introduced "hooks" and "modules" early on, concepts that are commonplace in today's platforms. At some point, you could even find some of my code in WordPress, which Matt Mullenweg started in 2003 with some inspiration from Drupal. Another fortuitous early decision was to focus Drupal on the concept of "nodes" rather than "pages". It wasn't until 10 years later, with the rise of mobile, that we started to see the web revolve less and less around pages. A node-based approach makes it possible to reuse content in different ways for different devices. In a way, much of the industry is still catching up to that vision. Even though the web is a living, breathing thing, there are a lot of things that we got right.

Other times, we got it wrong. For example, we added support for OpenID, which never took off. In the early days I focused, completely and utterly, on the aesthetics of Drupal's code. I spent days trying to do things better: fewer lines of code, more elegant than what existed elsewhere. But in the process, I didn't focus enough on end-user usability, shunned JavaScript for too long, and later tried to improve usability by adding a "dashboard" and "overlay".

In the end, I feel fortunate that our community is willing to experiment and break things to stay relevant. Most recently, with the release of Drupal 8, we've made many big changes that will fuel Drupal's continued adoption. I believe we got a lot of things right in Drupal 8 and that we are on the brink of another new and bright era for Drupal.

I've undergone a lot of personal reinvention over the past 15 years too. In the early days, I spent all my time writing code and building Drupal.org. I quickly learned that a successful open source project requires much more than writing code. As Drupal started to grow, I found myself an "accidental leader" and worried about our culture, scaling the project, attracting a strong team of contributors, focusing more and more on Drupal's end-users, growing the commercial ecosystem around Drupal, starting the Drupal Association, and providing vision. Today, I wear a lot of different hats: manager of people and projects, evangelist, fundraiser, sponsor, public speaker, and BDFL. At times, it is difficult and overwhelming, but I would not want it any other way. I want to continue to push Drupal to reach new heights and new goals.

Today we risk losing much of the privacy, serendipity and freedom of the web we know. As the web evolves from a luxury to a basic human right, it's important that we treat it that way. To increase our impact, we have to continue to make Drupal easier to use. I'd love to help build a world where people's privacy is safe and Drupal is more approachable. And as the pace of innovation continues to accelerate, we have to think even more about how to scale the project, remain agile and encourage experimentation. I think about these issues a lot, and am fortunate enough to work with some of the smartest people I know to build the best possible version of the web.

So, here is to another 15 years of evolution, reinvention, and continued growth. No one knows what the web will look like 15 years in the future, but we'll keep doing our best to guide Drupal responsibly.
