The rise of Drupal in India

Earlier this week I returned from DrupalCon Asia, which took place at IIT Bombay, one of India's premier engineering universities. I wish I could have bottled up all the energy and excitement to take home with me. From dancing on stage, to posing for what felt like a million selfies, to a motorcycle giveaway, this DrupalCon was unlike any I've seen before.

DrupalCon group photo
A little over 1,000 people attended the first DrupalCon in India. For 82% of the attendees, it was their first DrupalCon. There was also much better gender diversity than at other DrupalCons.

The excitement and interest around Drupal have been growing fast since I last visited in 2011. DrupalCamp attendance in both Delhi and Mumbai has exceeded 500 participants. There have also been DrupalCamps held in Hyderabad, Bangalore, Pune, Ahmedabad, Jaipur, Srinagar, Kerala and other areas.

Indian Drupal companies like QED42, Axelerant, Srijan and ValueBound have made meaningful contributions to Drupal 8. The reason? Visibility on Drupal.org through the credit system helps them win deals and hire the best talent. ValueBound said it best when I spoke to them: "With our visibility on drupal.org, we no longer have to explain why we are a great place to work and that we are experts in Drupal."

Also present were the large system integrators (Wipro, TATA Consultancy Services, Capgemini, Accenture, MindTree, etc.). TATA Consultancy Services has 400+ Drupalists in India, well ahead of the others, who have between 100 and 200 Drupalists each. Large digital agencies such as Mirum and AKQA also sent people to DrupalCon. They are all expanding their Drupal teams in India to meet growing demand from their offices around the world. The biggest challenge across the board? Finding Drupal talent. I was told that TATA Consultancy Services allows many of its developers to contribute back to Drupal, which is why it has been able to hire faster. More evidence that the credit system is working in India.

The government is quickly adopting Drupal. MyGov.in is one of many great examples; this portal was established by India's central government to promote citizen participation in government affairs. The site reached nearly two million registered users in less than a year. The government's shifting attitude toward open source is a big deal; historically, the Indian government pushed back against open source, in part because large organizations like Microsoft funded many of the educational programs in India. The tide changed in 2015, when the Indian government announced that open source software should be preferred over proprietary software for all e-government projects. Needless to say, this is great news for Drupal.

Another initiative that stood out was the Drupal Campus Ambassador Program. The aim of this program is to appoint Drupal ambassadors in every university in India to introduce more students to Drupal and help them with their job search. It is early days for the program, but I recommend we pay attention to it, and consider scaling it out globally if successful.

Last but not least, there was FOSSEE (Free and Open Source Software for Education), a government-funded program that promotes open source software in academic institutions, along with its sister project, Spoken Tutorial. To date, 2,500 colleges participate in the program and more than 1 million students have been trained on open source software. With the spoken part of their videos translated into 22 local languages, students gain the ability to self-study and further their education outside of the classroom. I was excited to hear that FOSSEE plans to add a Spoken Tutorial series on Drupal to its offerings. There is strong demand for affordable Drupal training and certification throughout India's technical colleges, so the idea of encouraging millions of Indian students to take a free Drupal course is very exciting; even if only 1% of them decide to contribute back, this could be a total game changer.

Open source makes a lot of sense for India's thriving tech community. It is difficult to grasp the size of the opportunity for Drupal in India and how fast its adoption has been growing. I have a feeling I will be back in India more than once to help support this growing commitment to Drupal and open source.

A history of JavaScript across the stack

Did you know that JavaScript was created in just 10 days? In May 1995, Brendan Eich wrote the first version of JavaScript while working at Netscape.

For the first 10 years of JavaScript's life, professional programmers denigrated JavaScript because its target audience consisted of "amateurs". That changed in 2004 with the launch of Gmail. Gmail was the first popular web application that really showed off what was possible with client-side JavaScript. Competing e-mail services such as Yahoo! Mail and Hotmail featured extremely slow interfaces that used server-side rendering almost exclusively, with almost every action by the user requiring the server to reload the entire web page. Gmail began to work around these limitations by using XMLHttpRequest for asynchronous data retrieval from the server. Gmail's use of JavaScript caught the attention of developers around the world. Today, Gmail is the classic example of a single-page JavaScript app; it can respond immediately to user interactions and no longer needs to make roundtrips to the server just to render a new page.
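To make that concrete, here is a minimal sketch of the XMLHttpRequest pattern Gmail popularized. The /messages endpoint and the inbox element are hypothetical placeholders; only the browser APIs are real:

    // Ask the server for data in the background, then update just one
    // region of the page instead of reloading it entirely.
    // '/messages' and 'inbox' are hypothetical placeholders.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/messages', true); // true = asynchronous
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4 && xhr.status === 200) {
        document.getElementById('inbox').innerHTML = xhr.responseText;
      }
    };
    xhr.send();

The user keeps reading or typing while the request is in flight; the page only changes when the response arrives.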

A year later, in 2005, Google launched Google Maps, which used the same technology as Gmail to transform online maps into an interactive experience. With Google Maps, Google was also the first large company to offer a JavaScript API for one of its services, allowing developers to integrate Google Maps into their websites.

Google's XMLHttpRequest approach in Gmail and Google Maps ultimately came to be called Ajax (originally "Asynchronous JavaScript and XML"). Ajax described a set of technologies, with JavaScript as the backbone, used to create web applications where data can be loaded in the background, avoiding the need for full page refreshes. This resulted in a renaissance period of JavaScript usage spearheaded by open source libraries such as Prototype, jQuery, Dojo and MooTools, and the communities that formed around them. (We added jQuery to Drupal core as early as 2006.)
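For comparison, here is roughly the same request as the earlier sketch, expressed with jQuery, again with a hypothetical URL and element. Collapsing the Ajax boilerplate to a couple of lines is a big part of why these libraries took off:

    // The same asynchronous request, expressed with jQuery.
    // '/messages' and '#inbox' are hypothetical placeholders.
    $.get('/messages', function (data) {
      $('#inbox').html(data); // update one region of the page in place
    });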

In 2008, Google launched Chrome with a faster JavaScript engine called V8. The release announcement read: "We also built a more powerful JavaScript engine, V8, to power the next generation of web applications that aren't even possible in today's browsers." At launch, V8 improved JavaScript performance by 10x over Internet Explorer by compiling JavaScript code to native machine code before executing it. This caught my attention because I had recently finished my PhD thesis on the topic of JIT compilation. More importantly, this marked the beginning of different browsers competing on JavaScript performance, which helped drive JavaScript's adoption.

In 2010, Twitter made a move unprecedented in JavaScript's history. For the Twitter.com redesign, they began implementing a new architecture in which substantial portions of both the server-side and client-side code were written in JavaScript. On the server side, they built an API server that offered a single set of endpoints for their desktop website, their mobile website, their native apps for iPhone, iPad and Android, and every third-party application. As a result, much of the UI rendering and corresponding logic moved to the user's browser: a JavaScript-based client fetches the data from the API server and renders the Twitter.com experience.
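In outline, the client side of such an architecture looks something like the sketch below. The /api/timeline endpoint, the tweet fields, and renderTweet() are all hypothetical, but the shape is the key idea: one JSON API serves every client, and all rendering happens in the browser.

    // A hypothetical client that renders the UI from a JSON API.
    function renderTweet(tweet) {
      return '<li>' + tweet.user + ': ' + tweet.text + '</li>';
    }

    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/api/timeline', true);
    xhr.onload = function () {
      var tweets = JSON.parse(xhr.responseText);
      // The server only sends data; the markup is built client side.
      document.getElementById('timeline').innerHTML =
        tweets.map(renderTweet).join('');
    };
    xhr.send();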

Unfortunately, the redesign caused severe performance problems, particularly on mobile devices. Lots of JavaScript had to be downloaded, parsed and executed by the user's browser before anything of substance was visible. The "time to first interaction" was poor. Twitter's new architecture broke new ground by offering a number of advantages over a more traditional approach, but it lacked support for various optimizations available only on the server.

Twitter suffered from these performance problems for almost two years. Finally in 2012, Twitter reversed course by passing more of the rendering back to the server. The revised architecture renders the initial pages on the server, but asynchronously bootstraps a new modular JavaScript application to provide the fully-featured interactive experience their users expect. The user's browser runs no JavaScript at all until after the initial content, rendered on the server, is visible. By using server-side rendering, the client-side JavaScript could be minimized; fewer lines of code meant a smaller payload to send over the wire and less code to parse and execute. This new hybrid architecture reduced Twitter's page load time by 80%!

In 2013, Airbnb was the first to use Node.js to provide isomorphic (also called universal or simply shared) JavaScript. In the Node.js approach, the same framework is executed identically on the server side and the client side. On the server side, it provides an initial render of the page, and data can be provided through Node.js or through REST API calls. On the client side, the framework binds to DOM elements, "rehydrates" the HTML (updates the initial server-side render provided by Node.js), and makes asynchronous REST API calls whenever updated data is needed.
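A minimal sketch of the shared-rendering idea follows, with made-up listing data and a simple environment check standing in for a real framework; this illustrates the pattern, not Airbnb's actual implementation:

    // One render function, executed identically on server and client.
    function render(listings) {
      return '<ul>' +
        listings
          .map(function (l) { return '<li>' + l.name + '</li>'; })
          .join('') +
        '</ul>';
    }

    // Hypothetical initial data; in practice it comes from Node.js
    // on the server or from REST API calls on the client.
    var initialData = [{ name: 'Cozy loft' }, { name: 'Beach house' }];

    if (typeof window === 'undefined') {
      // Server side (Node.js): produce the HTML for the initial response.
      console.log('<div id="app">' + render(initialData) + '</div>');
    } else {
      // Client side: re-run the same function to "rehydrate" the markup
      // whenever updated data arrives from a REST API.
      document.getElementById('app').innerHTML = render(initialData);
    }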

The biggest advantage Airbnb's JavaScript isomorphism has over Twitter's approach is a completely reusable rendering system. Because the same framework is executed on both server and client, rendering becomes much more manageable and debuggable: the primary distinction between the server-side and client-side renders is not the language or templating system used, but rather what data is provisioned by the server and how.

Universal JavaScript
In a universal JavaScript approach utilizing shared rendering, Node.js executes a framework (in this case Angular), which then renders an initial application state in HTML. This initial state is passed to the client side, which also loads the framework to provide whatever further client-side rendering is necessary, particularly to “rehydrate” or update the server-side render.

From a prototype written in 10 days, JavaScript has grown to be used across the stack by some of the largest websites in the world. Long gone are the days of clunky browser implementations whose APIs changed depending on whether you were using Netscape or Internet Explorer. It took JavaScript 20 years, but it is finally considered an equal partner to traditional, well-established server-side languages.

When traffic skyrockets your site shouldn't go down

This week's Grammy Awards was one of the best examples of the high-traffic event websites that Acquia is so well known for. This marks the fourth time we have hosted the Grammys' website. We saw close to 5 million unique visitors requesting nearly 20 million pages on the day of the awards and the day after. From television's Emmys to Super Bowl advertisers' sites, Acquia has earned its reputation for keeping Drupal sites humming during the most crushing peaks of traffic.

These "super spikes" aren't always fun. For the developers building these sites to the producers updating each site during the event, nothing compares to the sinking feeling when a site fails when it is needed the most. During the recent Superbowl, one half-time performer lost her website (not on Drupal), giving fans the dreaded 503 Service Unavailable error message. According to CMSWire: "Her website was down well past midnight for those who wanted to try and score tickets for her tour, announced just after her halftime show performance". Yet for Bruno Mars' fans, his Acquia-based Drupal site kept rolling even as millions flooded his site during the half-time performance.

For the Grammys, we can plan ahead and expand their infrastructure prior to the event, which is easy thanks to Acquia Cloud's elastic platform capacity. Our technical account managers and support teams work with the producers at the Grammys to make sure the right infrastructure and configuration are in place. Specifically, we simulate award-night traffic as best we can and use load testing to prepare the infrastructure accordingly. If needed, we add server capacity during the event itself. Just prior to the event, Acquia takes a 360-degree look at the site to ensure that all of the stakeholders are aligned, whether internal to Acquia or external at a partner. We have technical staff on site during the event, and remote teams that provide around-the-clock coverage before and after it.

Few people know what goes on behind the scenes during these super spikes, but our work being invisible is the biggest source of pride; a job well done means that our customers' best day didn't turn into their worst day.

In memoriam: Richard Burford

It is with great sadness that we learned last week that Richard Burford had passed away. This is a tragic loss for his family, for Acquia, for the Drupal community, and for the broader open source world. Richard was a Senior Software Engineer at Acquia for three and a half years (I still remember him interviewing with me) and was known as psynaptic in the Drupal community. Richard had been a member of the Drupal community for more than nine years. During that time, he contributed hundreds of patches across multiple projects, started a Drupal user group in his area, and helped drive the Drupal community in the UK, where he lived. Richard was a great person, a dedicated and hard-working colleague, a generous contributor to Drupal, and a friend. Richard was 36 years young, with a wife and three children. He was the sole income earner for his family, so a fundraising campaign has been started to help them during these difficult times; please consider contributing.

Turning Drupal outside-in

There has been a lot of discussion around the future of the Drupal front end both on Drupal.org (#2645250, #2645666, #2651660, #2655556) and on my blog posts about the future of decoupled Drupal, why a standard framework in core is a good idea, and the process of evaluating frameworks. These all relate to my concept of "progressive decoupling", in which some portions of the page are handed over to client-side logic after Drupal renders the initial page (not to be confused with "full decoupling").

My blog posts have drawn a variety of reactions. Members of the Drupal community, including Lewis Nyman, Théodore Biadala and Campbell Vertesi, have written blog posts with their opinions, as has Ed Faulkner of the Ember community. Last but not least, in response to my last blog post, Google changed Angular 2's license from Apache to MIT for better compatibility with Drupal. I read all the posts and comments with great interest and want to thank everyone for the feedback; the open discussion around this has been nothing short of amazing. This is exactly what I hoped for: community members from around the world brainstorming about the proposal based on their experience, because only through combined constructive criticism will we arrive at the best possible solution.

Based on the discussion, rather than selecting a client-side JavaScript framework for progressive decoupling right now, I believe the overarching question the community wants to answer first is: How do we keep Drupal relevant and widen Drupal's adoption by improving the user experience (UX)?

Improving Drupal's user experience is a topic near and dear to my heart. Drupal's user experience challenges led me to invite Mark Boulton to redesign Drupal 7, to create the Spark initiative to improve the authoring experience for Drupal 8, and to continue supporting usability-related initiatives. In fact, the impetus behind progressive decoupling and adopting a client-side framework is the need to improve Drupal's user experience.

It took me a bit longer than planned, but I wanted to take the time to address some of the concerns and share more of my thoughts about improving Drupal's UX (and JavaScript frameworks).

To iterate or to disrupt?

In his post, Lewis writes that the issues facing Drupal's UX "go far deeper than code" and that many of the biggest problems found during last year's Drupal 8 usability study would not be resolved by a JavaScript framework. This is true; the results of the Drupal 8 usability study show that Drupal can confuse users with its complex mental models and terminology, but they also show how modern features like real-time previews and in-page block insertion are increasingly assumed to be available.

To date, most of our UX improvements have come from an iterative process, one that converges on a more refined end state by removing problems from the current state. For true innovation to happen, however, we also need disruptive thinking, which is about introducing entirely new ideas: essentially removing all constraints and imagining what an ideal result would look like.

I think we need to recognize that while some of the documented usability problems coming out of the Drupal 8 usability study can be addressed by making incremental changes to Drupal's user experience (e.g. our terminology), other well-known usability problems most likely require a more disruptive approach (e.g. our complex mental model). I also believe that we must acknowledge that disruptive improvements are possibly more impactful in keeping Drupal relevant and widening Drupal's adoption.

At this point, to get ahead and lead, I believe we have to do both. We have to iterate and disrupt.

From inside-out to outside-in

Let's forget about Drupal for a second and observe the world around us. Think of all the web applications you use on a regular basis, and consider the interaction patterns you find in them. In popular applications like Slack, the user can perform any number of operations to edit preferences (such as color scheme) and modify content (such as in-place editing) without incurring a single full page refresh. Many elements of the page can be changed without the user's flow being interrupted. Another example is Trello, in which users can create new lists on the fly and then add cards to them without ever having to wait for a server response.

Contrast this with Drupal's approach, where any complex operation requires the user to have detailed prior knowledge of the system. In our current mental model, everything begins in the administration layer at the most granular level and requires an unmapped process of bottom-up assembly. A user has to create a content type, add fields, create some content, configure a view mode, build a view, and possibly make the view the front page. If each individual step is already this involved, consider how much more difficult it becomes to traverse them in the right order to finally see an end result. Drupal's current model, while very powerful, is "inside-out"; this is why it would be disruptive to move Drupal toward an "outside-in" mental model, in which I should be able to start entering content, click anything on the page, seamlessly edit any aspect of its configuration in place, and see the change take effect immediately.

Drupal 8's in-place editing feature is actually a good start at this: it enables users to edit what they see without interrupting their workflow, with faster previews and without needing to figure out what something is called before they can start editing it.

Making it real with content modeling

Back in 2007, I wrote about a database product called DabbleDB. I shared my belief that it was important to move CCK and Views into Drupal core and to learn from DabbleDB's integrated approach. DabbleDB was acquired by Twitter in 2010, but you can still find an eight-year-old demo video on YouTube. While DabbleDB's focus was different and its UX is now obsolete, there is still a lot we can learn from it today: (1) it shows a more integrated experience between content creation, content modeling, and creating views of content; (2) it takes more of an outside-in approach; (3) it uses far less intimidating terminology while offering very powerful capabilities; and (4) it makes extensive use of in-place editing. At a minimum, DabbleDB could give us some inspiration for what a better, integrated content-modeling experience could look like, with the caveat that the UX should be as effortless as possible to match modern standards.

Other new data-modeling approaches with compelling user experiences have recently entered the landscape. These include back-end-as-a-service (BEaaS) solutions such as Backand, which provides a visually clean drag-and-drop interface for data modeling and helpful features for JavaScript application developers. Our use cases are not quite the same, but Drupal would benefit immensely from a holistic experience for content modeling and content views that incorporates both the rich feature set of DabbleDB and the intuitive UX of Backand.

This sort of vision was not possible in 2007, when CCK was a contributed module for Drupal 6. It still wasn't possible in Drupal 7, when Views existed as a separate contributed module. But now that both CCK and Views are in Drupal 8 core, we can finally start to think about how to integrate the two more deeply. This kind of integration would be nontrivial but could dramatically simplify Drupal's UX. This is really exciting, because so many people are attracted to Drupal precisely because of features like CCK and Views. Taking an integrated approach like DabbleDB, paired with a seamless and easy-to-use experience like Slack, Trello and Backand, is exactly the kind of disruptive thinking we should do.

What most of the examples above have in common are in-place editing, immediate previews, no page refreshes, and non-blocking workflows. The implications for our form and render systems of providing configuration changes directly on the rendered page are significant. Achieving this requires robust state management and rendering on the client side as well as the server side. In my vision, Twig will provide the structure for the overall page and its non-interactive portions, but more JavaScript will most likely be necessary for certain parts of the page in order to achieve the UX that all users of the web have come to expect.

We shouldn't limit ourselves to this one example, as there are a multitude of Drupal interfaces that could all benefit from both big and small changes. We all want to improve Drupal's user experience — and we have to. To do so, we have to constantly iterate and disrupt. I hope we can all collaborate on figuring out what that looks like.

Special thanks to Preston So and Kevin O'Leary for contributions to this blog post and to Wim Leers for feedback.
