How should you decouple Drupal?

With RESTful web services in Drupal 8 core, Drupal can function as an API-first back end serving browser applications, native applications on mobile devices, in-store displays, even in-flight entertainment systems (Lufthansa is doing this with Drupal 8!), and much more. When building a new website or web application in 2016, you may ask yourself: how should I decouple Drupal? Do I build my website with Drupal's built-in templating layer, or do I use a JavaScript framework? Do I need Node.js?
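To make "API-first" concrete, here is a minimal sketch in TypeScript of what any of those clients might do. It assumes Drupal 8's core REST module is enabled for nodes, that https://example.com is a placeholder for your site, and that the field names follow the default article content type:

```typescript
// A minimal sketch: fetch an article node as JSON from Drupal 8's core
// REST API. Assumes the REST module is enabled; example.com is a placeholder.
interface DrupalFieldItem {
  value: string;
}

interface DrupalNode {
  title: DrupalFieldItem[];
  body: DrupalFieldItem[];
}

async function fetchArticle(id: number): Promise<DrupalNode> {
  const response = await fetch(`https://example.com/node/${id}?_format=json`);
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return (await response.json()) as DrupalNode;
}

// The same endpoint can serve a browser app, a mobile app, or a kiosk.
fetchArticle(1).then((node) => console.log(node.title[0].value));
```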

There is a lot of hype around decoupled architectures, so before embarking on a project, it is important to make a balanced analysis. Your choice of architecture has implications for your budget, your team, your time to launch, the flexibility of your content creators, the ongoing maintenance of your website, and more. In this blog post, I'd like to share a flowchart that can help you decide when to use what technology.

Decoupled decision flowchart

This flowchart shows three things:

First, using coupled Drupal is a perfectly valid option for those who don't need extensive client-side rendering and state management. In this case, you would use Drupal's built-in Twig templating system rather than rely heavily on a JavaScript framework, reaching for limited JavaScript (e.g., jQuery) where necessary. Also, with BigPipe in Drupal 8.1, certain use cases that typically required asynchronous JavaScript can now be handled in PHP without slowing down the page (e.g., communication with an external web service that delays the display of user-specific, real-time data). The advantage of this approach is that content marketers are not blocked by front-end developers as they assemble their user experiences, shortening time to market and reducing investment in ongoing developer support.

Second, if you want the benefits of a JavaScript framework without completely bypassing Drupal's HTML generation and everything that comes with it, I recommend using progressively decoupled Drupal. With progressive decoupling, you start from Drupal's HTML output and then use a JavaScript framework to add client-side interactivity. One of the most visited sites in the world, The Weather Channel (100 million unique visitors per month), does precisely this with Angular 1 layered on top of Drupal 7. In this case, you can enjoy the benefits of having a "decoupled" team made up of both Drupal and JavaScript developers progressing at their own velocities. JavaScript developers can build richly interactive experiences while leaving content marketers free to assemble those experiences without needing a developer's involvement.
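As a rough sketch of what progressive decoupling looks like in code, the TypeScript snippet below uses Drupal's JavaScript behaviors API to enhance a single region of a page Drupal has already rendered; the #weather-widget element and bootstrapWeatherApp() are hypothetical stand-ins for a real framework component:

```typescript
// A sketch of progressive decoupling: Drupal renders the page, and a
// JavaScript behavior "hydrates" one region with client-side interactivity.
// The #weather-widget markup and bootstrapWeatherApp() are hypothetical.
declare const Drupal: {
  behaviors: Record<string, { attach: (context: Document | Element) => void }>;
};

function bootstrapWeatherApp(el: Element): void {
  // In a real project, this would mount an Angular/React/Vue component.
  el.addEventListener("click", () => el.classList.toggle("expanded"));
}

Drupal.behaviors.weatherWidget = {
  attach(context) {
    // Only enhance the widget; the rest of the page stays Drupal-rendered.
    const widget = context.querySelector("#weather-widget:not(.enhanced)");
    if (widget) {
      widget.classList.add("enhanced");
      bootstrapWeatherApp(widget);
    }
  },
};
```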

Third, whereas fully decoupled Drupal makes a lot of sense when building native applications, for most websites the leap to full decoupling is not strictly necessary, though a growing number of people prefer working in JavaScript these days. Advantages include a degree of independence from the underlying CMS, the ability to tap into the rich toolset around JavaScript (e.g., Babel, Webpack, etc.) and a community of JavaScript front-end professionals. But if you are using a universal JavaScript approach with Drupal, it's also important to consider the drawbacks: ask yourself whether you're ready to add more complexity to your technology stack and possibly forgo functionality provided by a more integrated content delivery system, such as layout and display management, user interface localization, and more. Losing that functionality can be costly, increase your dependence on a developer team, and hinder the end-to-end content assembly experience your marketing team expects, among other things.
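For contrast with the previous sketch, here is what the fully decoupled end of the spectrum looks like: the client owns all markup, and Drupal only supplies content over HTTP. The /api/articles endpoint stands in for a hypothetical Views REST export, and the field names are assumptions:

```typescript
// A sketch of full decoupling: the browser application owns the markup and
// fetches content from Drupal's HTTP API. The /api/articles endpoint is a
// hypothetical Views REST export; field names will vary per site.
interface ArticleTeaser {
  title: string;
  url: string;
}

async function renderArticleList(container: HTMLElement): Promise<void> {
  const response = await fetch("https://example.com/api/articles?_format=json");
  const articles = (await response.json()) as ArticleTeaser[];

  // All layout and display logic now lives in the client, not in Drupal.
  container.innerHTML = articles
    .map((a) => `<li><a href="${a.url}">${a.title}</a></li>`)
    .join("");
}

renderArticleList(document.querySelector("#articles")!);
```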

It's worth noting that over time we are likely to see better integrations between Drupal and the various JavaScript frameworks (e.g. Drupal modules that export their configuration, and SDKs for different JavaScript frameworks that use that configuration on the client side). When those integrations mature, I expect more people to move toward fully decoupled Drupal.

Fully decoupled websites built with JavaScript typically employ Node.js on the server to improve initial page-load performance, but in Drupal's case this is not necessary, as Drupal can do the server-side pre-rendering for you. Many JavaScript developers opt for Node.js for the convenience of sharing rendering code across server and client rather than for the things Node.js specifically excels at, like real-time push, large numbers of concurrent connections, and bidirectional client-server communication. In other words, most Drupal websites don't need Node.js.
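To illustrate why shared rendering is a convenience rather than a requirement, here is a sketch of a tiny Node.js server that pre-renders the same template function a browser bundle would reuse. It assumes Node 18+ (for the global fetch) and reuses the hypothetical example.com endpoint from the earlier sketch:

```typescript
// A sketch of "universal" rendering: one render function is shared by the
// server (for a fast first paint) and the client (for later interactivity).
// Assumes Node 18+ for global fetch; example.com and the JSON field shape
// mirror the earlier core REST sketch and are assumptions.
import * as http from "node:http";

interface Article {
  title: string;
  body: string;
}

// This function would also ship in the browser bundle.
function renderArticle(a: Article): string {
  return `<article><h1>${a.title}</h1><div>${a.body}</div></article>`;
}

http
  .createServer(async (req, res) => {
    const resp = await fetch("https://example.com/node/1?_format=json");
    const node = await resp.json();
    const article: Article = {
      title: node.title[0].value,
      body: node.body[0].value,
    };
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end(`<html><body>${renderArticle(article)}</body></html>`);
  })
  .listen(3000);
```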

Decoupled delivery architectures

In practice, I believe many organizations want to use all of these content delivery options. In certain cases, you want your content management system to render the experience so you can take full advantage of its features with minimal or no development effort (coupled architecture). But when you need to build a much more interactive experience, or one that integrates with unique devices (e.g., in-store touch screens), you should be able to use that same content management system's content API (decoupled architecture). Fortunately, Drupal allows you to use either. The beauty of being able to choose from the spectrum of coupled, progressively decoupled, and fully decoupled Drupal is that you can do what makes the most sense in each situation.

Special thanks to Preston So, Alex Bronstein and Wim Leers for their contributions to this blog post. We created at least 10 versions of this flowchart before settling on this one.

Breakfast at Acquia

Last week, Acquia's executive team prepared breakfast for the 300 or so employees in our Boston office. It was a lot of fun, and it continued a tradition we started a few years ago to thank everyone for their efforts.

Can we save the open web?

The web felt very different fifteen years ago, when I founded Drupal. Just 7 percent of the world's population had internet access, there were only around 20 million websites, and Google was a small, private company. Facebook, Twitter, and other household tech names were years away from being founded. In those early days, the web felt like a free space that belonged to everyone. No one company dominated as an access point or controlled what users saw. This is what I call the "open web".

But the internet has changed drastically over the last decade. It's become a more closed web. Rather than a decentralized and open landscape, many people today primarily interact with a handful of large platform companies online, such as Google or Facebook. To many users, Facebook and Google aren't part of the internet -- they are the internet.

I worry that some of these platforms will make us lose the original integrity and freedom of the open web. While the closed web has succeeded in ease-of-use and reach, it raises a lot of questions about how much control individuals have over their own experiences. And, as people generate data from more and more devices and interactions, this lack of control could get very personal, very quickly, without anyone's consent. So I've thought through a few potential ideas to bring back the good things about the open web. These ideas are by no means comprehensive; I believe we need to try a variety of approaches before we find one that really works.

Double-edged sword

It's undeniable that companies like Google and Facebook have made the web much easier to use and helped bring billions of people online. They've provided a forum for people to connect and share information, and they've had a huge impact on human rights and civil liberties. These are all things for which we should applaud them.

But their scale is also concerning. For example, the Chinese messaging service WeChat (which is somewhat like Twitter) recently used its popularity to limit market choice. The company banned access to Uber to drive more business to its own ride-hailing service. Meanwhile, Facebook engineered limited web access in developing economies with its Free Basics service. Touted in India and other emerging markets as a solution to help underserved citizens come online, Free Basics gives its users access to only a handful of pre-approved websites (including, of course, Facebook). India recently banned Free Basics and similar services, ruling that these restricted web offerings violated the essential principles of net neutrality.

Algorithmic oversight

Beyond market control, the algorithms powering these platforms can wade into murky waters. According to a recent study from the American Institute for Behavioral Research and Technology, information displayed in Google's search results could shift the voting preferences of undecided voters by 20 percent or more -- all without their knowledge. Considering how narrow the margins of many elections can be, this effect is significant. In many ways, Google controls what information people see, and any bias, intentional or not, has a potential impact on society.

In the future, data and algorithms will power even more consequential decisions. For example, code will decide whether a self-driving car stops for an oncoming bus or runs into pedestrians.

It's possible that we're reaching the point where we need oversight for consumer-facing algorithms. Perhaps it's time to consider creating an oversight committee. Similar to how the FDA monitors the quality and safety of food and drugs, this regulatory body could audit algorithms. Recently, I spoke at Harvard's Berkman Center for Internet and Society, where attendees also suggested a global "Consumer Reports"-style organization that would "review" the results of different companies' algorithms, giving consumers more choice and transparency.

Gaining control of our personal data

But algorithmic oversight is not enough. Billions of people are using free and convenient services, often without a clear understanding of how and where their data is being used. Many times, this data is shared and exchanged between services, to the point where people no longer know what's safe. It's an unfair trade-off.

I believe that consumers should have some level of control over how their data is shared with external sites and services; in fact, they should be able to opt into nearly everything they share if they want to. If a consumer wants to share her shoe size and color preferences with every shopping website, her experience with the web could become more personal, with her consent. Imagine a way to manage how our information is used across the entire web, not just within a single platform. That sort of power in the hands of the people could help the open web gain an edge on the hyper-personalized, easy-to-use "closed" web.

For the consumer-based, opt-in data-sharing system described above to work, the entire web needs to unite around a series of common standards. This idea is daunting in and of itself, but information-sharing standards like OAuth have shown us that it can be done. People want the web to be convenient and easy to use. Website creators want to be discovered. We need to find a way to match user preferences and desires with information throughout the open web. I believe that collaboration and open standards could be a great way to decentralize power and control on the web.

Why does this matter?

The web will only expand into more aspects of our lives. It will continue to change every industry, every company, and every life on the planet. The web we build today will be the foundation for generations to come. It's crucial we get this right. Do we want the experiences of the next billion web users to be defined by open values of transparency and choice, or by the siloed and opaque convenience of the walled-garden giants that dominate today?

I believe we can strike a balance between companies' ability to grow, profit, and innovate and consumers' privacy, freedom, and choice. Thinking critically and acting now will help ensure the web's open future for everyone.

(I originally wrote this blog post as a guest article for The Daily Dot. I also gave a talk yesterday at SXSW on a similar topic, and will share the slides along with a recording of my talk when it becomes available in a couple of weeks.)

A "MAP" for accelerating Drupal 8 adoption

Contributed modules in Drupal deliver the functionality and innovation proprietary content management solutions simply can't match. With every new version of Drupal comes the need to quickly move modules forward from the previous version. For users of Drupal, it's crucial to know they can depend on the availability of modules when considering a new Drupal 8 project or migrating from a previous version.

I'm pleased that many agencies and customers who use Drupal are donating time and attention to maintaining Drupal's module repository and ensuring their contributed modules are upgraded. I believe it's the responsibility of Drupal companies to give back to the community.

I'm proud that Acquia leads by example: we created the Drupal 8 Module Acceleration Program, or MAP. Led by Acquia's John Kennedy, MAP brings financial, technical, and project management assistance to Drupal module maintainers. Acquia kicked off MAP in mid-October, and to date we have helped complete production-ready versions of 34 modules. And these are not just any modules; we've focused on the modules that provide critical pieces of functionality used by most Drupal sites.

When MAP was formed, Acquia allocated $500,000 to fund non-Acquia maintainers in the community. In addition, we have so far invested more than 2,500 hours of our own developers' time to support the effort (the equivalent of three full-time developers).

What is impressive to me about MAP is both the focus on mission-critical modules that benefit a huge number of users, as well as the number of community members and agencies involved. John's team is leading a coalition of the best and brightest minds in the Drupal community to address the single biggest obstacle holding Drupal 8 adoption back.

Drupal 8 has already made a significant impact; in the 90 days following the release of Drupal 8.0.0, adoption outpaced Drupal 7's over the equivalent period by more than 200 percent. And as more modules get ported, I expect Drupal 8 adoption to accelerate even more.

White House deepens its commitment to Open Source

Yesterday, the White House announced a plan to deepen its commitment to open source. Under this plan, new custom-developed government software must be made available for use by other federal agencies, and a portion of all projects must be released as open source and shared with the public. This plan will make it much easier to share best practices, collaborate, and save money across different government departments.

However, there are still some questions to address. In good open source style, the White House is inviting developers to comment on the policy. As the Drupal community, we should take advantage of this opportunity and comment on GitHub within the 30-day feedback window.

The White House has a long open source history with Drupal. In October 2009, WhiteHouse.gov relaunched on Drupal and shortly thereafter started to actively contribute back to Drupal -- both firsts in the history of the White House. The White House's contributions to Drupal include the "We the People" petitions platform, which has been adopted by other governments and organizations around the world.

This week's policy is big news because it will push open source deeper into the roots of the U.S. government, requiring more government agencies to become active open source contributors. We'll be able to solve problems faster and, together, build better software for citizens across the U.S.

I'm excited to see how this plays out in the coming months!
