Algorithms are shaping what we see and think -- even what our futures hold. The order of Google's search results, the people Twitter recommends we follow, or the way Facebook filters our newsfeed can impact our perception of the world and drive our actions. But think about it: we have very little insight into how these algorithms work or what data they use. Given that algorithms guide much of our lives, how do we know that they don't have a bias, withhold information, or have bugs with negative consequences for individuals or society? This is a problem that we aren't talking about enough, and that we have to address in the next decade.
Open-sourcing software quality
In the past several weeks, Volkswagen's emissions crisis has raised new concerns around "cheating algorithms" and the overall need to validate the trustworthiness of companies. One of the many suggestions to solve this problem was to open-source the software around emissions and automobile safety testing (Dave Bollier's post about the dangers of proprietary software is particularly good). While open-sourcing alone will not fix software's accountability problems, it's certainly a good start.
As self-driving cars emerge, checks and balances on software quality will become even more important. Companies like Google and Tesla are the benchmarks of this next wave of automotive innovation, but all it will take is one safety incident to intensify the scrutiny of software-driven versus human-driven cars. The idea of "autonomous things" has ignited a huge discussion around regulating artificially intelligent algorithms. Elon Musk went so far as to state that artificial intelligence is our biggest existential threat, and donated millions to make artificial intelligence safer.
While making important algorithms available as Open Source does not guarantee security, it can only make the software more secure, not less. As Eric S. Raymond famously stated, "given enough eyeballs, all bugs are shallow". When more people look at code, mistakes are corrected faster, and software gets stronger and more secure.
Less "Secret Sauce" please
Automobiles aside, there is possibly a larger scale, hidden controversy brewing on the web. Proprietary algorithms and data are big revenue generators for companies like Facebook and Google, whose services are used by billions of internet users around the world. With that type of reach, there is big potential for manipulation -- whether intentional or not.
There are many examples of why this matters. Recently, Politico reported on Google's ability to influence presidential elections. Google can build bias into the results returned by its search engine simply by tweaking its algorithm, so certain candidates can display more prominently than others in search results. Research has shown that Google can shift voting preferences by 20 percent or more (up to 80 percent in certain demographic groups), and potentially flip the margins of close elections worldwide. The scary part is that none of these voters know what is happening.
And when Facebook's 2014 "emotional contagion" mood manipulation study was exposed, people were outraged at the thought of being tested at the mercy of a secret algorithm. Researchers manipulated the news feeds of 689,003 users to see if more negative-appearing news led to an increase in negative posts (it did). Although the experiment was found to comply with Facebook's terms of service, there was a tremendous outcry around the ethics of manipulating people's moods with an algorithm.
In theory, providing greater transparency into algorithms using an Open Source approach could avoid a crisis. However, in practice, it's not very likely this shift will happen, since these companies profit from the use of these algorithms. A middle ground might be allowing regulatory organizations to periodically check the effects of these algorithms to determine whether they're causing society harm. It's not crazy to imagine that governments will require organizations to give others access to key parts of their data and algorithms.
Ethical early days
The explosion of software and data can either have horribly negative effects, or transformative positive effects. The key to the ethical use of algorithms is providing consumers, academics, governments and other organizations access to data and source code so they can study how and why their data is used, and why it matters. This could mean that despite the huge success and impact of Open Source and Open Data, we're still in the early days. There are few things about which I'm more convinced.
Today, we're excited to announce that Acquia has closed a $55 million financing round, bringing total investment in the company to $188.6 million. Led by new investor Centerview Capital Technology, the round includes existing investors New Enterprise Associates (NEA) and Split Rock Partners.
We are in the middle of a big technological and economic shift, driven by the web, in how large organizations and industries operate. At Acquia, we have set out to build the best platform for helping organizations run their businesses online, invent new ways of doing business, and maximize their digital impact on the world. What Acquia does is not at all easy -- or cheap -- but we've made good strides towards that vision. We have become the backbone for many of the world's most influential digital experiences and continue to grow fast. In the process, we are charting new territory with a unique business model rooted in Drupal and Open Source.
A financing round like this helps us scale our global operations, sales and marketing, as well as the development of our solutions for building, delivering and optimizing digital experiences. It also gives us flexibility. I'm proud of what we have accomplished so far, and I'm excited about the big opportunity ahead of us.
Of all the trends in the web community, few are spreading more rapidly than decoupled (or "headless") content management systems (CMS). The evolution from websites to more interactive web applications and the need for multi-channel publishing suggest that moving to a decoupled architecture that leverages client-side frameworks is a logical next step for Drupal. For the purposes of this blog post, the term decoupled refers to a separation between the back end and one or more front ends instead of the traditional notion of service-oriented architecture.
Traditional ("monolithic") versus fully decoupled ("headless") architectural paradigm.
Decoupling your content management system enables front-end developers to fully control an application or site's rendered markup and user experience. Furthermore, the use of client-side frameworks helps developers give websites application-like behavior with smoother interactivity (there is never a need to hit refresh, because new data appears automatically), optimistic feedback (a response appears before the server has processed a user's query), and non-blocking user interfaces (the user can continue to interact with the application while portions are still loading).
Another advantage of a decoupled architecture is decoupled teams. Since front-end and back-end developers no longer need to understand the full breadth and depth of a monolithic architecture, they can work independently. With the proper preparation, decoupled teams can improve the velocity of a project.
Still, it's important to temper the hype around decoupled content management with balanced analysis. Before decoupling, you need to ask yourself if you're ready to do without functionality usually provided for free by the CMS, such as layout and display management, content previews, user interface (UI) localization, form display, accessibility, authentication, crucial security features such as XSS (cross-site scripting) and CSRF (cross-site request forgery) protection, and last but not least, performance. Many of these have to be rewritten from scratch, or can't be implemented at all, on the client side. For many projects, building a decoupled application or site on top of a CMS will result in a crippling loss of critical functionality or skyrocketing costs to rebuild missing features.
To be clear, I'm not arguing against decoupled architectures per se. Rather, I believe that decoupling needs to be motivated through ample research. Since it is still quite early in the evolution of this architectural paradigm, we need to be conscious about how and in what scenarios we decouple. In this blog post, we examine different decoupled architectures, discuss why a fully decoupled architecture is not ideal for all use cases, and present "progressive decoupling", a better approach for which Drupal 8 is well-equipped. Today, Drupal 8 is ready to deliver pages quickly and offers developers the flexibility to enhance user experience through decoupled interactive components.
Fully decoupled is not usually the best solution
In a fully decoupled architecture, the theme layer is often bypassed altogether and many content management capabilities are lost, though any number of clients can ingest the data.
Traditionally, CMSes have focused on making websites rather than web applications, but the line between them continues to blur. For example, let's imagine we are building a site delivering content-rich curated music reviews alongside an interaction-rich ticketing interface for live shows. In the past, this ticketing interface would have been a multi-step flow, but here we aim to keep visitors on the same page as they browse and purchase ticket options.
To write and organize reviews, beyond basic content management, we need site building tools to assemble content and lay it out on a page. On the other hand, for a pleasant ticket purchase experience, we need seamless interactivity. From the end user's perspective, this means a continuous, uninterrupted experience using the application and best of all, no need to refresh the page.
In a fully decoupled architecture, our losses from having to rebuild what Drupal gives us for free outweigh the wins from client-side frameworks. With a fully decoupled front end, we lose important aspects of the theme layer (which is tightly intertwined with Drupal APIs) such as theme hook suggestions, Twig templating, and, to varying degrees, the render pipeline. We lose content preview and nuances of creating, curating, and composing content such as layout management tools. We lose all of the advancements in accessibility and user experience that make Drupal 8 websites a great tool for both end users and site builders, like ARIA roles, improved system messages, and, most visibly, the fully integrated Drupal toolbar on non-administrative pages. Moreover, where there is elaborate interactivity, security vulnerabilities are easily introduced.
Progressive rendering with BigPipe
Fully decoupled architectures can also have a performance disadvantage. We want users to see the information they're after as soon as possible (a short time to first paint) and to be able to perform a desired action as soon as possible (a short time to interaction). On both counts, a well-architected CMS can have the advantage over a client-side framework.
Much of the content we want to send for our music website is cacheable and mostly static, such as a navigation bar, recurring footer, and other unchanging components. Under a traditional page serving model, however, operations taking longer to execute can hold up simpler parts of the page in the pipeline and significantly delay the response to the client. In our music site example, this could be blocks containing songs a user listened to most in the last week and the song currently playing in a music player.
What we really need is a means of selectively processing those expensive components later and sending the less intensive bits earlier. With BigPipe, an approach for client-side dynamic content substitution, we can render our pages progressively, where the skeleton of the page loads first, then expensive components such as "songs I listened to most in the last week" or "currently playing" are sent to the browser later and fill out placeholders. This component-driven approach gives us the best of both worlds: non-blocking user interfaces with a brisk time to first interaction and rapid piecemeal loading of complete Drupal pages that leverage the theme layer.
Currently, Drupal 8 is the only CMS with BigPipe deeply integrated across the board, for both core and contributed modules -- they merely have to provide some cacheability metadata and need no awareness of the technical minutiae. Drupal 8's Dynamic Page Cache module ensures that the page skeleton is already cached and can thus be sent immediately. For example, since menus are identical for many users, we can reuse the rendered menu for all users who can access the same menu links, and Dynamic Page Cache caches it as part of the page skeleton. On the other hand, a personalized block with a user's most played songs is far less effective to cache and is therefore rendered after the page skeleton is sent. This cacheability is built into core, and every contributed module is required to provide the necessary metadata for it.
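To make "provide some cacheability metadata" concrete, here is a minimal sketch of what this looks like in a Drupal 8 render array. The module, service, and cache tag names are hypothetical; the #lazy_builder and #cache properties are the core mechanisms.

```php
<?php

/**
 * Builds a render array for a hypothetical "most played songs" block.
 *
 * The expensive, per-user part is deferred via #lazy_builder so that
 * BigPipe can stream it after the cached page skeleton has been sent.
 */
function mymodule_most_played_block() {
  return [
    // Rendered later: BigPipe replaces this placeholder by calling the
    // (hypothetical) mymodule.song_builder service's mostPlayed() method.
    'songs' => [
      '#lazy_builder' => ['mymodule.song_builder:mostPlayed', []],
      '#create_placeholder' => TRUE,
    ],
    // Cacheability metadata: the block varies per user and is invalidated
    // whenever the play-count data it depends on changes.
    '#cache' => [
      'contexts' => ['user'],
      'tags' => ['mymodule:play_counts'],
    ],
  ];
}
```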
Client-side frameworks encounter some critical and inescapable drawbacks when conducting all rendering on the client side. On slow connections, such as on mobile devices, client-side rendering slows down performance, depletes batteries faster, and forces the user to wait. Because most developers test locally rather than in real-world network conditions on actual devices, it's easy to forget that real risks of sluggishness and unreliability, especially due to spotty connectivity, continue to confront any fully decoupled site -- Drupal or otherwise.
Progressive decoupling is the future
Under progressive decoupling, the CMS renderer still outputs the skeleton of the page.
As we've seen, fully decoupling eliminates crucial CMS functionality and BigPipe rendering. But what if we could decouple while still having both? What if we could keep things like layout management, security, and accessibility when decoupling while still enjoying all the benefits an interaction-rich single-page application would give us? More importantly, what if we could take advantage of BigPipe to leverage shortened times to interaction and lowered obstacles for heavily personalized content? The answer lies in decoupled components, or progressive decoupling. Instead of decoupling the entire page, why not decouple only portions of it, like individual blocks?
This component-driven decoupled architecture comes with big benefits over a fully decoupled architecture. Namely, traditional content management and workflow, display and layout management are still available to site builders. To return to our music website example, we can drag a block containing the songs a user listened to most in the past week into an arbitrarily located region on the page; a front-end developer can then infuse interactivity such that an account holder can play a song from that list or add it to a favorites list on the fly. Meanwhile, if a content creator wants to move the "most listened to in the past week" block from the right sidebar of the page to the left side of the page, she can do that with a few mouse clicks, rather than having to get a front-end developer involved.
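To illustrate, a progressively decoupled block can be an ordinary Drupal block that attaches a small client-side application to its server-rendered markup: site builders keep drag-and-drop placement through the theme layer, while front-end developers own the interactivity inside the block. A minimal sketch, with hypothetical module, library, and endpoint names:

```php
<?php

/**
 * Render array for a hypothetical progressively decoupled "most played" block.
 *
 * Drupal renders the block's skeleton and placement as usual; the attached
 * JavaScript library then takes over the wrapper element to add on-the-fly
 * playback and add-to-favorites interactivity.
 */
function mymodule_most_played_decoupled_block() {
  return [
    // Server-rendered markup that the client-side component mounts onto.
    '#markup' => '<div id="most-played-app">Loading your most played songs...</div>',
    '#attached' => [
      // A front-end library declared in mymodule.libraries.yml.
      'library' => ['mymodule/most-played-app'],
      // Data handed from the server to the client-side application.
      'drupalSettings' => [
        'mostPlayedApp' => ['endpoint' => '/most-played?_format=json'],
      ],
    ],
    // The block itself stays cacheable per user like any other block.
    '#cache' => ['contexts' => ['user']],
  ];
}
```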
We call this approach progressive decoupling, because you decide how much of the page, and which components, to decouple from Drupal's server-side rendering. For this reason, progressively decoupled Drupal brings decoupling to the assembled web, something I'm very passionate about, because empowering site builders and administrators to build great websites without (much) programming is important. Drupal is uniquely capable of this, precisely because it allows for varying degrees of decoupledness.
Drupal 8 is ahead of the competition and is your go-to platform for building decoupled websites. It comes with a built-in REST API for all content, a system to query data from Drupal (e.g. REST exports in the Views module), and a BigPipe implementation. It will be even better once the full range of contributed modules dealing with REST is available to developers.
With Drupal 8, an exciting spectrum opens up beyond the two extremes of full decoupling and traditional Drupal. As we've seen, fully decoupling opens the door to a Drupal implementation that provides content to a broad range of clients ("many-headed" Drupal), such as mobile applications, interactive kiosks, and television sets. But progressive decoupling goes a step further: a single Drupal site can serve fully decoupled applications while its own pages are progressively decoupled, with all the tools to assemble content still intact.
What is next for decoupling in Drupal?
In the case of many-headed Drupal, fully decoupled applications can live alongside progressively decoupled pages, whose skeletons are rendered through the CMS.
Although Drupal has made huge strides in the last few years and is ahead of the competition, there is still work to do. Traditional Drupal, fully decoupled Drupal, and progressively decoupled Drupal can all coexist. With improved decoupling tools, Drupal can be an even better hub for many heads: a collection of applications and sites linked to and supported by a single back end.
One of the most difficult issues facing front-end developers is network performance with REST. For example, a REST query fetching multiple content entities requires a round trip to the server for each individual entity, each of which may also depend on relational data such as referenced entities that require their own individual requests to the server. Then, to gather the required data, each REST query needs a corresponding back-end bootstrap, which can be quite hefty. Such a large stack of round trips and bootstraps can be prohibitively expensive.
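As a hypothetical illustration (the entity IDs are invented; the ?_format=json convention is Drupal 8's), here are the round trips needed to assemble just one review page component over core REST:

```
GET /node/42?_format=json           -> the review itself
GET /user/3?_format=json            -> its author (referenced entity)
GET /taxonomy/term/7?_format=json   -> its genre (referenced entity)
GET /node/55?_format=json           -> a related review (entity reference)
```

Four requests mean four full back-end bootstraps before the component can render.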
Beyond performance and endpoint management, developer experience also suffers under a RESTfully decoupled Drupal architecture. For each individual request, there is a single corresponding response schema, and it is immutable. To retrieve differently formatted or filtered data according to your needs, you must create a second variant of a given endpoint or provision a new endpoint altogether. Front-end developers concerned about these limitations want full control from the client side over the shape of the data that comes back, without having to depend on server-side changes.
Decoupling thus reveals the need to investigate better ways of exposing data to the client side. As our pages and applications become ever more component-driven, the complexity of the queries we must perform and our demands on their performance increase. What if we could extract only the data we need by writing queries that are efficient and performant by default? Sebastian Siemssen proposes using Facebook's GraphQL (see the demo video and project on drupal.org), because the client explicitly defines the shape of the data it needs, and consolidated queries are broken apart into smaller calls and recombined into a single response, thereby minimizing round trips.
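A rough sketch of what such a consolidated query could look like (the field names are hypothetical, not the actual schema exposed by the GraphQL module):

```graphql
{
  review: node(id: 42) {
    title
    body
    author { name }
    genre { name }
    relatedReviews(limit: 3) {
      title
      url
    }
  }
}
```

One round trip and one bootstrap return exactly the shape of data the client asked for, replacing the four separate REST requests sketched above.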
I like GraphQL's approach for both fully and progressively decoupled front-ends. It means that decoupled sites will enjoy better overall performance and give front-end developers a better experience: very few round trips to the server, no need for custom endpoints, no need for versioning, and no dependence on back-end developers. (I even wonder if GraphQL could be the basis for a future version of Views.)
In addition, work on rendering improvements such as BigPipe should continue, in order to explore the possibilities of a more progressive rendering system. That work is already in progress and aims to accelerate Drupal's perceived page loads where users consume expensive or personalized content. As for tools within the administration layer and in the contributed space, further progress in Drupal 8 is also needed on layout management tools such as Panels and on block placement, which would make decoupling at a granular level much easier.
A great deal is being done, but we could always use more help; please get in touch if you're interested in contributing code or funding our work. After all, the potential impact of progressive rendering on the future of decoupled Drupal is huge.
Decoupled content management is a rapidly evolving trend that has the potential to upend existing architectural paradigms. Nonetheless, we need to be cautious when approaching decoupled projects because of the potential losses in functionality and performance.
With Drupal 8, we can use progressive rendering to build a Drupal-driven skeleton of the page and fill out its more expensive portions as they become ready. We can then selectively delineate which parts of the page to decouple once the page load is complete. This concept of progressive decoupling offers the best of both worlds: layout management, security, and content previews are unaffected, and within page components, front-end application logic can work its magic.
Drupal 8 is now a state-of-the-art platform for building projects that lie at the nexus between traditional content-rich websites and modern interaction-rich web applications. As always, while remarkable strides have been made with Drupal 8, more work remains.
Special thanks to Preston So and Wim Leers at Acquia for contributions to this blog post and to Moshe Weitzman, Kevin O'Leary, Christian Yates, David Hwang, Michael Pezzi and Erik Baldwin for their feedback during its writing.
Drupal will soon be 15 years old, and 5 of those years will have been spent building Drupal 8 -- a third of Drupal's life. We started work on Drupal 8 early in 2011 and targeted December 1, 2012 as the original code freeze date. Now, almost three years later, we still haven't released Drupal 8. While we are close to the release of Drupal 8, I'm sure many of you are wondering why it took three years to stabilize. It is not like we didn't work hard or that we aren't smart people. Quite the contrary: the Drupal community has some of the most dedicated, hardest working and smartest people I know. Many spent evenings and weekends pushing to get Drupal 8 across the finish line. No one individual or group is to blame for the delay -- except maybe me as the project lead, for not having learned fast enough from previous release cycles.
For the past 15 years we used "trunk-based development": we built new features in incremental steps, maintained in a single branch called "trunk". We'd receive a feature's first patch, commit it to our main branch, put it behind us, and move on to the next patch. Trunk-based development requires a lot of discipline, and as a community we have mostly mastered this style of development. We invested heavily in our testing infrastructure and established a lot of processes. For all patches, we had various people reviewing the work to make sure it was solid. We also had different people figure out and review all the required follow-up work to complete a feature. The next steps were carefully planned and laid out for everyone to see in what we call "meta" issues. The idea of splitting one large feature into smaller tasks is not a bad one; it helps make things manageable for everyone involved.
Given all this rigor, how do you explain the delays? The problem is that once these features and plans meet reality, they fall apart. Some features, such as Drupal 8's configuration management system, had to be rewritten multiple times based on our experience using them, despite passing a rigorous review process. Other features, such as our work on URL routing, entity base fields and Twig templating, required much more follow-up work than was initially estimated. It turns out that breaking up a large task into smaller ones requires a lot of knowledge and vision; it's often impossible to estimate the total impact of a larger feature on other subsystems, overall performance, and so on. In other cases, the people working on a feature lacked the time or interest to do the follow-up work, leaving it to others to complete. We should realize that this is how things work in a complex world, and it is not something we are likely to change.
The real problem is the fact that our main branch isn't kept in a shippable state. A lot of patches get committed that require follow-up work, and until that follow-up work is completed, we can't release a new version of Drupal. We can only release as fast as the slowest feature, and this is the key reason why the Drupal 8 release is delayed by years.
Trunk-based development; all development is done on a single main branch and as a result we can only release as fast as the slowest feature.
We need a better way of working -- one that conforms to the realities of the world we live in -- and we need to start using it the day Drupal 8.0.0 is released. Instead of ignoring reality and killing ourselves trying to meet unrealistic release goals, we need to change the process.
Feature branch workflow
The most important thing we have to do is keep our main branch in a shippable state. In an ideal world, each commit or merge into the main branch gives birth to a release candidate -- it should be safe to release after each commit. This means we have to stop committing patches that put our main branch in an unshippable state.
While this can be achieved using a trunk-based workflow, a newer and better workflow, the "feature branch workflow", has become popular. The idea is that (1) each new feature is developed in its own branch instead of the main branch, and that (2) the main branch only contains shippable code.
Keeping the main branch shippable at all times enables us to do frequent date-based releases. If a specific feature takes too long, development can continue in its feature branch, and we can release without it. Likewise, when we are uncertain about a feature's robustness or performance, rather than delaying the release, the feature simply has to wait until the next one. The maintainers decide to merge in a feature branch based on objective and subjective criteria. Objectively, the test suite must pass, the git history must be clean, and so on. Subjectively, the feature must deliver value to users while maintaining desirable characteristics like consistency (code, API, UX) and high performance.
Feature branching; each feature is developed in a dedicated branch. A feature branch is only merged into the main branch when it is "shippable". We no longer have to wait for the slowest feature before we can create a new release.
Date-based releases are widely adopted in the Open Source community (Ubuntu, OpenStack, Android) and are healthy for Open Source projects; they reduce the time it takes for a given feature to become available to the public. This encourages contribution and is in line with the "release early, release often" mantra. We agreed on the benefits and committed to date-based releases following 8.0.0, so this simply aligns the tooling to make it happen.
Feature branch workflows have challenges. Reviewing a feature branch late in its development cycle can be difficult, because a lot of change and discussion has already been incorporated. When a feature branch does finally land in the main branch, a lot of change hits all at once, which can be psychologically uncomfortable. It can also be disruptive to the other feature branches in progress. There is no way to avoid this disruption -- someone has to integrate first. Release managers minimize the disruption by prioritizing high-priority or low-disruption feature branches over others.
Here is a workflow that could give us the best of both worlds. We create a feature branch for each major feature and only core committers can commit to feature branches. A team working on a feature would work in a sandbox or submit patches like we do today. Instead of committing patches to the main branch, core committers would commit patches to the corresponding feature branch. This ensures that we maintain our code review process with smaller changes that might not be shippable in isolation. Once we believe a feature branch to be in a shippable state, and it has received sufficient testing, we merge the feature branch into the main branch. A merge like this wouldn't require detailed code review.
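Mechanically, such a workflow could look like the sketch below; the branch name, issue number and main branch name are hypothetical:

```
# Create a feature branch off the shippable main branch.
git checkout -b feature/inline-block-editing 8.1.x

# Core committers commit reviewed patches to the feature branch, not to main.
git commit -m "Issue #1234567: Add inline editing for custom blocks."

# Once the branch is shippable and well tested, merge it back in one step,
# keeping the merge visible in history.
git checkout 8.1.x
git merge --no-ff feature/inline-block-editing
```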
Feature branches are also not a silver bullet for all the problems we encountered during the Drupal 8 release cycle. We should keep looking for improvements and build them into our workflows to make life easier for ourselves and for those we are implementing Drupal for. More on those in future posts.
I believe that the "digitalization" of the world is a "megatrend" that will continue for decades. On the one hand, organizations are shifting their businesses online, often inventing new ways to do business. On the other hand, customers are expecting a better and smarter user experience online.
This has led to two important sub-trends: (1) the number of sites an organization creates and manages is growing at a rapid clip, and (2) the underlying complexity of each website is growing along with it.
Forrester Research recently surveyed large enterprises about their website portfolios and found that, on average, they manage 268 properties across various channels. On top of that, each website is becoming more and more advanced: websites have evolved from simple HTML pages to dynamic websites to digital experience platforms that need to integrate with many other business systems. The combination of these two trends -- the increasing number of sites and the growing complexity of each site -- poses real challenges to most organizations.
At Acquia, we are seeing this explosion of websites in the enterprise every day. Many organizations have different websites for different brands and products, want different websites for each country or region they operate in, or offer separate portals for their affiliates, dealers, agents or franchises. We're also seeing organizations, small and large, operate a large number of marketing campaign websites. These organizations aren't focused on scaling back their online properties but rather how best to manage them over time.
I outlined this trend and its challenges almost five years ago (see Acquia product strategy and vision) and most of it is still relevant today, if not more relevant. In this blog post, I want to give you an update and share some lessons learned.
Most larger organizations run many different types of websites. It's not unusual for a small organization to have ten websites and for a large organization to have hundreds of websites. Some of Acquia's largest customers operate thousands of websites.
Most organizations struggle to manage their growing portfolio of digital properties. You'd be surprised how many organizations have more than 20 different content management systems in use. Often this means that different teams are responsible for them and that they are hosted on different hosting environments. It is expensive, creates unnecessary security risks, poses governance challenges, leads to brand inconsistency, makes it difficult to create a unified customer experience, and more. It costs large organizations millions of dollars a year.
Drupal's unfair advantage
When managing many sites, Drupal has an unfair advantage in that it scales easily from simple to complex. That scalability, coupled with a vast ecosystem of modules, elevates Drupal from a single-site point solution to a platform on which you can build almost any kind of site: a brand site, a corporate website, a customer support community, a commerce website, an intranet, etc. You name it.
This is in contrast to many of Drupal's competitors that are either point solutions (e.g. SharePoint is mainly used for intranets) or whose complexity and cost don't lend themselves to managing many sites (e.g. Adobe Experience Manager and Sitecore are expensive solutions for a quick marketing campaign site, while WordPress can be challenging for building complex websites). So the first thing people can do is to standardize on Drupal as a platform for all of their site needs.
By standardizing on Drupal, organizations can simplify training, reduce maintenance costs, streamline security and optimize internal resources -- all without sacrificing quality or requirements. Standardizing on Drupal certainly doesn't mean every single site needs to be on Drupal; transitioning from 20 different systems to 3 still translates into dramatic cost savings.
The Acquia advantage
Once an organization decides to standardize on Drupal, the question becomes how best to manage all these sites. In 2013 we launched Acquia Cloud Site Factory (ACSF), a scalable, enterprise-grade multi-site management platform that helps organizations easily create, deploy and govern all their sites. Today, some of Acquia's biggest customers use ACSF to manage hundreds of sites -- in fact, the average ACSF customer currently manages 170 websites within their Site Factory platform, and that number is growing rapidly.
Acquia commissioned Forrester Research to analyze the benefits to organizations that have unified their sites on a single platform. Forrester found that moving to a single platform dramatically reduced site development and support costs, conserved IT and marketing resources, and improved standardization, governance and scalability -- all while accelerating time-to-market and the delivery of better digital experiences.
One of the things we've learned is that a complete multi-site management solution needs to include advanced tools for both developers and content managers. The following image illustrates the different layers of a complete multi-site management solution:
The different layers of the Acquia Cloud Site Factory solution stack.
Let's go through these individually from the bottom up.
Consider an organization that currently has 50 websites, and plans to add 10-15 more sites every year. With ACSF these sites run on a platform that is scalable, secure and highly reliable. This infrastructure also allows hardware resources to be logically isolated based on the site's needs as well as scaled up or down to meet any ad-hoc traffic spikes. These capabilities enable organizations to simplify multi-site management efforts and eliminate operational headaches.
If this organization with 50 sites had an individual codebase for each site, that would be 50 disparate codebases to manage. With ACSF, the underlying code can be shared and managed in one central place, while the content, configuration, and creative look-and-feel are catered to each individual site's needs. ACSF also enables developers to easily add or remove features from the codebase for individual sites, and it comes with tools to automate the process of rolling out updates across all of an organization's sites.
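This shared-codebase model is rooted in Drupal's own multisite support, which ACSF builds on and automates at scale. As a minimal sketch of the underlying idea (the hostnames and directory names are hypothetical), core's sites/sites.php maps many hostnames onto one codebase:

```php
<?php

// sites/sites.php: many hostnames point at one shared Drupal codebase,
// while each site keeps its own settings, configuration, content and
// files under its sites/<directory> folder.
$sites['brand-a.example.com'] = 'brand_a';
$sites['brand-b.example.com'] = 'brand_b';
$sites['campaign.example.com'] = 'campaign_site';
```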
Organizations with many sites also need efficient ways to manage and govern them: from developer tools such as Git, Travis, or Behat that enable them to build, test and maintain sites, to tools that let non-developers quickly clone and spin up sites using site templates defined by a brand manager or a digital design team. ACSF enables customers to effortlessly manage all their sites from a single, intuitive dashboard. Developers can create groups of users as well as groups of sites, allowing certain users to manage their dedicated domain of sites without interfering with other sites. Non-technical content managers can quickly spin up new sites by cloning existing ones they have access to and updating their configuration, content and look-and-feel. These features allow organizations to launch sites at unprecedented speed, inherently improving their overall time to market.
Finally, I should mention personalization. For a few years now, we have been developing Acquia Lift. Acquia Lift builds unified customer profiles across all your websites and uses that information to deliver real-time, contextual, personalized experiences. For instance, if the organization in the example above had a website for each of its 50 different products, Acquia Lift could present relevant content to users as they browse across these different sites. This enables organizations to convert anonymous site visitors into known customers and to establish a meaningful engagement with them.
I believe that the "multi-sites era" will continue to accelerate; not only will we see more sites, but every site will become increasingly complex. Organizations need to think about how to efficiently manage their website portfolio. If you're not thinking ahead, you're falling behind.