
From B2C to B2One

The Industrial Revolution, which started in the middle of the 18th century, transformed the world. It marked the start of a major turning point in history that would influence almost every aspect of daily life. The Industrial Revolution shifted production from handmade to machine-made goods and increased productivity and capacity. Technological change also enabled the growth of capitalism. Factory owners and others who controlled the means of production rapidly became very rich, and working conditions in the factories were often less than satisfactory. It wasn't until the 20th century, 150 years after its beginning, that the Industrial Revolution ended, having created a much higher standard of living than had ever been known in the pre-industrial world. Consumers benefited from falling prices for clothing and household goods. The impact on natural resources, public health, energy, medicine, housing and sanitation meant that chronic hunger, famines and malnutrition started to disappear, and life expectancy started to increase dramatically.

An undesired side effect of the Industrial Revolution was that machines started to take the place of the artisans who had produced hand-made items. Before the Industrial Revolution, custom-made goods and services were the norm. The one-on-one relationships that guilds had with their customers sadly got lost in an era of mass production. But what excites me about the world today is that we're on the verge of being able to bring back one-on-one relationships with our customers, while maintaining increased productivity and capacity.

As the Big Reverse of the Web plays out and information and services are starting to come to us, we'll see the rise of a new trend I call "B2One". We're starting to hear a lot of buzz around personalization, as evidenced by companies like The New York Times making delivery of personalized content a core part of their business strategy. Another recent example is Facebook testing shopping concepts, letting users browse a personal feed of clothing and other items based on their "likes". I'd imagine these types of feeds could get smarter and smarter, refining themselves over time as a user browses or buys. Or just yesterday, Facebook launched Notify, an iOS app that pushes you personalized notifications from up to 70 sites.

These recent examples are early signs of how we're evolving from B2C to B2One (or from B2B2C to B2B2One), a world where all companies have a one-on-one relationship with their customers and personalized experiences will become the norm. Advances in technology allow us to get back what we lost hundreds of years ago in the Industrial Revolution, which in turn enables the world to innovate on business models. The B2One paradigm will be a very dramatic shift that disrupts existing business models (advertising, search engines, online and offline retailers) and every single industry.

For example, an athletic apparel company such as Nike could work sensor technology into its shoes, telling you once you've run a certain number of miles and worn them out. Nike would have enough of a one-on-one relationship with you to push an alert to your smartphone or smartwatch with a "buy" button for new shoes, before you even knew you needed them. This interaction is a win-win for both you and Nike; you don't need to re-enter your sizing and other information into a website, and Nike gets a sale directly from you, disrupting both the traditional and online retail supply chain (basically, this is bad news for intermediaries like Amazon, Zappos, clothing malls, Google, etc.).

I believe strongly in the need for data-driven personalization to create smarter, proactive digital experiences that bring back one-on-one relationships between producers and consumers. We have to dramatically improve how we deliver these personal one-on-one interactions. That means getting better at understanding the user's journey and context, matching the right information or service to the user, and making the technology disappear into the background.

The coming era of data and software transparency

Algorithms are shaping what we see and think -- even what our futures hold. The order of Google's search results, the people Twitter recommends us to follow, or the way Facebook filters our newsfeed can impact our perception of the world and drive our actions. But think about it: we have very little insight into how these algorithms work or what data is used. Given that algorithms guide much of our lives, how do we know that they don't have a bias, withhold information, or have bugs with negative consequences on individuals or society? This is a problem that we aren't talking about enough, and that we have to address in the next decade.

Open Sourcing software quality

In the past several weeks, Volkswagen's emissions crisis has raised new concerns around "cheating algorithms" and the overall need to validate the trustworthiness of companies. One of the many suggestions to solve this problem was to open-source the software around emissions and automobile safety testing (Dave Bollier's post about the dangers of proprietary software is particularly good). While open-sourcing alone will not fix software's accountability problems, it's certainly a good start.

As self-driving cars emerge, checks and balances on software quality will become even more important. Companies like Google and Tesla are the benchmarks of this next wave of automotive innovation, but all it will take is one safety incident to intensify the pressure on software versus human-driven cars. The idea of "autonomous things" has ignited a huge discussion around regulating artificially intelligent algorithms. Elon Musk went as far as stating that artificial intelligence is our biggest existential threat and donated millions to make artificial intelligence safer.

While making important algorithms available as Open Source does not guarantee security, it can only make the software more secure, not less. As Eric S. Raymond famously stated, "given enough eyeballs, all bugs are shallow". When more people look at code, mistakes are corrected faster, and software gets stronger and more secure.

Less "Secret Sauce" please

Automobiles aside, there is possibly a larger scale, hidden controversy brewing on the web. Proprietary algorithms and data are big revenue generators for companies like Facebook and Google, whose services are used by billions of internet users around the world. With that type of reach, there is big potential for manipulation -- whether intentional or not.

There are many examples as to why. Recently Politico reported on Google's ability to influence presidential elections. Google can build bias into the results returned by its search engine, simply by tweaking its algorithm. As a result, certain candidates can appear more prominently than others in search results. Research has shown that Google can shift voting preferences by 20 percent or more (up to 80 percent in certain groups), and potentially flip the margins of close elections worldwide. The scary part is that none of these voters know what is happening.

And, when Facebook's 2014 "emotional contagion" mood manipulation study was exposed, people were outraged at the thought of being tested at the mercy of a secret algorithm. Researchers manipulated the news feeds of 689,003 users to see if more negative-appearing news led to an increase in negative posts (it did). Although the experiment was found to comply with the terms of service of Facebook's user agreement, there was a tremendous outcry around the ethics of manipulating people's moods with an algorithm.

In theory, providing greater transparency into algorithms using an Open Source approach could avoid a crisis. However, in practice, it's not very likely this shift will happen, since these companies profit from the use of these algorithms. A middle ground might be allowing regulatory organizations to periodically check the effects of these algorithms to determine whether they're causing society harm. It's not crazy to imagine that governments will require organizations to give others access to key parts of their data and algorithms.

Ethical early days

The explosion of software and data can either have horribly negative effects, or transformative positive effects. The key to the ethical use of algorithms is providing consumers, academics, governments and other organizations access to data and source code so they can study how and why their data is used, and why it matters. This could mean that despite the huge success and impact of Open Source and Open Data, we're still in the early days. There are few things about which I'm more convinced.

The future of decoupled Drupal

Of all the trends in the web community, few are spreading more rapidly than decoupled (or "headless") content management systems (CMS). The evolution from websites to more interactive web applications and the need for multi-channel publishing suggest that moving to a decoupled architecture that leverages client-side frameworks is a logical next step for Drupal. For the purposes of this blog post, the term decoupled refers to a separation between the back end and one or more front ends instead of the traditional notion of service-oriented architecture.

Traditional vs decoupled

Traditional ("monolithic") versus fully decoupled ("headless") architectural paradigm.

Decoupling your content management system enables front-end developers to fully control an application or site's rendered markup and user experience. Furthermore, the use of client-side frameworks helps developers give websites application-like behavior with smoother interactivity (there is never a need to hit refresh, because new data appears automatically), optimistic feedback (a response appears before the server has processed a user's query), and non-blocking user interfaces (the user can continue to interact with the application while portions are still loading).
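
To make ideas like optimistic feedback concrete, here is a minimal TypeScript sketch of a front-end behavior that updates the interface immediately and rolls the change back if the server rejects it. The /api/favorites endpoint and the button markup are hypothetical, not part of any specific CMS.

```typescript
// Minimal sketch of optimistic feedback: the UI updates immediately,
// and rolls back if the (hypothetical) /api/favorites endpoint rejects the request.
async function toggleFavorite(button: HTMLButtonElement, songId: string): Promise<void> {
  const wasFavorite = button.classList.contains('is-favorite');

  // Optimistic update: reflect the new state before the server responds.
  button.classList.toggle('is-favorite', !wasFavorite);

  try {
    const response = await fetch('/api/favorites', {
      method: wasFavorite ? 'DELETE' : 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ songId }),
    });
    if (!response.ok) {
      throw new Error(`Server rejected the change: ${response.status}`);
    }
  } catch (error) {
    // Roll back the optimistic update on failure.
    button.classList.toggle('is-favorite', wasFavorite);
    console.error(error);
  }
}

// Wire up every favorite button on the page without blocking other interactions.
document.querySelectorAll<HTMLButtonElement>('button[data-song-id]').forEach((button) => {
  button.addEventListener('click', () => toggleFavorite(button, button.dataset.songId ?? ''));
});
```

The same pattern applies to any interaction where waiting for the server response would make the interface feel sluggish.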

Another advantage of a decoupled architecture is decoupled teams. Since front-end and back-end developers no longer need to understand the full breadth and depth of a monolithic architecture, they can work independently. With the proper preparation, decoupled teams can improve the velocity of a project.

Still, it's important to temper the hype around decoupled content management with balanced analysis. Before decoupling, you need to ask yourself if you're ready to do without functionality usually provided for free by the CMS, such as layout and display management, content previews, user interface (UI) localization, form display, accessibility, authentication, crucial security features such as XSS (cross-site scripting) and CSRF (cross-site request forgery) protection, and last but not least, performance. Many of these have to be rewritten from scratch, or can't be implemented at all, on the client-side. For many projects, building a decoupled application or site on top of a CMS will result in a crippling loss of critical functionality or skyrocketing costs to rebuild missing features.

To be clear, I'm not arguing against decoupled architectures per se. Rather, I believe that decoupling needs to be motivated through ample research. Since it is still quite early in the evolution of this architectural paradigm, we need to be conscious about how and in what scenarios we decouple. In this blog post, we examine different decoupled architectures, discuss why a fully decoupled architecture is not ideal for all use cases, and present "progressive decoupling", a better approach for which Drupal 8 is well-equipped. Today, Drupal 8 is ready to deliver pages quickly and offers developers the flexibility to enhance user experience through decoupled interactive components.

Fully decoupled is not usually the best solution

Traditional vs fully decoupled

In a fully decoupled architecture, the theme layer is often ignored altogether and many content management capabilities are lost, though many data-ingesting clients become possible.

Traditionally, CMSes have focused on making websites rather than web applications, but the line between them continues to blur. For example, let's imagine we are building a site delivering content-rich curated music reviews alongside an interaction-rich ticketing interface for live shows. In the past, this ticketing interface would have been a multi-step flow, but here we aim to keep visitors on the same page as they browse and purchase ticket options.

To write and organize reviews, beyond basic content management, we need site building tools to assemble content and lay it out on a page. On the other hand, for a pleasant ticket purchase experience, we need seamless interactivity. From the end user's perspective, this means a continuous, uninterrupted experience using the application and, best of all, no need to refresh the page.

Using Drupal to build a content-rich site delivering music reviews is very easy thanks to its content types and extensive editorial features. We can list, categorize, and write reviews by employing the user interfaces provided by Drupal. But because of Drupal's emphasis on server-side rendering rather than client-side rendering, Drupal alone does not yet satisfy our envisioned user experience. Meanwhile, off-the-shelf JavaScript MVC frameworks and native application frameworks are ill-suited for managing large amounts of content, due to the need to rebuild various content management and site building tools, and they can pay a performance penalty.

In a fully decoupled architecture, our losses from having to rebuild what Drupal gives us for free outweigh the wins from client-side frameworks. With a fully decoupled front end, we lose important aspects of the theme layer (which is tightly intertwined with Drupal APIs) such as theme hook suggestions, Twig templating, and, to varying degrees, the render pipeline. We lose content preview and nuances of creating, curating, and composing content such as layout management tools. We lose all of the advancements in accessibility and user experience that make Drupal 8 websites a great tool for both end users and site builders, like ARIA roles, improved system messages, and, most visibly, the fully integrated Drupal toolbar on non-administrative pages. Moreover, where there is elaborate interactivity, security vulnerabilities are easily introduced.

Progressive rendering with BigPipe

Fully decoupled architectures can also have a performance disadvantage. We want users to see the information they want as soon as possible (time to first paint) and be able to perform a desired action as soon as possible (time to interaction). A well-architected CMS will have the advantage over a client-side framework.

Much of the content we want to send for our music website is cacheable and mostly static, such as a navigation bar, recurring footer, and other unchanging components. Under a traditional page serving model, however, operations taking longer to execute can hold up simpler parts of the page in the pipeline and significantly delay the response to the client. In our music site example, this could be blocks containing songs a user listened to most in the last week and the song currently playing in a music player.

What we really need is a means of selectively processing those expensive components later and sending the less intensive bits earlier. With BigPipe, an approach for client-side dynamic content substitution, we can render our pages progressively, where the skeleton of the page loads first, then expensive components such as "songs I listened to most in the last week" or "currently playing" are sent to the browser later and fill out placeholders. This component-driven approach gives us the best of both worlds: non-blocking user interfaces with a brisk time to first interaction and rapid piecemeal loading of complete Drupal pages that leverage the theme layer.
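
The following is a conceptual Node.js/TypeScript sketch of the streaming technique BigPipe is based on, not Drupal's actual implementation: the server flushes the cacheable page skeleton right away, then streams small scripts that fill in the expensive placeholders as each one finishes rendering. The component names and delays are made up.

```typescript
import { createServer } from 'node:http';

// Simulated expensive components (e.g. personalized blocks) with made-up delays.
const expensiveComponents: Record<string, () => Promise<string>> = {
  'most-played': async () => {
    await new Promise((resolve) => setTimeout(resolve, 800));
    return '<ul><li>Song A</li><li>Song B</li></ul>';
  },
  'now-playing': async () => {
    await new Promise((resolve) => setTimeout(resolve, 1500));
    return '<p>Currently playing: Song C</p>';
  },
};

createServer(async (_req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/html; charset=utf-8' });

  // 1. Send the cacheable skeleton immediately, with placeholders for the slow parts.
  res.write(`<!doctype html><html><body>
    <nav>Site navigation</nav>
    <div id="most-played">Loading your most played songs…</div>
    <div id="now-playing">Loading the music player…</div>
    <footer>Footer</footer>`);

  // 2. As each expensive component finishes, stream a script that fills its placeholder.
  await Promise.all(
    Object.entries(expensiveComponents).map(async ([id, render]) => {
      const html = await render();
      res.write(
        `<script>document.getElementById(${JSON.stringify(id)}).innerHTML = ${JSON.stringify(html)};</script>`,
      );
    }),
  );

  res.end('</body></html>');
}).listen(8080);
```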

Currently, Drupal 8 is the only CMS with BigPipe deeply integrated across the board for both core and contributed modules—they merely have to provide some cacheability metadata and need no awareness of the technical minutiae. Drupal 8's Dynamic Page Cache module ensures that the page skeleton is already cached and can thus be sent immediately. For example, as menus are identical for many users, we can reuse the menu for those users that can access the same menu links, so Dynamic Page Cache is able to cache those as part of the page skeleton. On the other hand, a personalized block with a user's most played songs is less effective to cache and will therefore be rendered after the page skeleton is sent. This cacheability is built into core, and every contributed module is required to provide the necessary metadata for it.

For a fully decoupled site to load more rapidly than a highly personalized Drupal 8 site using BigPipe, you would need to reconstruct a great deal of Drupal's smart caching logic, store the cache on the client, and continuously synchronize the client cache with server data. In addition, parsing and executing JavaScript to generate HTML takes longer than simply downloading HTML, especially on mobile devices. As a result, you will need extremely well-tuned JavaScript to overcome this challenge.

Client-side frameworks encounter some critical and inescapable drawbacks of conducting all rendering on the client side. On slow connections such as on mobile devices, client-side rendering slows down performance, depletes batteries faster, and forces the user to wait. Because most developers test locally and not in real-world network conditions on actual devices, it's easy to forget that real risks of sluggishness and unreliability, especially due to spotty connectivity, continue to confront any fully decoupled site — Drupal or otherwise.

Progressive decoupling is the future

Progressive decoupling

Under progressive decoupling, the CMS renderer still outputs the skeleton of the page.

As we've seen, fully decoupling eliminates crucial CMS functionality and BigPipe rendering. But what if we could decouple while still having both? What if we could keep things like layout management, security, and accessibility when decoupling while still enjoying all the benefits an interaction-rich single-page application would give us? More importantly, what if we could take advantage of BigPipe to leverage shortened times to interaction and lowered obstacles for heavily personalized content? The answer lies in decoupled components, or progressive decoupling. Instead of decoupling the entire page, why not decouple only portions of it, like individual blocks?

This component-driven decoupled architecture comes with big benefits over a fully decoupled architecture. Namely, traditional content management and workflow, display and layout management are still available to site builders. To return to our music website example, we can drag a block containing the songs a user listened to most in the past week into an arbitrarily located region on the page; a front-end developer can then infuse interactivity such that an account holder can play a song from that list or add it to a favorites list on the fly. Meanwhile, if a content creator wants to move the "most listened to in the past week" block from the right sidebar of the page to the left side of the page, she can do that with a few mouse clicks, rather than having to get a front-end developer involved.
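
As a rough illustration of what progressively decoupling a single block might look like, the TypeScript sketch below attaches client-side behavior to a block that the CMS has already rendered and positioned. The data attribute, the markup, and the /api/most-played endpoint are hypothetical.

```typescript
// Hypothetical shape of the data behind the "most listened to" block.
interface Song {
  id: string;
  title: string;
}

// Enhance the server-rendered block in place: the CMS still controls where the
// block lives on the page; the client only adds interactivity inside it.
async function enhanceMostPlayedBlock(block: HTMLElement): Promise<void> {
  const response = await fetch('/api/most-played'); // hypothetical endpoint
  if (!response.ok) {
    return; // Leave the server-rendered markup untouched on failure.
  }
  const songs: Song[] = await response.json();

  const list = document.createElement('ul');
  for (const song of songs) {
    const item = document.createElement('li');
    const playButton = document.createElement('button');
    playButton.textContent = `Play ${song.title}`;
    playButton.addEventListener('click', () => {
      // A real implementation would hand off to the site's audio player here.
      console.log(`Playing ${song.title} (${song.id})`);
    });
    item.append(playButton);
    list.append(item);
  }
  block.replaceChildren(list);
}

// The block's wrapper markup (and its position on the page) comes from the CMS.
const block = document.querySelector<HTMLElement>('[data-component="most-played"]');
if (block) {
  void enhanceMostPlayedBlock(block);
}
```

The key point is that the client only enhances the inside of the block; where the block appears on the page remains a site building decision.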

We call this approach progressive decoupling, because you decide how much of the page, and which components, to decouple from Drupal's server-side rendering. For this reason, progressively decoupled Drupal brings decoupling to the assembled web, something I'm very passionate about, because empowering site builders and administrators to build great websites without (much) programming is important. Drupal is uniquely capable of this, precisely because it allows for varying degrees of decoupledness.

Drupal 8 is ahead of the competition and is your go-to platform for building decoupled websites. It comes with a built-in REST API for all content, a system to query data from Drupal (e.g. REST exports in the Views module), and a BigPipe implementation. It will be even better once the full range of contributed modules dealing with REST is available to developers.
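
For instance, assuming a site has the core REST module enabled and configured to expose nodes as JSON, a client could read a single piece of content with one request along these lines (the host name and field names are placeholders):

```typescript
// Sketch of reading one node through Drupal 8's core REST API.
// Assumes the REST module exposes GET on nodes with the "json" format;
// the host name and field names are placeholders.
async function fetchArticle(nodeId: number): Promise<void> {
  const response = await fetch(`https://example.com/node/${nodeId}?_format=json`);
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  const node = await response.json();
  // Drupal returns fields as arrays of value objects, e.g. title[0].value.
  console.log(node.title?.[0]?.value);
}

void fetchArticle(1);
```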

With Drupal 8, an exciting spectrum opens up beyond just the two extremes of fully decoupled and traditional Drupal. As we've seen, fully decoupling opens the door to a Drupal implementation providing content to a broad range of clients ("many-headed" Drupal), such as mobile applications, interactive kiosks, and television sets. But progressive decoupling goes a step further, since fully decoupled applications and progressively decoupled pages can coexist on a single Drupal site, with all the tools to assemble content still intact.

What is next for decoupling in Drupal?

Progressively decoupled Drupal with many heads

In the case of many-headed Drupal, fully decoupled applications can live alongside progressively decoupled pages, whose skeletons are rendered through the CMS.

Although Drupal has made huge strides in the last few years and is ahead of the competition, there is still work to do. Traditional Drupal, fully decoupled Drupal, and progressively decoupled Drupal can all coexist. With improved decoupling tools, Drupal can be an even better hub for many heads: a collection of applications and sites linked and supported by a single backend.

One of the most difficult issues facing front-end developers is network performance with REST. For example, a REST query fetching multiple content entities requires a round trip to the server for each individual entity, each of which may also depend on relational data such as referenced entities that require their own individual requests to the server. Then, to gather the required data, each REST query needs a corresponding back-end bootstrap, which can be quite hefty. Such a large stack of round trips and bootstraps can be prohibitively expensive.
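
The sketch below illustrates that round-trip problem: to render one article with its tags, the client first fetches the node and then issues one additional request per referenced term. The paths follow Drupal 8's core REST conventions, but the field names are assumptions about this particular site.

```typescript
// Illustration of the N+1 round-trip problem with a plain REST API.
// Each referenced entity (here: taxonomy terms) costs another request,
// and each request triggers another back-end bootstrap.
async function fetchJson(path: string): Promise<any> {
  const response = await fetch(`https://example.com${path}?_format=json`);
  if (!response.ok) {
    throw new Error(`Request for ${path} failed: ${response.status}`);
  }
  return response.json();
}

async function fetchArticleWithTags(nodeId: number): Promise<void> {
  // Round trip 1: the article itself.
  const node = await fetchJson(`/node/${nodeId}`);

  // Round trips 2..N+1: one request per referenced taxonomy term.
  // "field_tags" is an assumed field name on this hypothetical site.
  const tagReferences: Array<{ target_id: number }> = node.field_tags ?? [];
  const tags = await Promise.all(
    tagReferences.map((ref) => fetchJson(`/taxonomy/term/${ref.target_id}`)),
  );

  console.log(node.title?.[0]?.value, tags.map((tag) => tag.name?.[0]?.value));
}

void fetchArticleWithTags(1);
```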

Currently, the only way to solve this problem is to create custom API endpoints (e.g. in Views) that comprehensively provide all the data needed by the client in a single response to minimize round trips. However, managing endpoints for each respective client can quickly spiral out of control, given the range of different information each client might demand and the number of deployed versions in the wild. On the other hand, if you are relying on a single endpoint, updating it requires modifying all the JavaScript that relies on that endpoint and ingests its response. These endpoint management issues can force the front-end developer to depend on work the back-end developer must complete, which creates bottlenecks in decoupled teams.

Beyond performance and endpoint management, developer experience also suffers under a RESTfully decoupled Drupal architecture. For each individual request, there is a single corresponding schema for the response that is immutable. In order to retrieve differently formatted or filtered data according to your needs, you must create a second variant of a given endpoint or provision a new endpoint altogether. Front-end developers concerned about performance limitations desire full control from the client side over what schema comes back and want to avoid working with the server side.

Decoupling thus reveals the need to investigate better ways of exposing data to the client side. As our pages and applications become ever more component-driven, the complexity of the queries we must perform and our demands on their performance increase. What if we could extract only the data we need by writing queries that are efficient and performant by default? Sebastian Siemssen proposes using Facebook's GraphQL (see his demo video and project), because the client explicitly defines what schema to return, and consolidated queries break apart into smaller calls and recombine into the response, thereby minimizing round trips.
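
For comparison, a GraphQL client asks for the article and its related data in a single consolidated request. The sketch below posts one query to a /graphql endpoint; the endpoint path, query shape, and field names depend entirely on how the site's GraphQL schema is configured, so treat them as placeholders.

```typescript
// One consolidated query instead of a handful of REST round trips.
// The /graphql endpoint and the field names are placeholders that depend
// on how the site's GraphQL schema is configured.
const query = `
  query ArticleWithTags($id: String!) {
    nodeById(id: $id) {
      title
      tags {
        name
      }
    }
  }
`;

async function fetchArticleViaGraphQl(id: string): Promise<void> {
  const response = await fetch('https://example.com/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, variables: { id } }),
  });
  if (!response.ok) {
    throw new Error(`GraphQL request failed: ${response.status}`);
  }
  const { data } = await response.json();
  console.log(data.nodeById.title, data.nodeById.tags.map((tag: { name: string }) => tag.name));
}

void fetchArticleViaGraphQl('1');
```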

I like GraphQL's approach for both fully and progressively decoupled front-ends. It means that decoupled sites will enjoy better overall performance and give front-end developers a better experience: very few round trips to the server, no need for custom endpoints, no need for versioning, and no dependence on back-end developers. (I even wonder if GraphQL could be the basis for a future version of Views.)

In addition, work on rendering improvements such as BigPipe should continue in order to explore what a more progressive rendering system makes possible. That work is currently in progress and aims to accelerate Drupal's perceived page loads when users consume expensive or personalized content. Within the administration layer and in the contributed space, Drupal 8 also needs further progress on layout management tools such as Panels and on block placement, which would make decoupling at a granular level much easier.

A great deal is being done, but we could always use more help; please get in touch if you're interested in contributing code or funding our work. After all, the potential impact of progressive rendering on the future of decoupled Drupal is huge.


Decoupled content management is a rapidly evolving trend that has the potential to upend existing architectural paradigms. Nonetheless, we need to be cautious when approaching decoupled projects, given the potential loss of functionality and performance.

With Drupal 8, we can use progressive rendering to build a Drupal-driven skeleton of the page and fill in the more expensive portions as they become available. We can then selectively delineate which parts of the page to decouple once the page load is complete. This concept of progressive decoupling offers the best of both worlds. Layout management, security, and content previews are unaffected, and within page components, front-end application logic can work its magic.

Drupal 8 is now a state-of-the-art platform for building projects that lie at the nexus between traditional content-rich websites and modern interaction-rich web applications. As always, while remarkable strides have been made with Drupal 8, more work remains.

Special thanks to Preston So and Wim Leers at Acquia for contributions to this blog post and to Moshe Weitzman, Kevin O'Leary, Christian Yates, David Hwang, Michael Pezzi and Erik Baldwin for their feedback during its writing.

Digital Distributors: The Supermarkets of the Web

It's hard to ignore the strong force of digital distributors on the open web. In a previous post, I focused on three different scenarios for the future of the open web. In two of the three scenarios, the digital distributors are the primary way for people to discover news, giving them an extraordinary amount of control.

When a small number of intermediary distributors get between producers and consumers, history shows us we have to worry -- and brace ourselves for big industry-wide changes. A similar supply chain disruption has happened in the food industry, where distributors (supermarkets) changed the balance of power and control for producers (farmers). There are a few key lessons we can take from the food industry's history and apply them to the world of the web.

Historical lessons from the food industry

When food producers and farmers learned that selling directly to consumers had limited reach, trading posts emerged and began selling farmers' products more than their own. As early as the 14th century, these trading posts turned into general stores, and in the 18th century, supermarkets and large-scale grocery stores continued that trend in retailing. The adoption of supermarkets was customer-driven; customers benefited from being able to buy all their food and household products in a single store. Today, it is certainly hard to imagine going to a dozen different speciality stores to buy everything for your day-to-day needs.

But as a result, very few farmers sell straight to consumers, and a relatively small number of supermarkets stand between thousands of suppliers and millions of consumers. The supermarkets have most of the power; they control how products are displayed and which ones gain prominent placement on shelves. They have been accused of squeezing prices to farmers, forcing many out of business. Supermarkets can also sell products at lower prices than traditional corner shops, leading to smaller grocery stores closing.

In the web's case, digital distributors are the grocery stores and farmers are content producers. Just like supermarket consumers, web users are flocking to the convenience and relevance of digital distributors.

Farmers supermarkets

This graph illustrates the market power of supermarkets / digital distributors. A handful of supermarkets / digital distributors stand between thousands of farmers / publishers and millions of consumers.

Control of experience

Web developers and designers devote a tremendous amount of time and attention to creating beautiful experiences on their websites. One of my favorites was the Sochi Olympics interactive stories that The New York Times created.

That type of experience is currently lost on digital distributors where everything looks the same. Much like a food distributor's products are branded on the shelves of a supermarket to stand out, publishers need clear ways to distinguish their brands on the “shelves” of various digital distribution platforms.

While standards like RSS and Atom have been extended to include more functionality (e.g. Flipboard feeds), there is still a long way to go to support rich, interactive experiences within digital distributors. As an industry, we have to develop and deploy richer standards to "transport" our brand identity and user experience from the producer to digital distributor. I suspect that when Apple unveils its Apple News Format, it will come with more advanced features, forcing others like Flipboard to follow suit.

Control of access / competition

In China, WeChat is a digital distributor of different services with more than 500 million active users. Recently, WeChat blocked the use of Uber. The blocking came shortly after Tencent, WeChat's parent company, invested in rival domestic car services provider Didi Kuaidi. The shutdown of Uber is an example of how digital distributors can use their market power to favor their own businesses and undermine competitors. In the food industry, supermarkets have realized that it is more profitable to exclude independent brands in favor of launching their own grocery brands.

Control of curation

Digital distributors have an enormous amount of power through their ability to manipulate curation algorithms. This type of control is not only bad for the content producers, but could be bad for society as a whole. A recent study found that the effects of search engine manipulation could pose a threat to democracy. In fact, Google rankings may have been a deciding factor in the 2014 elections in India, one of the largest elections in the world. To quote the study's author: "search rankings could boost the proportion of people favoring any candidate by more than 20 percent -- more than 60 percent in some demographic groups". By manipulating its search results, Google could decide the U.S. presidential election. Given that Google keeps its curation algorithms secret, we don't know if we are being manipulated or not.

Control of monetization

Finally, a last major fear is control of monetization. Like supermarkets, digital distributors have a lot of control over pricing. If individual web publishers maintain high quality standards and keep consumers demanding their work, they retain some leverage in business model negotiations with digital distributors. For example, perhaps The New York Times could offer a limited-run exclusive within a distribution platform like Facebook's Instant Articles or Flipboard.

I'm also interested to see how things shape up with Apple's content blocking in iOS 9. I believe that, as an unexpected consequence, content blocking will place even more power and control over monetization into the hands of digital distributors, as publishers become less capable of monetizing their own sites.

Digital Distributors vs Open Web: who will win?

I've spent a fair amount of time thinking about how to win back the Open Web, but in the case of digital distributors (e.g. closed aggregators like Facebook, Google, Apple, Amazon and Flipboard), superior, push-based user experiences have won the hearts and minds of end users and enabled distributors to attract and retain audiences in ways that individual publishers on the Open Web currently can't.

In today's world, there is a clear role for both digital distributors and Open Web publishers. Each needs the other to thrive. The Open Web provides distributors with content to aggregate, curate and deliver to their users, and distributors provide the Open Web with reach in return. The user benefits from this symbiosis, because it's easier to discover relevant content.

As I see it, there are two important observations. First, digital distributors have out-innovated the Open Web in terms of conveniently delivering relevant content; the usability gap between these closed distributors and the Open Web is wide, and won't be overcome without a new disruptive technology. Second, digital distributors haven't yet provided a profit motive strong enough for individual publishers to divest their websites and fully embrace distributors.

However, it raises some interesting questions for the future of the web. What does the rise of digital distributors mean for the Open Web? If distributors become successful in enabling publishers to monetize their content, is there a point at which distributors create enough value for publishers to stop having their own websites? If distributors are capturing market share because of a superior user experience, is there a future technology that could disrupt them? And the ultimate question: who will win, digital distributors or the Open Web?

I see three distinct scenarios that could play out over the next few years, which I'll explore in this post.

Digital Distributors vs Open Web: who will win?

This image summarizes different scenarios for the future of the web. Each scenario has a label in the top-left corner, which I'll refer to throughout this blog post.

Scenario 1: Digital distributors provide commercial value to publishers (A1 → A3/B3)

Digital distributors provide publishers reach, but without tangible commercial benefits, they risk being perceived as diluting or even destroying value for publishers rather than adding it. Right now, digital distributors are in early, experimental phases of enabling publishers to monetize their content. Facebook's Instant Articles currently lets publishers retain 100 percent of revenue from the ad inventory they sell. Flipboard, in efforts to stave off rivals like Apple News, has experimented with everything from publisher paywalls to native advertising as revenue models. Expect much more experimentation with different monetization models and dealmaking between the publishers and digital distributors.

If digital distributors like Facebook succeed in delivering substantial commercial value to publishers, publishers may fully embrace the distributor model and even divest their own websites' front ends, especially if they could make the vast majority of their revenue from Facebook rather than from their own websites. I'd be interested to see someone model out a business case for that tipping point. I can imagine a future upstart media company either divesting its website completely or starting from scratch to serve content directly to distributors (and being profitable in the process). This would be unfortunate news for the Open Web and would mean that content management systems need to focus primarily on multi-channel publishing, and less on their own presentation layer.

As we have seen in other industries, decoupling production from consumption in the supply chain can redefine industries. We also know that it introduces major risks, as it puts a lot of power and control in the hands of a few.

Scenario 2: The Open Web's disruptive innovation happens (A1 → C1/C2)

For the Open Web to win, the next disruptive innovation must focus on narrowing the usability gap with distributors. I've written about a concept called a Personal Information Broker (PIM) in a past post, which could serve as a way to responsibly use customer data to engineer similarly personal, contextually relevant experiences on the Open Web. Think of this as unbundling Facebook: you separate the personal information management system from the content aggregation and curation platform, and make it available for everyone on the web to use. First, it would help us close the user experience gap, because you could broker your personal information with every website you visit, and every website could instantly provide you a contextual experience regardless of prior knowledge about you. Second, it would enable the creation of more distributors. I like the idea of a PIM making the era of a handful of closed distributors as short as possible. In fact, it's hard to imagine the future of the web without some sort of PIM. In a future post, I'll explore in more detail why the web needs a PIM, and what it may look like.

Scenario 3: Coexistence (A1 → A2/B1/B2)

Finally, in a third combined scenario, neither publishers nor distributors dominate, and both continue to coexist. The Open Web serves as both a content hub for distributors, and successfully uses contextualization to improve the user experience on individual websites.


Right now, since distributors are out-innovating on relevance and discovery, publishers are somewhat at their mercy for traffic. However, whether there is a profit motive significant enough for publishers to divest their websites completely remains to be seen. I can imagine that we'll continue in a coexistence phase for some time, since it's unreasonable to expect either the Open Web or digital distributors to fail. If we work on the next disruptive technology for the Open Web, it's possible that we can shift the pendulum in favor of “open” and narrow the usability gap that exists today. If I were to guess, I'd say that we'll see a move from A1 to B2 in the next 5 years, followed by a move from B2 to C2 over the next 5 to 10 years. Time will tell!

