The Browser Engine That Could

How a browser engine that started as the lesser-known option inside an obscure browser came to take hold of the entire browser market.


You may sometimes come across the term browser diversity. It refers to an equilibrium on the web platform where there are enough browser implementations in the wild to encourage innovation and competition between browsers. The alternative is a browser monoculture, whereby one browser, or browser implementation, controls the market and therefore controls the direction of the web. When one promotes browser diversity, they are often advocating on behalf of the independent web standards process maintained by the W3C, which only works when no one browser can dictate the features included on the web platform.

The web community has more than enough reason to be fearful of a lack of browser diversity. After Internet Explorer captured over 90% of the browser market share in the early 2000s, it took Microsoft the better part of a decade to release a new browser. During that time, the web ground to a halt as security issues crept in. The web was worse for it, which is why we often want to see browsers competing rather than monopolizing the web.

But there's a flip side to this. Multiple browser makers mean web developers need to ensure compatibility across all of them. Inconsistencies between browsers, however slight, can make it more difficult to develop for the web.

This was a concern of Tim Berners-Lee, the web’s creator. Even in the early 90’s, when the web was very young, dozens of browsers began pouring out of every corner of the world as software developers experimented with the web platform. Berners-Lee was worried that too many browsers would make it difficult to test sites and reach consensus about how HTML should be parsed and delivered to users.

Writing in 1992, Berners-Lee expressed this concern publicly on the hypertext mailing list.

Noone is going to support 8 parsers on 12 platforms. I am therefore a little worried about the proliferation of implementations. (I know, I’m rather pleased about it too! 🙂 I would like to see one or maybe two definitive libraries around (two, so to test the first one for self-consistent bugs), but not four. I feel that if there are too many, then there will be cases of little things which work on one but not on the others, because there is not enough support effort for each.

Berners-Lee was mindful of the fact that several implementations might require too much effort on the part of the web developer, so developers might be tempted to avoid the problem altogether by only testing on the most popular browser (little did he know that this would be a recurring problem in web development). So even without a monopoly, too many browsers might crowd the market and create a de facto winner that everyone standardizes against, while ignoring all others. Ensuring consistency across browser implementations was the impetus behind creating the W3C and promoting a set of common standards in web technologies.

Berners-Lee's concerns were not unfounded. There have been times when browsers went their own way, requiring web designers to jump through hoops to get things to look good everywhere. In recent years, a more mature standards process has largely mitigated the problem by outlining clear rules for how features are added to the web while ensuring that there is balance in the decision making.

Fundamental to a diverse group of browsers is the development of an equally varied set of browser engines. A browser engine is the bit of code inside of a browser that takes the code that you write and renders it out on the page. At a more technical level, it parses HTML and CSS to handle the layout and rendering of webpages. Discussion of a browser engine typically, though not always, also includes the JavaScript engine it is paired with, which handles the interactive pieces.
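To make that pipeline a little more concrete, here is a heavily simplified sketch in TypeScript of the stages an engine walks through. Every name below is invented for illustration; real engines like WebKit or Gecko implement each of these stages with enormous sophistication.

```typescript
// Purely illustrative: parse markup, apply styles, then lay out and paint.
// The types and functions are hypothetical stand-ins, not any real engine's API.

interface DomNode {
  tag: string;
  children: DomNode[];
  styles: Record<string, string>;
}

// 1. Parse markup into a tree of nodes (the DOM).
function parseHtml(html: string): DomNode {
  // Real engines use streaming, error-tolerant parsers; this is a placeholder.
  return { tag: "html", children: [], styles: {} };
}

// 2. Resolve stylesheet rules against each node (the CSS cascade, roughly).
function applyStyles(root: DomNode, css: string): DomNode {
  root.styles["display"] = "block"; // stand-in for cascade and inheritance
  return root;
}

// 3. Compute geometry (layout) and 4. paint the result to the screen.
function layoutAndPaint(root: DomNode): void {
  console.log(`painting <${root.tag}> with ${root.children.length} children`);
}

const dom = applyStyles(parseHtml("<html><body>Hello</body></html>"), "body { margin: 0 }");
layoutAndPaint(dom);
```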

Before web standards were adopted by browser makers, in the midst of the browser wars, there was a decade-long period of dominance by Microsoft's browser, Internet Explorer. Internet Explorer used Microsoft's proprietary browser engine, Trident. It shipped for free and was the default on every Windows PC. Propelled by the sheer velocity of this distribution, Trident was the most widely used browser engine for many years.

However, a few smaller open source browsers rose quickly through the ranks. The most popular among these was Mozilla's Firefox (based on Netscape), which used an engine called Gecko. Trailing behind a bit was Opera, with their browser engine Presto. And beloved by a small community, but a blip in the browser market rankings, was the Konqueror browser, which used an engine known as KHTML.

With the exception of Presto, the source code for these browser engines was open source, meaning it was accessible to anyone and could be used for any project. This promoted collaboration and healthy competition between these new independent browsers, which only served to foster the web standards process managed by the W3C. And because everything was open source, a number of smaller, niche browsers entered the market, built on top of the same browser engines.

Apple is notably absent from that list. In 2003, it seemed as if they had missed the web platform entirely. They had tried, and subsequently failed, to make a browser of their own called Cyberdog. Without something native to their operating system, Apple computers shipped with Internet Explorer, tethering them to one of their biggest rivals in Microsoft.

But a rumor had begun circulating that Apple was working on a new browser. A few Mozilla veterans had joined the company, contributing to the open source Mac version of the Firefox browser on behalf of Apple. And the tools and implementations had gotten a lot better since their first failed experiment. There were a few seasoned options available as open source technology.

The default assumption was that Apple would go with Mozilla's Gecko as their browser engine of choice. They already had the staff and experience with it, and it was the engine behind a large majority of browser projects out there, including the Mac browser Camino, developed by an independent team outside of Apple. A few outliers thought Apple might even go with Presto instead, in some sort of licensing deal.

Steve Jobs closed his Macworld 2003 keynote with the announcement everyone was expecting. Apple was making a browser of their own. It was called Safari. And much to everyone's surprise, it would use the browser engine from Konqueror. Not Gecko. Not Presto. KHTML. The news was big, and it would change the trajectory of the web for the next two decades.

The Konqueror browser, running inside of the KDE desktop environment

The Konqueror browser is one of several applications that are part of the KDE desktop environment for Linux computers. KDE is not an operating system, strictly speaking, but a suite of common software made to look and feel the same. The acronym originally stood for Kool Desktop Environment, before being shortened to just KDE. Konqueror is one program inside of KDE. And KHTML is the engine that runs Konqueror. Like Linux itself, the entirety of KDE is open source, including its browser, and its community has been committed to the principles of free software since its founding.

Which is why it’s important that the announcement from Apple included this slide:

Slide from Steve Jobs' Macworld presentation which reads "Open Source, We Think it's great"

Apple's message about using KHTML included a commitment to open source. The Safari team promised to contribute their changes back to the KHTML project whenever possible. While adapting the engine, Apple broke it up into two pieces: WebCore for rendering and layout, and JavaScriptCore for JavaScript. Both components were made available as open source projects, though the Safari bug tracker and elements of the browser engine code remained closed. Still, this level of transparency was surprising coming from a company best known for its guarded secrets.

The first public version of Safari was released alongside the announcement in January of 2003. Not long after, it shipped as the default browser on all Macs. Within a year or two, Safari plateaued at about two to three percent of the browser market. That wasn't enough to make it a dominant voice, but it gave the team at Apple a seat at the table.

Even with an explicit promise to the open source community, the Safari team found it difficult at first to port the changes they were making back to the KHTML project. Some of the code was specific to the Mac operating system. Other parts were just incompatible with the existing codebase. This led to a bit of friction for a couple of years between the team at Apple and contributors to KHTML.

Eventually, a middle ground was struck. In June of 2005, Apple merged JavaScriptCore, WebCore, and the remaining browser engine components together and released them as a single open source project with a public bug tracker and an open call for contributors. This renewed commitment sent a signal to the KHTML community that they were willing to collaborate, while still technically keeping the code independent.

They called the project WebKit.

That timing is crucial. The web, as a platform, was accelerating through its Web 2.0 phase. Web 2.0 was little more than a fancy marketing term, but it did attempt to give a name to a rise in advanced applications that existed entirely on the web. This included two of the earliest web applications from Google, Google Maps and Gmail. Within a year, Google would add YouTube to their list as well.

In the mid-2000s, some engineers at Google took a hard look at browsers and found that they were unable to keep up with the demands of complex applications. This was especially true of older, unmaintained browsers like Internet Explorer (YouTube, incidentally, is thought to have had a hand in moving users away from IE6). But Google's primary focus was on speed. The co-founders of the company once expressed a desire to create a web as fast as flipping through a magazine. That was the goalpost, and Google felt that even modern browsers like Firefox and Safari weren't reaching it quickly enough.

By 2006, they began drawing up plans for a browser of their own. They wanted a browser that was faster than anything on the market. After experimenting with several browser engines, the Google team took a look at the now open source WebKit project. Its code was concise and readable, and its footprint remained fairly low compared to engines with more history behind them, like Gecko or Trident (which wasn't an option anyway because it was closed source). But most importantly, it was fast. As a starting point, it was faster than anything else out there.

If the Safari team hadn't decided to open source their browser engine at precisely that time, this article might have gone in a different direction. But they did, so Google took a look and decided to use it. Over the next couple of years they began working on a new implementation of WebKit that was even faster.

Their first step was to remove JavaScriptCore and replace it with a homegrown JavaScript engine called V8, named for the high performance piston engine. As the name might suggest, V8 was quick: ten times more performant than JavaScriptCore out of the gate. It gained that speed by compiling JavaScript directly to machine code rather than interpreting it, and it was designed to be modular and independent from the rest of the browser engine. Incidentally, within a couple of years, this modularity would make possible the Node.js runtime, which took the V8 engine, ripped it out of the browser, and moved it onto the server.
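To give a sense of what V8 on the server looks like in practice, here is a tiny, self-contained Node.js script, written in TypeScript, that serves a page using the same JavaScript engine that ships inside Chrome. It uses only the standard http module; nothing about it is specific to this story.

```typescript
// Node.js embeds V8, so this runs on the same engine that powers Chrome.
import { createServer } from "node:http";

const server = createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end(`Handled by V8 on the server: ${req.url}\n`);
});

server.listen(3000, () => {
  // process.versions.v8 reports the version of the embedded V8 engine.
  console.log(`Listening on http://localhost:3000 (V8 ${process.versions.v8})`);
});
```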

The second thing Google did was add a multiprocess architecture to their browser. There's a lot that can be said about multiprocessing, but it's best explained by thinking about browser tabs. Before, if one tab crashed, your whole browser would crash. Google instead ran each tab in its own process, so that a crash took down only that one tab. This would become a major selling point for the browser once it was released. It was also an important decision because it was a major break from the way WebKit had been doing things. That decision would soon come back around.
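As a rough sketch of the idea, and only the idea, here is what per-tab isolation looks like using Node.js child processes as a stand-in. The renderer.js script is hypothetical; Chrome's real architecture involves a browser process, sandboxed renderer processes, and IPC between them.

```typescript
// A toy model of per-tab process isolation. Each "tab" is its own OS process,
// so a crash in one renderer leaves the parent and the other tabs running.
import { fork } from "node:child_process";

function openTab(url: string) {
  // renderer.js is a hypothetical script that would load and render one page.
  const renderer = fork("./renderer.js", [url]);
  renderer.on("exit", (code) => {
    // Only this tab is affected; a real browser would show a "sad tab" page here.
    console.log(`Tab for ${url} exited with code ${code}`);
  });
  return renderer;
}

const tabs = ["https://example.com", "https://example.org"].map(openTab);
// Killing one renderer (e.g. tabs[0].kill()) leaves the rest untouched.
```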

In September of 2008, Google officially announced their browser project to the public. They paired the announcement with a nerdy webcomic drawn by Scott McCloud that broke down all of the technical details. The browser was able to run websites faster than anything on the market, and thanks to its multiprocess architecture, it could keep webpages active even if another site crashed somewhere else. This was at a time when browsers were still inundated with toolbars and add-ons and big software suites that included email. Google’s name for the project even played on this concept. In the browser world, chrome refers to all the extra stuff around a webpage, the address bar and toolbars and file menus. Google removed a lot of that cruft. So they called the browser Google Chrome, hoping that a bet on simplicity and performance would pay off.

A few panels from the Google Chrome launch webcomic which describes the browser as both simple and open source
A snippet of the webcomic that started it all

Their locked-down search algorithm aside, Google has contributed to and created plenty of open source projects over the years. Taking their cue from Apple, they went even further with Chrome. On the same day that Google Chrome was announced, Google made the entire browser available as an open source project called Chromium. This wasn't just a few extensions to WebKit. It wasn't just a rendering engine. It was the whole thing: WebKit, the V8 JavaScript engine, and all of the code for the browser itself. Without too much effort, any developer could package up Chromium and release their own browser. Since it was released, many have.

2008 was a big year for the web. The same month Chrome was released, Google previewed their Android mobile operating system, itself pieced together from a number of other open source projects. It shipped with a WebKit-based mobile browser (Chrome itself would come to Android a few years later). That was a little over a year after Apple officially debuted the iPhone, which shipped with a modified version of Safari. By the time Steve Jobs got around to killing Flash for good and making the native web platform a priority, these devices would be everywhere.

So now you had every smart mobile device running Safari or Chrome. Safari alone doubled its market share in just a few years, based entirely on iPhone adoption. Chrome rose at an even faster rate. Using Chromium, you had an ever widening set of independent browsers and browser projects. Chromium's V8 engine became the foundation for the Node.js runtime, and Chromium itself became the foundation for the desktop application framework Electron.
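Electron is a good illustration of how little glue it takes to build on top of Chromium. The snippet below is close to Electron's canonical hello-world: a few lines of script that open a desktop window backed by a full Chromium renderer (the URL is just an example).

```typescript
// A minimal Electron app: Chromium handles rendering, Node.js handles the rest.
import { app, BrowserWindow } from "electron";

app.whenReady().then(() => {
  const win = new BrowserWindow({ width: 800, height: 600 });
  // Everything loaded here is rendered by the same engine as Chrome.
  win.loadURL("https://example.com");
});

// Quit once every window has been closed.
app.on("window-all-closed", () => {
  app.quit();
});
```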

And nearly all of those projects were powered by WebKit. WebKit had gone from capturing a small slice of the browser space to pretty much dominating it in the span of a few years.

This stasis would not hold for long. In April of 2013, Google announced, seemingly out of nowhere, that they would be forking the WebKit project into a new engine called Blink. Google Chrome, and the Chromium project, would be moving to this new engine.

There were several reasons for the move, but the biggest one comes right back to that multiprocessing decision. For years, Google had been extending WebKit to add multiprocessing to their browsers, a defining feature and performance boost for Chromium browsers. In 2013, the WebKit project was due to release a new version of the engine that implemented multiprocessing as well. The problem was, its implementation would be entirely different and not compatible with Chrome's. They were already using a different JavaScript engine, and this new change would see Chromium diverge too far from the original project. The core of Blink would still be, and still is, WebKit. But it would take a new path and be tracked as a new project.

At this point, the web had become more advanced than anyone had ever imagined. The problem that Berners-Lee first described, too many independent developers creating browser engines of their own, was no longer a problem. It takes teams of people to build and maintain a browser engine. With the web platform expanding and web standards solidifying, the complexity of rendering a website has grown exponentially. A browser needs to run an untold number of sites, coded in an untold number of ways, and more or less match every other browser.

After that, browser engines started disappearing.

Two months prior to the Blink announcement, Opera had declared that they would be dropping their own engine, Presto, and moving their browser to Chromium. When Google announced the fork, Opera followed. In May of 2013 they released their first Blink-based browser.

Microsoft, meanwhile, held out for years with their own closed source engine. Trident was retooled into an engine made for Microsoft Edge called EdgeHTML, first released in 2015. But maintaining the resources to develop an independent engine proved to be too difficult in an already crowded browser market. In late 2018, they announced that they too were moving to Blink. That browser has also since launched.

Descendants of KHTML, browsers backed by engines in the Blink / WebKit family, account for over 90% of browser use. From practical oblivion to 90% market share in fifteen years. An enormous achievement. And one that is not without consequences.

Blink and WebKit are two different engines, and their source code has diverged quite a bit. But they take the same approach to rendering webpages, and much of the code at their core remains the same. Which means that, at this point, there are effectively only two browser engine groups left: the Blink / WebKit family and Firefox's Gecko. Even if you were to count Blink and WebKit separately, that still only leaves us with three.

Which brings us right back around to browser diversity. Innovation in browsers has not gone away, and thankfully, every major browser is committed to the web standards process. However, if the Blink and WebKit communities decided that they wanted to take the web in a certain direction, they absolutely have the power to do so. That's true right now, even with Gecko still holding on.

Jeffrey Zeldman summed this up well in the wake of Microsoft’s decision to move to Blink:

When one company decides which ideas are worth supporting and which aren’t, which access problems matter and which don’t, it stifles innovation, crushes competition, and opens the door to excluding people from digital experiences.

From a historical perspective, WebKit's trajectory is nothing short of a marvel to behold, and it got there by being open and community driven. But it is equally important to remember what the web stands to lose when so much of it rests on a single family of engines.
