What even is Vanilla JS these days?

Originally it was non-jQuery, right? Or did it come before that? Anyway the term definitely got popular when people were eschewing jQuery in the quest for lighter pages at the expense of a few browser bugs.

Zero dependency libraries became a thing, which meant each library had its own tiny abstraction of DOM selection utilities and polyfills for Array methods. None of which could be extracted into shared dependencies and cached separately, of course, but, hey, they were 10x lighter and 20x faster than jQuery so what was there to worry about?

Then jQuery fell way off the radar thanks to a surge in browsers becoming evergreen, our eagerness to drop older, painful browsers, and the proliferation of sites like youmightnotneedjquery.com. With jQuery out of the equation, these days vanilla is much more likely to refer to the absence of frameworks like Ember, Angular, React or Backbone, of which only the last requires jQuery.

In Paul Lewis’ recent article on the performance comparisons of frameworks he highlighted a vanillajs implementation of TodoMVC which was 16kb: significantly smaller than the other frameworks but certainly not tiny. Primarily it’s smaller because it can focus on this one specific purpose, allowing for greater optimisation but making it somewhat throwaway after the life of the project. And, of course, it still has to reimplement a bunch of the same features that are present in other libs.

What makes this vanilla? Sure, it doesn’t have any dependencies but what makes up that 16kb?

It includes tiny abstractions for querySelectorAll and DOM events, which you’d absolutely expect as developer conveniences. It includes its own implementation of a micro templating library, which focuses only on the todo template but still covers non-trivial HTML escaping.
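As a sketch of what such a micro templating library involves (the `escapeHtml` and `template` names here are illustrative, not the actual TodoMVC source):

```javascript
// Minimal sketch of a micro templating helper with HTML escaping.
// Names and API are illustrative, not the TodoMVC vanillajs code.

// Escape the characters that matter when interpolating into HTML.
function escapeHtml(str) {
  return String(str)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#039;');
}

// Replace {{key}} placeholders with escaped values from `data`.
function template(html, data) {
  return html.replace(/\{\{(\w+)\}\}/g, function (match, key) {
    return key in data ? escapeHtml(data[key]) : '';
  });
}

// Example:
template('<li>{{title}}</li>', { title: 'Write <b>docs</b>' });
// → '<li>Write &lt;b&gt;docs&lt;/b&gt;</li>'
```

Even this toy version has to get the escaping right, which is exactly the kind of code you would rather not maintain as a per-project snowflake.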

It registers model.js, controller.js and view.js (it is TodoMVC, after all) but it’s starting to look suspiciously like my-framework.js rather than vanillajs. In fact it’s really looking like a less-tested and less-jQuery snowflake version of Backbone. This isn’t hating on the particular example on the TodoMVC page; it just gets you wondering where the line is drawn between vanillajs and… flavoured js?

Is it vanillajs if you don’t include a framework but you do include lots of tiny libs as dependencies? Is it vanillajs if it’s written in TypeScript? Is it wise to care about any of this? Is it a worthy goal?

Whilst your own implementation of these features can be smaller and more focused, certainly more performant, is chasing this title going to create a less buggy application? Will it be safer and more secure than a framework which has the benefit of a huge user base and collective intelligence? Are you going to have to reimplement features every time requirements change and could this lack of manoeuvrability end up causing costs to you and your user greater than the extra perf differences?

Anyone I’ve worked with will surely attest that I’m not a fan of debating terminology. It gets in the way of doing actual work and, truthfully, everyone else is better at it than me anyway. Vanillajs is a term that is gathering so much momentum, though, and conflating so many ambiguous combinations, that it either needs to be defined or it will descend into utter meaninglessness. And if it’s the latter we’ll need to go back and update thousands of blog posts and slide decks, so maybe it’s best to just nip it in the bud now.

Practical Questions around Web Components

Components allow us to build simpler applications by composing independent parts into a greater system. Good component design means you should only need to focus on one component at a time, holding less information in your active mind in order to complete a task. This gives you more mental capacity to focus on the task at hand and ultimately make better decisions.

The future standard of componentisation is Web Components and I’ve been trying to get my head around them for a while: how they marry up with current challenges and innovations in component design and where they fit going forward. Even though I have far from a clear picture on where Web Components are going (Wilson Page wrote a great article on where they are right now) I think there’s a benefit to getting these questions down to find out how others see these problems, whether they’re problems at all, or if this is even just the tip of the iceberg.

I’m not going to go into the basics of what makes up the Web Component spec, there are plenty of such guides already written. Instead what I’d like to cover is some very practical questions:

  1. Do Custom Element names bring any benefit?
  2. Can we progressively enhance Web Components?
  3. Can we render Web Components on the client and server?
  4. Do Web Components force us to assume dependencies?
  5. Is there an implicit dependency on http/2?
  6. Are third party components ever going to be that usable?
  7. What is going to be the tipping point for their adoption?

Do Custom Element names bring any benefit?

Current HTML elements only have semantic meaning because they exist within a dictionary. We know what h1 and cite mean because they have been deliberated and standardised. Browsers, search engines and screen readers also understand their meaning and use them to determine the intended use of a selection of elements. Custom elements belong to no such dictionary. Not yet at least. <my-tabs> has as much relevance as <foo-bar> unless renderers decide to infer meaning.

Alex Russell spoke on the ShopTalk show about how, in the long term, the use of Custom Elements across the web will allow us to know exactly which components developers need and use this to prioritise the development of native components. There is merit to that approach for sure, though perhaps the bootstrap components list could take us 90% of the way there immediately.

Some people have noted how much cleaner the DOM can look with Custom Elements and it’s hard to disagree. I mean, <my-tabs> is definitely nicer to look at than <div class='my-tabs'> but it’s hardly a game-changer and it doesn’t really provide anything compelling to be excited about. A simple looking DOM is no guarantee of a simpler system, it may only be pushing the complexity into the shadows.

Can we progressively enhance Web Components?

It’s pretty clear that Web Components have a strong reliance on JavaScript. With JS turned off the user is left with only the information we placed in the DOM to begin with. Layout-wise, any unknown elements will resolve to the equivalent of a span.

Extension of current elements is on the cards and would allow you to construct your elements with all the benefits of current native ones. In this post by Jeremy Keith he discusses how we can use the is attribute syntax to extend standard elements. It looks like this approach may not be agreed upon and so might not find its way into v1, though there are still ways of extending native elements when declaring your components.

We are still able to have markup within our “light DOM” and this is ultimately where I think you will have the opportunity to make the component meaningful prior to the Web Component being initialised. There are small wins you can make just by prioritising content over attributes, both of which are entry points to the Web Component. e.g.

<user-greeting>Ian</user-greeting>

How you declare your data still matters though. If the shadow DOM for the component looks something like:

<template id="user-greeting">
    <style>
        div { padding: 30px; color: red; }
    </style>
    <div>Hello <content></content>, welcome back!</div>
</template>

Non-enhanced user experiences will only see the word “Ian” instead of “Hello Ian, welcome back!”. This by itself makes no sense because it lacks all context. Determining whether or not we can build resilient and progressively-enhanced applications with Web Components may fall entirely on the thought process and design around how we describe each interface.

What about styling and the Flash of Unstyled Content?

Let’s assume the user has JavaScript but is on a slow connection and we’re loading the component asynchronously. By adding styles to the component we have introduced a FOUC before it is loaded. How do we plan for this? How will the component actually render before it’s initialised?

One approach would be to serve a very small subset of styles upfront to ensure that all the raw DOM looks “ok” before being enhanced. This wouldn’t prevent a FOUC and reflow but it would make it slightly nicer.

Another approach would be to bundle all the CSS from within the components and serve it in the head. Crucially, this relies on the page having to know exactly which components it will show at any one time: something we would love to avoid in an ideal world where we can arbitrarily load components into any page.

This is an issue that isn’t just limited to unsupported browsers. Currently, if you’re trying out HTML imports, you’ll find that you can simply place them higher up in the DOM and avoid any FOUC. This has the downside of being a blocking request and a bottleneck for the page. If HTML imports don’t become part of the standard, or if we just want to load them asynchronously for performance, we will need to anticipate and prepare for the FOUC.

Can we render Web Components on the client and server?

Just as we want to make our component imports non-blocking to speed up page-load we may also want to render the initial page on the server. This approach was originally taken by Airbnb, popularised by techniques with React and more recently adopted by Ember in FastBoot. Can we do the same with Web Components? Is this even a goal for the Web Components project?

Obviously we can send <user-greeting> over the wire, it’s just a string. But we can’t really call that a Web Component as such, at least not a fully formed one. What about the inner content? If possible we want to render all of the component so that instead of the client receiving:

<user-greeting>Ian</user-greeting>

they would instead receive:

<div id="user-greeting">
    <style>
        div { padding: 30px; color: red; }
    </style>
    <div>Hello Ian, welcome back!</div>
</div>

This approach comes with a pack of Aspirin because we have just turned Shadow DOM into real DOM. Our component’s CSS has just become global CSS, ready and willing to conflict with every other element we have on the page. If we included any scripts within the component these will also have become blocking unless we load them asynchronously.

There are a few hacky ways to get around the styles problem:

  • Use BEM or another class management strategy
  • Add a generated class on the parent and scope all styles within that
  • Transform all the styles to be inline styles on the elements

None of these are particularly nice: BEM and generated classes are stepping stones to true scoping that we’d like to leave behind and moving all the styles into inline ones doesn’t cater for pseudo classes.
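The generated-class approach, for instance, amounts to a transform over the component’s CSS before it hits the page. A rough sketch, assuming we are happy to naively split on braces and commas and to ignore at-rules entirely (real tooling should use a proper CSS parser):

```javascript
// Sketch: scope a component's CSS under a generated class by prefixing
// every selector. Illustrative only; a real implementation would parse
// the stylesheet rather than string-match, and handle at-rules.

var counter = 0;

function scopeStyles(css, componentName) {
  var scopeClass = '_' + componentName + '-' + (counter++);
  var scoped = css.replace(/(^|\})\s*([^{}]+)\s*\{/g, function (m, brace, selectors) {
    var prefixed = selectors.split(',').map(function (s) {
      return '.' + scopeClass + ' ' + s.trim();
    }).join(', ');
    return brace + ' ' + prefixed + ' {';
  });
  return { scopeClass: scopeClass, css: scoped.trim() };
}

// Example:
var result = scopeStyles('div { color: red; } p, a { margin: 0; }', 'user-greeting');
// result.css →
// '._user-greeting-0 div { color: red; } ._user-greeting-0 p, ._user-greeting-0 a { margin: 0; }'
```

The scope class would then be added to the component’s root element on the server. Note the pseudo-class problem the inline-styles option has doesn’t apply here, but descendant selectors now depend on markup structure staying stable.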

Perhaps here we need something standardised and declarative like shadowroot which would allow us to define which styles shouldn’t bleed? Regardless of the solution, this feels like a feature which is needed in order to not slow the web further.

Do Web Components force us to assume dependencies?

HTML imports don’t currently support <link> elements so more commonly we’ve seen examples of <style> used within the Shadow DOM. Does this force us to assume other dependencies have already been satisfied? With CSS, dependencies have always been somewhat implicit and, in a world of components where we should have true interoperability, is this going to highlight an issue we’ve always sidestepped?

What are we talking about when we talk about CSS dependencies? Essentially anything that you would often rely upon as automatically being present: resets or normalize, grid systems, typographic styles, utility classes etc. Should this be The End of Global CSS? This area of CSS development is ripe for investigation and experimentation.

We can again turn to JS to look for a solution to both fetching and authoring. Authoring CSS in JS has become much more commonplace recently, with some really exciting experiments happening, but it is unlikely to become the methodology for the masses.

CSS Modules provide an interesting middle ground with a huge amount of potential. Glen Maddern has written more about them in Interoperable CSS.

Is there an implicit dependency on http/2?

How http/2 will change the way we structure and build our applications, particularly in relation to asset loading, is still being determined but one highly touted feature is the reduced need for bundling assets together. In theory, it will be more performant to request multiple small assets rather than one large bundle because we will make the cache more granular and not lose that much on the network.

Web Components embrace the philosophy of independent modules and together with http/2 you can immediately imagine a nice no-bundle workflow. Of course we still have to cater for non-http/2 users, and without concatenation there would be an increase in the number of requests. Would the adoption of Web Components, right now, significantly slow a web which is predominantly pre-http/2?

Currently Vulcanize exists to help with this by bundling your imports and their dependencies, and I’m sure support for bundling Web Component assets will arrive in your preferred build tool in the future. Personally I look forward to a workflow that allows me the flexibility to arbitrarily load components into the page and have them just work, without considering the assets for the system as a whole. For now, and potentially forever, that’s certainly not realistic.

Are third party components ever going to be that usable?

There have been a lot of articles and talks that have compared the future Web Components ecosystem to the current jQuery plugins ecosystem. jQuery plugins thrived because they were plug and play, and they struggled because there were a thousand+ plug and play implementations of jQuery.tabs.

One of the hopes for Web Components is that the cream rises to the top and that we can promote the “best” components. This is clearly not a simple task and expecting it to happen purely by drawing a line in the sand over the jQuery era will not be enough. A quick look at the react or angular ecosystems shows they still have their fair share of implementations.

These plugins had one big benefit though: jQuery was their one and only dependency. They also had no transitive dependencies, where A depends on B which depends on C, because, well, B was jQuery which did everything itself. Transitive dependencies are where things get tricky as they need to be resolved across components. These dependencies could be anything from Angular to RxJS or lodash to polyfills. It’s likely we will also see our fair share of “zero-dependency components” attempting to sidestep the problem by inlining their own modular builds of utility libraries, adding extra weight you can’t possibly extract or de-duplicate.
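To make the de-duplication problem concrete, here is a hypothetical sketch (component and dependency names invented): only dependencies declared by more than one component can usefully be extracted and shared, and anything inlined into a “zero-dependency component” never shows up in such a list at all.

```javascript
// Sketch: given each component's declared dependencies, find which ones
// are shared across components and so are candidates for extraction.
// Component and dependency names below are hypothetical.

function sharedDependencies(components) {
  var counts = {};
  Object.keys(components).forEach(function (name) {
    components[name].forEach(function (dep) {
      counts[dep] = (counts[dep] || 0) + 1;
    });
  });
  // Keep only dependencies used by two or more components.
  return Object.keys(counts).filter(function (dep) {
    return counts[dep] > 1;
  });
}

sharedDependencies({
  'x-tabs':     ['lodash', 'rxjs'],
  'x-carousel': ['lodash', 'hammerjs'],
  'x-map':      ['leaflet']
});
// → ['lodash']
```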

Of course, the closed nature of Web Components means you absolutely could have different libraries inside and outside and it would still work, but just because you can doesn’t necessarily mean you should: we should always want to avoid needlessly transferring bytes across the wire.

Finding the right third-party component whose dependencies match yours is going to be extremely hard and, because of that, the ecosystem is going to be forced to fragment in order to support it. Right now it definitely feels like Web Components have more strength as an architectural strategy for reuse within a single project than as a catalogue of plug and play third-party components.

What is going to be the tipping point for their adoption?

The collective specs for Web Components are large and unsurprisingly in spec world they have taken a long time to reach agreement across all the vendors. Meanwhile, innovation in the component world elsewhere has been staggering and has pushed the boundaries of what is both possible and expected from a component ecosystem: to the point that, for many, a v1 release of Web Components will receive a collective shrug rather than open arms. For these people, their tools have already evolved to meet the needs that Web Components are offering.

That’s not to say that Web Components can’t still take part of that market. Native features provide performance and consistency benefits and frameworks are anticipating that the technologies can co-exist. There is also a huge portion of the developer market who aren’t invested in these new component frameworks and who can see real benefits in adopting something simple and straightforward. When are those people going to get involved? When are we going to see some real life examples?

Polyfills are available and the area is ready for experimentation and innovation. The problem is that few people are experimenting.

In order for the specs to succeed they are going to need feedback and the sooner that can be received the better. Is v1 going to be the tipping point? Will it provide enough for developers to start thinking of Web Components as a feasible solution rather than something coming down the line?

This isn’t meant to be overdramatic, hopefully it’s more of a ‘when’ rather than ‘if’ question. The harsh reality, though, is that if in a year the only examples we still have are GitHub’s <time> element and Google event pages, it will be even harder for Web Components to gain adoption, or even relevance, as the innovation around components elsewhere shows no sign of slowing down.


You could argue this article is a little unfair to Web Components as it sounds like I expect them to be the answer to all these problems. I don’t. Many of them are entirely outside their scope.

I found myself drifting into the world of CSS and dependencies quite often as they seemed the most pressing to me. There are also many more questions I have that I didn’t write about and I’m sure more questions that those reading this will have. It’s important these are exposed.

The requirement for Web Components to be independent and portable is ultimately what brings these discussions to the forefront. Their benefits are real: true scoping for CSS, the ability to share code in a framework-agnostic manner, and composability are all features that developers have needed for a long time.

Clearly the Web Components spec does not exist within a vacuum. It does present us with a focal point though, an opportunity for reflection on how the rest of the web ecosystem is ready to support it, and a list of outstanding problems which require resolutions to allow us to build standards-compliant, truly composable systems.

What we would change about Rizzo

I've heard recently that a few different companies have created a version of Rizzo to build on the work we have done at Lonely Planet. It's extremely flattering to hear about our ideas helping other organisations. It also makes me a little nervous because if someone were to copy Rizzo as it exists now they would inherit some of the decisions which ultimately we would change in hindsight (and plan on doing in the future). This article outlines some of those decisions and how we would approach them differently.

I should mention that the fundamental principles behind Rizzo as a component library and Maintainable Style Guide still stand true and those haven't changed since its inception. That said, there are definitely certain decisions which I would love to have the opportunity to revisit and which I would advise anyone starting down this path to consider.

1. Fix up your filesystem structure

Grouping files together by component makes it considerably easier for developers to find files and makes the boundaries of the component clearer.

// The old way (assets organised by file type, per the Rails convention; file names illustrative)
app/
    assets/
        javascripts/
            header.js
        stylesheets/
            header.sass
    views/
        shared/
            _header.html.haml

// A better way (everything belonging to a component lives together)
components/
    header/
        header.js
        header.sass
        header.haml
        header.yml

We inherited Rails' directory structure, which organises assets by file type, and this is also a long-running convention used in many frameworks including the HTML5 Boilerplate. Despite that, it doesn't make sense to continue organising files by their type, and this is a 'best practice' which should definitely be challenged.

In the example above, header.yml would include stubbed data used to render the component in the Style Guide.

Using this form of filesystem convention also greatly simplifies point 2.

2. Let the component manage its own assets

As soon as you get on board with the idea of components you realise it makes no sense for a ‘page’ to define the dependencies of the components it will render. A page should have no knowledge, nor care, which components are used to construct itself. Even so, this is still typically how we manage assets: using an application level manifest file which imports the dependencies for each component. Think of any classic application.sass which imports all the partials for that page for example.

The main area in which the old technique falls down is when you begin lazy-loading in components. Keeping stock of which components will be loaded into the page template creates a very brittle system and it makes much more sense to let the component fetch its own dependencies, teardown and initialise itself.

Of course this isn’t without its own challenges. You still need to architect a system whereby lazy-loaded components can be instantiated, and you must also consider how to bundle dependencies by page, or ‘entry point’. Fortunately there is a collection of tools designed to help with the latter: Webpack, assetgraph, jspm and Browserify, to name a few.
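One pattern for the former is a tiny registry: each component file registers its own init and teardown when it loads, and the page only ever asks for a component by name, knowing nothing about its internals. The sketch below is purely illustrative and not Rizzo’s actual API:

```javascript
// Sketch of a registry that lets each component own its lifecycle.
// The names (createRegistry, register, mount) are illustrative only.

function createRegistry() {
  var components = {};
  return {
    // A component registers itself with its own init/teardown logic.
    register: function (name, definition) {
      components[name] = definition;
    },
    // The page asks a component to mount; it knows nothing else about it.
    mount: function (name, el) {
      var def = components[name];
      if (!def) throw new Error('Unknown component: ' + name);
      return def.init(el);
    }
  };
}

var registry = createRegistry();

// e.g. components/header/header.js registers itself when lazy-loaded:
registry.register('header', {
  init: function (el) { return 'header mounted on ' + el; },
  teardown: function (el) {}
});

registry.mount('header', '#site-header'); // → 'header mounted on #site-header'
```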

3. Use a language agnostic templating syntax

A misconception of Rizzo is that it is limited to the stack we wrote it on (Ruby). This is somewhat true as the components are primarily written in Haml, which is tightly coupled to Ruby. We do also expose certain components via http endpoints and send them over the wire, making them accessible from any stack.

What is definitely true, however, is that had we chosen Mustache or another language-agnostic templating language we would have created a more flexible system to work with and reduced the number of hoops we inevitably have to jump through when using Rizzo on different stacks. This also has the advantage of making them available to the client without the need for writing any adaptors.
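To illustrate why logic-less templates travel so well: a Mustache-style template is just a string with placeholders, so each stack needs only a renderer, not an adaptor. A toy renderer in JavaScript (real Mustache also covers sections, partials and escaping; the header template here is invented):

```javascript
// Toy Mustache-style renderer. The template itself is language agnostic,
// so the client and any server stack can share the same string.

function render(template, data) {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, function (m, key) {
    // Support dotted paths like "site.name".
    var value = key.split('.').reduce(function (obj, part) {
      return obj == null ? obj : obj[part];
    }, data);
    return value == null ? '' : String(value);
  });
}

var headerTemplate = '<a class="logo" href="/">{{ site.name }}</a>';
render(headerTemplate, { site: { name: 'Lonely Planet' } });
// → '<a class="logo" href="/">Lonely Planet</a>'
```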

4. Deploy updates automatically across applications

We deploy Rizzo as a gem within each of our applications. A consequence of this is that we need to explicitly bump the Rizzo dependency in each app after we make an update to a component (in practice this happens organically unless it’s a significant change). This is a safe way of managing our components but as we've scaled up the number of applications it is becoming more tedious.

Rizzo serves some components (header, footer) over HTTP to our legacy apps. With this method, the changes are applied automatically as soon as we deploy Rizzo. This removes any update process entirely but does introduce a risk that we could break these applications (which of course we have, on occasion). This doesn't fit into our model of continuous delivery at Lonely Planet and is not something we would rush to do on a broader scale; however GOV.UK have recently written about their implementation, which does follow this pattern. It's an exciting project (and awesome that such a project has taken something from our work) and I look forward to seeing how it evolves. Given the smart engineers over there I’m sure they have something in place to de-risk the process.

I hope we at some point look into updating Rizzo automatically across our apps at Lonely Planet. I would still prefer us to keep the same system of being declarative about specific dependencies and ensuring that we run the full suite of tests for each application. Either way, you have a set of trade-offs to balance.

My ideal process would be to have a post-deploy hook on Rizzo that triggers builds of all applications that use it and, if successful, deploys them. This would create a lot of noise in our CI environment though, and would need to be thought through a lot more before we actively pursued it. Likely this solution will wait until the current process gets too painful to support.

Are you using a component API like Rizzo?

If so I’d be keen to hear about it! Have you made any changes similar to the above which others could learn from?

CSS at Lonely Planet

Inspired by Mark Otto's post GitHub's CSS, I thought I would quickly jot down how Lonely Planet’s CSS is structured. It was interesting to read some of the parallels and it’s good to share how we work.

Quick Facts

  • We write in Sass (Indented syntax).
  • We have more than 150 source files.
  • The compiled CSS is split into two stylesheets to allow for stronger caching across apps.
  • The average weight of CSS per page is around 35kb (gzipped).
  • Rems and pixels are the units of choice, with scattered ems.


When I joined Lonely Planet we were already using the indented Sass syntax and we have stuck with it since. Having used it for so long, we find writing SCSS a chore.

Whilst we use Rails, we compile our Sass without Sprockets and just use Sass’s @import functionality to build up stylesheets.

Our use of Sass’s features is pretty low, mostly limited to variables and a few mixins. We originally started out with an architecture favouring extending placeholders over adding classes in the DOM, though this gradually caused our codebase to become too complex and we reverted to a more OOCSS approach.

We use autoprefixer to handle vendor prefixes and I encourage everyone to do the same. We don't use Compass or any other plugins.


  • We use a version of BEM to distinguish between components and prevent style collisions.
  • We take a rough approach towards OOCSS. We started out with good intentions there but haven't stuck to it religiously.
  • We don't use IDs in CSS. We very rarely style anything but classes.
  • We use normalize.css.
  • We avoid styling elements and scope all our typographic styles to classes. Typographic elements don't get margins by default as it leads to too much overriding (in our design).


We don't use any CSS frameworks. If we were to begin again I would be tempted to use something like Inuit.css although ultimately I like the fact that we have no dependencies and are in complete control of our CSS.


We don't lint our CSS. It’s something that we should look into.


Our CSS is distributed in two files:

  • core.css
  • application.css

Core is cached across the entirety of lonelyplanet.com whereas application.css is cached only within the specific application. Lonely Planet is served by more than 10 distinct applications so having this separation is crucial for us to render faster pages.

Core includes the base styles like fonts, grids and header/footer styles, and also includes some of our most commonly included component styles which we choose to cache across all the apps. These components and styles live in Rizzo which is accessible by all apps.

Application.css will include styles distinct to the specific application, as well as some Rizzo components which aren't used often enough to be included in core.css.


The above bundling is key to our CSS performance as is keeping the files small themselves. We have a performance monitoring section in Rizzo which trends file size changes. Currently it only trends for seven days as this is a new addition to Rizzo and we are still collecting data.

CSS Performance Trending

We collect this data every few hours using a few simple scripts which also allows us to run analysis on the stylesheets. We do this with Stylestats and again render the breakdown in Rizzo.

CSS Analysis


I've written previously about our Maintainable Style Guide, Rizzo and it works very successfully.

We also self-document our Sass by wrapping it in [doc]...[/doc] tags and then statically analysing it. For example, this utility_classes.sass file creates this documentation in Rizzo.

CSS Documentation


Similarly to GitHub, we like to get rid of as much code as we can and we’re not precious about keeping things around in case they might be needed. Refactoring is part of our daily work, though, and we very rarely have specific refactoring tasks.

Other CSS Files

Our SVG icons and fonts are both loaded within CSS files, but these are deferred and not grouped with the rest of the styles.