Progressively enhanced JavaScript

Using JavaScript to design progressively enhanced interfaces is probably the most important, yet most misunderstood, subject in web development.

In this article we’re going to discuss these misunderstandings. Then, we’ll explore techniques that have stood the test of time but that we’ve long forgotten.

“The problems we have with websites are ones we create ourselves”
Motherfuckingwebsite.com

By default, the web is accessible to everyone. That’s the web’s super power. It’s us designers and developers who take this natural super power and lace it with kryptonite, hurting users in the process.

Most of us care about users, but most of us also fail in execution. Before we get to why, let’s first define Progressive Enhancement.

Progressive Enhancement ensures we give everyone a useful experience. Then, where necessary, we create a better, enhanced experience for those using a more capable browser.

Progressive Enhancement applies to HTML and CSS too. But JavaScript is where most of us struggle. We don’t seem to know how to write JavaScript in a progressively enhanced way.

Progressive Enhancement myths

Whilst there are many myths about Progressive Enhancement, I want to point out a couple of things.

First, unobtrusive JavaScript (placing JavaScript in external files) does not, in any way, mean it’s progressively enhanced.

Second, this is not specifically about people who disable JavaScript. Yes, some people do disable it, but that’s not the main problem. Everyone has JavaScript, right? shows the many points of failure.

The last of those points—using JavaScript that the browser doesn’t recognise—is the main thing we need to discuss. JavaScript—unlike HTML and CSS—doesn’t degrade gracefully without intervention.

For example, <input type="email"> degrades gracefully into a text box. And border-radius degrades gracefully by not showing rounded corners. No problem here.

JavaScript, however, will error when the browser tries to execute code it doesn’t recognise. Internet Explorer 8, for example, breaks when it reaches getElementsByClassName in the following code:

var form = document.forms[0];
form.attachEvent('onsubmit', function() {
  window.event.returnValue = false;
  // Internet Explorer 8 throws here: it has no getElementsByClassName
  var foos = document.getElementsByClassName('foo');
});

This is because it doesn’t recognise getElementsByClassName. The page neither degrades nor fully enhances. The script intercepts the form’s submit event but breaks in the process, giving users a broken interface.

This means when the user submits the form, nothing happens. Handling submit on the client (enhanced experience) to save a round trip is fine. Handling submit on the server (core experience) is also fine. But above, the user doesn’t get either of these.
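
For contrast, here’s a rough sketch of the same enhancement written so that it degrades: the handler is only attached when everything it relies on exists; otherwise the form simply submits to the server as normal.

var form = document.forms[0];

// Only enhance when everything the handler relies on is available.
// Otherwise, leave submit alone and let the server handle it (core experience).
if (form && form.addEventListener && document.getElementsByClassName) {
  form.addEventListener('submit', function(e) {
    e.preventDefault();
    var foos = document.getElementsByClassName('foo');
    // ...enhanced, client-side handling goes here
  }, false);
}

Later on we’ll look at a more robust way to structure this kind of detection.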

Neither the browser nor the features in this example are relevant. It could be any browser and any feature. It makes no difference how new a browser is or what (cutting edge) feature it supports.

What shouldn’t we do?

It’s often useful to explore the problems with other approaches. Often it’s easier to deduce what we should do once we find out what we shouldn’t.

Don’t ignore that the problem exists

I struggled with this for a long time. I always thought about the current set of browsers a particular project had to support. But just because I ignored those using ‘other’ browsers doesn’t mean they don’t exist.

Don’t hand off responsibility to third party libraries

When we put a third-party script in our project, it becomes our responsibility. We should check under the hood for quality and watch out for the typical multi-browser approach that opposes the principles of Progressive Enhancement.

When a library releases a new version, it often drops support for various browsers. This is a never-ending cycle. It’s what Jeremy Keith means when he refers to the web as a continuum.

Don’t rely on Cutting The Mustard

Cutting The Mustard (CTM) is a relatively new approach that poses as a reliable solution, one based on giving users either a core or an enhanced experience. It’s the implementation itself that’s problematic.

if(document.querySelector && window.addEventListener && window.localStorage) {
  // start application
}

The script detects a few choice methods and then infers that the browser is ‘modern’. Making that inference reliably is impossible because of the sheer number of new browsers (and versions of them) released every day. And it’s irrelevant anyway, because release date doesn’t determine capability.

If the check passes, the application starts and attempts to give users the enhanced experience (whatever it may be). Inferences are almost as frail as user agent sniffing, something that Richard Cornford explains in Browser Detection and What To Do Instead.
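
To make the inference problem concrete: Internet Explorer 9 passes the check below, yet it has no classList, so enhanced code like this still breaks (the selector and class names are just examples).

if(document.querySelector && window.addEventListener && window.localStorage) {
  // Internet Explorer 9 'cuts the mustard' here...
  var menu = document.querySelector('.menu');
  // ...but it has no classList, so the next line throws and the page
  // is left half enhanced (assuming a .menu element exists)
  menu.classList.add('enhanced');
}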

Specifically, CTM infers support for the features the application actually uses instead of detecting them directly, and it never tests that those features work.

What is the solution?

Like Jeremy Keith, I’ve always maintained that, given the choice between making something my problem, and making something the user’s problem, I’ll choose to make it my problem every time.

To give users a core experience, we must ensure the interface works without JavaScript. This is because that’s the experience they’ll get when the browser doesn’t recognise a method the script relies on.

After this, we must detect, and where necessary, test all of the features the application references before the application uses them. This ensures the page doesn’t end up half enhanced and therefore irrevocably broken.
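
Testing matters because a feature can be present but unusable. localStorage is a good example: in some private-browsing modes it exists, yet writing to it throws. Here’s a rough sketch of detecting and then testing it (the canStore flag is just for illustration):

var canStore = false;

if(window.localStorage) {
  try {
    // The feature exists; check it actually works before relying on it
    localStorage.setItem('test', 'test');
    localStorage.removeItem('test');
    canStore = true;
  } catch(e) {
    // Present but unusable, so stick with the core experience
  }
}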

To do this reliably, we need to use wrappers (facades). The library should expose a dynamic API that adapts to the browser. This is how it might look:

if(hasFeatures('find', 'addListener', 'storeValue')) {
  var el = find('.whatever');
  addListener(el, "click", function() {
    storeValue('key', 'value');
  });
}

Notes

The function names in the example (hasFeatures, find, addListener, storeValue) are only illustrative. What matters is that every feature the application references is checked up front, so the page never ends up half enhanced.

Engineering a dynamic API

Peter Michaux’s Cross-Browser Widgets provides a detailed walk-through, all in one article. However, let’s create a little something here and now.

The application needs to add a class to an element. First we’ll create that function and expose a dynamic API:

// Use a more robust check such as isHostMethod in production
if(document.documentElement.classList && document.documentElement.classList.add) {
  var addClass = function(el, className) {
    return el.classList.add(className);
  };
}

Then the calling application checks that addClass is defined before referencing it:

if(addClass) {
  addClass(el, 'thing');
}

The application is blissfully unaware that it only runs in browsers supporting classList. Those using Internet Explorer 9 and below get the core experience, and that’s okay. If you want to give them the enhanced experience too, add a fork:

var html = document.documentElement;
var addClass;

// Use a more robust check such as isHostMethod in production
if(html.classList && html.classList.add) {
  addClass = function(el, className) {
    return el.classList.add(className);
  };
} else if(typeof html.className === "string") {
  addClass = function(el, className) {
    var re;
    if (!el.className) {
      el.className = className;
    } else {
      re = new RegExp('(^|\\s)' + className + '(\\s|$)');
      if (!re.test(el.className)) {
        el.className += ' ' + className;
      }
    }
  };
}

This implementation supports almost every browser and the application didn’t need to change. In the future, you can remove the fork once the number of visitors using those older browsers diminishes to a suitable level.

That is something only you can determine. Either way, users get a working experience, proving that a library never needs to drop browser support, at least not in the traditional sense.
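
Tying this back to the earlier example, hasFeatures can be as simple as a registry recording which facades the library managed to define. A rough sketch (the registry shape is just one way of doing it):

var features = {};
var addClass;

if(document.documentElement.classList && document.documentElement.classList.add) {
  addClass = function(el, className) {
    el.classList.add(className);
  };
  features.addClass = true;
}

// Answers: were all of the named facades defined for this browser?
function hasFeatures() {
  for (var i = 0; i < arguments.length; i++) {
    if (!features[arguments[i]]) {
      return false;
    }
  }
  return true;
}

The application then wraps itself in a single hasFeatures('addClass', ...) check, just as in the earlier example.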

Summary

Progressive Enhancement puts users first. Misunderstanding the application of Progressive Enhancement puts users last.

Progressive Enhancement is not more work, it’s less work. We don’t have to endlessly play catch up with browsers. We don’t have to give users a broken experience.

Instead we can write backwards compatible and future proof code that creates robust and inclusive experiences for everyone.
