The modern JavaScript ecosystem explained for old-school developers

This post is for those of you who haven’t been able to keep up with the rapid developments and are still in a project where jQuery is the main driver, say from the late 2000s to 2012. Back in the day, JavaScript was just that: a scripting language you could drop into a browser, and the browser would execute it. You got quality-of-life methods and cross-browser compatibility by using jQuery. It was a simpler time. Nowadays there are thousands of tools and frameworks being used to develop things that were formerly known as “websites”. I will show you what is what and why you need it.

Languages – Please misunderstand correctly

Examples: CoffeeScript, TypeScript, JavaScript, Dart

Let’s start with a simple one: languages. There is more than just JavaScript nowadays, but at the end of the day they all become JavaScript so the browser can execute them (WebAssembly is left out of this post entirely so as not to completely overload it). The main goal of all these languages is to deliver a better developer experience.

ECMAScript is a blueprint/standard for a multi-purpose language and not a language itself. An implementing language can decide which version of the standard it chooses to implement. JavaScript is an implementation of ECMAScript.

CoffeeScript is an alternative language for the web. Its goal is to eliminate the Java part of JavaScript by providing a different syntax and to “expose the underlying beauty” of JavaScript. They got rid of all the curly braces.

TypeScript was developed by Microsoft in an endeavour to make JavaScript more scalable – more usable in large-scale projects. TS is a superset of JS, which means all valid JS is automatically also valid TS. It offers static typing with very powerful type-inference features to safeguard you from runtime errors by having the compiler check your code when it compiles it back into JS. It also supports many draft-stage ECMAScript features, giving you future features right now.
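As a small taste, here is a minimal sketch of what static typing and inference buy you (the function and values are made up for illustration):

// The compiler infers that `total` is a number; no annotation needed.
const prices = [9.99, 4.5, 12];
const total = prices.reduce((sum, p) => sum + p, 0);

// Explicit types catch mistakes at compile time, not at runtime.
function formatPrice(amount: number, currency: string): string {
  return `${amount.toFixed(2)} ${currency}`;
}

formatPrice(total, "EUR"); // fine
// formatPrice("12", "EUR"); // compile-time error: string is not assignable to number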

JSX was originally created for the React framework, but it is a standalone product usable by any other framework (Solid JS uses it, for example). It is syntactic sugar for creating JS objects in a more declarative way; at the end of the day, it transpiles down to function calls. Think XAML for the web.
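A rough sketch of that transformation (this assumes React’s classic createElement output; the exact function call depends on the framework and compiler settings):

// What you write (JSX):
// const element = <div className="greeting">Hello</div>;

// Roughly what the transpiler emits (React's classic runtime):
const element = React.createElement(
  "div",
  { className: "greeting" },
  "Hello"
);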

JavaScript runtimes – Interestingly they don’t have legs

Examples: Browser, Node, Deno

Their job is to run JavaScript code and expose APIs to the JS context. They support different ECMAScript versions.

Browsers expose browser APIs like the DOM, LocalStorage and fetch.

Node and Deno expose operating system APIs like the filesystem, sockets and processes.
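Same language, different capabilities. A small sketch (the file name and values are made up):

// Browser: the DOM and LocalStorage are available as globals.
const theme = localStorage.getItem("theme");
document.body.className = theme ?? "light";

// Node: no DOM, but operating system APIs are exposed instead.
import { readFile } from "node:fs/promises";
const config = await readFile("./config.json", "utf8");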

Transpilers – Translators for computer languages

Examples: The TypeScript compiler, Babel, the CoffeeScript compiler

Simply put, transpilers take input in language A, version x and output language B, version y. Language A and B may be the same language with different versions, or input and output might be entirely the same, in which case you probably configured something wrong.

Examples are .ts (ES2023) to .js (ES5), .coffee to .js (ES2017), .js (ES2023) to .js (ES6), or a JSX expression in a .js file to a .js file that is actually executable in the browser.

If you are now wondering what the purpose of turning a .js file into another .js file is: while I’m coding, I probably want to use a new language feature (like async and await), but the runtime only supports an older version of the ECMAScript standard where that feature did not exist. In that case, the transpiler takes my code and converts it into an older version that is supported by the runtime. For example: take async/await and turn it into promise chains.
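Here is a hand-written sketch of that exact transformation (real transpiler output is more elaborate, with generated helper functions):

// What you write (ES2017):
//
//   async function loadUser() {
//     const response = await fetch("/api/user");
//     return response.json();
//   }

// Roughly what a transpiler targeting an older runtime emits:
function loadUser() {
  return fetch("/api/user").then(function (response) {
    return response.json();
  });
}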

Polyfills – Because I need a touchscreen on my 3310

Examples: Babel, core-js

Just like transpilers enable you to use newer language features in older runtimes, polyfills allow you to use newer APIs in older runtimes by monkey-patching them.

Runtimes implement their APIs in a lower-level language like assembly, C or Rust and then expose this functionality to the JS world. But if I need to, I can also implement this functionality in pure JS and expose an identical interface. When the fetch API was introduced, it made doing HTTP calls much more streamlined, but IE11, which was still popular at the time, did not support it. So people decided to implement it using the available XMLHttpRequest under the hood. In a nutshell, they checked if window.fetch was defined and, if not, they just added the function to the window object and translated the fetch call to an XMLHttpRequest call.
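A heavily simplified sketch of such a fetch polyfill (real ones, like the well-known whatwg-fetch package, handle headers, request methods, error cases and much more):

// Only patch if the runtime doesn't already provide fetch.
// (Assumes a Promise polyfill is already in place for very old browsers.)
if (typeof window.fetch !== "function") {
  window.fetch = function (url) {
    return new Promise(function (resolve, reject) {
      var xhr = new XMLHttpRequest();
      xhr.open("GET", url);
      xhr.onload = function () {
        // Expose a minimal Response-like object.
        resolve({
          ok: xhr.status >= 200 && xhr.status < 300,
          status: xhr.status,
          text: function () { return Promise.resolve(xhr.responseText); },
          json: function () { return Promise.resolve(JSON.parse(xhr.responseText)); }
        });
      };
      xhr.onerror = function () {
        reject(new TypeError("Network request failed"));
      };
      xhr.send();
    });
  };
}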

I also want to mention ponyfills, which can also retrofit functionality but do so without polluting the global scope.
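The ponyfill flavour of the same idea, as a sketch (the module and function names are hypothetical):

// my-fetch-ponyfill.js: exports the implementation instead of
// assigning it to window.fetch, so the global scope stays untouched.
export function fetchPonyfill(url) {
  return new Promise(function (resolve, reject) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", url);
    xhr.onload = function () { resolve(xhr.responseText); };
    xhr.onerror = function () { reject(new TypeError("Network request failed")); };
    xhr.send();
  });
}

// Consumers import it explicitly where needed:
// import { fetchPonyfill } from "./my-fetch-ponyfill.js";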

Dependency Management – That v0.0.3 library looks production ready

Examples: npm, yarn, bower, pnpm

Managing your dependencies is easy. You just go to the homepage or GitHub repo of the library you want to use, find a production artefact to download, pop it into your web server’s public directory, reference it via a script tag and off you go. That was kind of how it worked.

If you are in a project where dependencies are managed this way, you are aware of the shortcomings: the import order matters because some libraries depend on others, you don’t get notified when updates are available, your libraries are checked into source control, and if you decide to do an update it might break because one dependent library does not yet support the new version. This type of copy-paste dependency management was improved by some tools, but it was still lacking.

With the rapid adoption of Node, its dependency management tool npm became the de-facto standard, also for the front end. There are others, some of which still use npm under the hood, like yarn. npm took away a lot of pain: dependencies are stated declaratively in a package.json file, transitive dependencies are handled for you, and you get update notifications and even security warnings. It’s a good tool in general.
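For illustration, a minimal package.json (the package name and version ranges are made up):

{
  "name": "my-app",
  "version": "1.0.0",
  "dependencies": {
    "lodash": "^4.17.21"
  },
  "devDependencies": {
    "typescript": "^5.4.0"
  }
}

Running npm install then downloads these packages, plus all their transitive dependencies, into the node_modules folder; nothing gets checked into source control anymore.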

The problem is that npm initially stood for “Node package manager” and that’s exactly what it does to this day. It manages dependencies for Node, not the browser. To make most packages work in the browser you need another tool which we will discuss later: a bundler. Some packages work in the browser out of the box, but it’s the exception rather than the norm.

Build Systems – I’m sure if I just do this by hand I won’t forget any step

Examples: npm scripts, Grunt, Gulp, Webpack, esbuild

A modern website undergoes a plethora of surgery on its way from what your dev environment sees to what’s running in production. Initiating all those build steps in the correct order at the correct time is the job of a build system.

Typically you will see tasks like CSS preprocessing and minification with Sass or Less, minifying/optimizing images, transpiling, obfuscating and minifying application code, transforming config files, loading polyfills and bundling the application.

There is quite a range of offerings. You can start simple by just invoking the commands in npm scripts. Then maybe you want a task runner of some sort like Grunt or Gulp to tie all the stuff together. In many projects, this task has just been delegated to the bundler, which will orchestrate the work while creating the application bundle.
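As a sketch, here is what the simple npm-scripts variant could look like in package.json (the tool choices and paths are made up):

"scripts": {
  "build:css": "sass src/styles:dist/styles",
  "build:js": "tsc",
  "build": "npm run build:css && npm run build:js"
}

npm run build then runs the Sass compilation and the TypeScript compilation in order.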

Module Systems – For when you hit Notepad’s file size limit of 45KB

We do not want to pollute the global scope; we want to make only public APIs available to other code and to state dependencies explicitly. Before real module systems existed, IIFEs (immediately invoked function expressions) were used a lot, and they are the basis on which many of these systems work.
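For reference, the classic IIFE pattern (a toy example):

// An IIFE creates a private scope; only the returned object is public.
var counterModule = (function () {
  var count = 0; // private; invisible to the outside world
  return {
    increment: function () { return ++count; },
    current: function () { return count; }
  };
})();

counterModule.increment();
counterModule.current(); // 1
// `count` itself is not reachable from out here.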

Node came with a module system of its own: CommonJS, which exposes the require(...) function to import packages and the module.exports object to export functionality from a package. It’s not compatible with browsers as we already found out when learning about npm.
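In code, a minimal sketch (the file names are made up):

// math.js: exporting with CommonJS
function add(a, b) {
  return a + b;
}
module.exports = { add };

// app.js: importing with CommonJS
const { add } = require("./math.js");
console.log(add(1, 2)); // 3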

For browsers, AMD came along. It stands for Asynchronous Module Definition and exposes the define(…) function. For it to work, you need a module loader. A widespread module loader was require.js, which has nothing to do with Node’s require(...) function. Confusing, yes.
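The same toy module in AMD style, loaded by something like require.js:

// math.js: an AMD module
define([], function () {
  return {
    add: function (a, b) { return a + b; }
  };
});

// app.js: declares its dependency on math.js; the loader fetches it
define(["./math"], function (math) {
  console.log(math.add(1, 2)); // 3
});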

To bridge the gap and make a module usable in any environment (plain browser, Node, browser with require.js), you can write it using UMD, which stands for Universal Module Definition. When doing so, you probe at runtime which environment your code is running in and then do the right thing.
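The classic UMD wrapper looks roughly like this (the exact boilerplate varies between generators):

// A UMD module probes its environment at runtime.
(function (root, factory) {
  if (typeof define === "function" && define.amd) {
    define([], factory); // AMD: register with the module loader
  } else if (typeof module === "object" && module.exports) {
    module.exports = factory(); // CommonJS: Node
  } else {
    root.myLib = factory(); // plain browser: attach a global
  }
})(typeof self !== "undefined" ? self : this, function () {
  return {
    add: function (a, b) { return a + b; }
  };
});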

With ES2015, we finally got a standard that has evolved a lot since and is supported by all major runtimes (browsers, Node). It introduced the import and export keywords for static module loading, as well as the import(...) function for dynamic module loading. If you are in a classic web worker environment, you don’t have these available; use the importScripts(...) function instead.
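And once more the same toy module, this time with standard ES module syntax:

// math.js: exporting with ES modules
export function add(a, b) {
  return a + b;
}

// app.js: static import, resolved before the module runs
import { add } from "./math.js";
console.log(add(1, 2)); // 3

// dynamic import: loads the module on demand
import("./math.js").then((math) => {
  console.log(math.add(2, 3)); // 5
});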

Bundlers – Making it run in a browser since 2012

Examples: Webpack, esbuild, Parcel, Rollup, Snowpack

We already established that there is a problem with using Node packages in the browser: it simply will not work. If you use the require(…) function, well, that does not exist in the browser world. Even if you use the new import keyword, Node does package resolution differently than your browser. You see, a browser needs an absolute or relative URL to the file you are trying to import, while Node uses the package name.

This is fine in Node, but your browser won’t like it:

// CommonJS: resolved by Node's module resolution
const _ = require("lodash");
// or the ES module equivalent, with a bare package name
import * as _ from 'lodash';

What your browser expects is something like this:

import * as _ from '../node_modules/lodash/dist/esm/lodash.js'

But that won’t work either, because the Lodash package does not include an ES module build.

Another big problem with this is that the one library you use can depend on a hundred others, each with hundreds of source files. That means the browser might have to download 10,000 files for your JavaScript code to run. Not optimal, especially because head-of-line blocking is still a problem.

That’s why bundlers exist. They do the dependency resolution at build time, and they understand and support the Node way of finding packages. They then produce one or more JavaScript files that have no dependencies on other files anymore: the so-called bundle. As it has no more dependencies, you can directly include it with a script tag and it will just work.
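With esbuild, for example, producing such a bundle is a one-liner (the entry point and output path are made up):

npx esbuild src/app.js --bundle --minify --outfile=dist/app.js

The resulting dist/app.js inlines lodash and every other dependency, so a single script tag is enough.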

They also do extra stuff, like enabling you to import files that you can’t import using just the browser: JSON files, HTML, Markdown, CSS and even image files. They also do a thing called tree-shaking, which removes all the JavaScript code that is never executed, resulting in smaller files being shipped to the client browser.

I also put Webpack into the build systems category because it will also do a lot of the tasks a build system does, like compiling, preprocessing and so on. Bundlers are the be-all and end-all tools in the modern JavaScript world and will remain relevant in the future. I’d say they do too much nowadays, but hey, as long as it works I’m not too sad.

Scaffolders – It’s all just a facade

Examples: create-react-app, angular-cli, vue-cli, webpack-cli

All modern JavaScript frameworks come with a command line tool. It’s for convenience.

The first thing you need this for is to set up your project structure and configure all the tools for the build and debugging process. I’m pretty sure you can grasp how tedious it would be to do this setup manually after learning about all the things involved in JavaScript development these days.

During development, the tool orchestrates all the parts needed for rapid-feedback development: starting a development server, setting up file watchers that trigger a build on every change, auto-reloading the browser after the build is done, and making sure breakpoint debugging works even if the code in your editor does not resemble what’s running in the browser at all (source maps are used for this purpose). It can also be used to create new application components if doing that manually is too cumbersome because some config files need to be updated. Then, at the end of the development cycle, it is used to initiate production builds to be deployed and served to customers.
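With the Angular CLI, for instance, that whole cycle looks roughly like this (the project and component names are made up):

ng new my-app                        # scaffold project structure and tooling config
ng generate component user-profile   # create a new application component
ng serve                             # dev server with file watching and auto-reload
ng build                             # production build, ready to deploy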

You can see these tools as a facade that aims to streamline the development experience and provide a coherent interface, even if the underlying components all have different interfaces. In short, it’s a facade making your life more convenient.
