
Building on the modern web

and whether the complexity is worth it.


About the project

I was tasked with creating a draft CMS (content management system) for a local business. Their previous system was built on a rigid templating engine that didn't fit their evolving needs, and took around 4 seconds to render a new page.

I knew I wanted to try something different. The new system had to be dynamic, quick to iterate on, and fast for the end-user. Because of their specific needs, I couldn't easily integrate an existing headless CMS solution, so I settled on writing my own.


A history lesson

[Interactive figure: a plain login form with Username and Password fields.]

A simpler time.

To understand what goes into a modern website, it's best to go back to basics. At the start, browsers rendered static markup sent by the server. A page might have had superficial animations, but once delivered, its content could not meaningfully change unless the client communicated back to the server, either by following a link or by submitting a form (like the one above). That meant a full page refresh each time, flashing white between transitions while the server generated the complete HTML all over again.

The next revolution came when the browser gained the ability to replace parts of the page with new content after it had already loaded. Websites could become dynamic, feeling less like "pages in a browser" and more like applications (Steve Jobs famously mentioned AJAX as a feature of the first iPhone's version of Safari). This was the main way the web worked for a while, and it remains popular in some circles to this day. For larger applications, however, imperatively poking at the page with AJAX became a bottleneck, and complexity quickly grew out of hand. Developers sought a way to separate UI state from UI rendering, deriving the latter from the former automatically through a declarative paradigm.
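
To make that concrete, here is a rough sketch of the imperative style described above. The endpoint and element IDs are invented for illustration, and the modern fetch API stands in for the XMLHttpRequest of the era:

```ts
// Fetch new data, then imperatively poke the result into the existing page.
// The "/api/cart" endpoint and the element IDs are made up for illustration.
async function refreshCart(): Promise<void> {
  const response = await fetch("/api/cart");
  const cart: { items: { name: string; qty: number }[] } = await response.json();

  const list = document.getElementById("cart-items")!;
  list.innerHTML = ""; // wipe the old markup...
  for (const item of cart.items) {
    const li = document.createElement("li");
    li.textContent = `${item.qty} × ${item.name}`;
    list.appendChild(li); // ...and rebuild it node by node, by hand
  }
  document.getElementById("cart-count")!.textContent = String(cart.items.length);
}
```

Every piece of UI that depends on the cart has to be updated by hand like this, which is exactly the bookkeeping that declarative frameworks set out to remove.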


Moving to the client

[Interactive figure: nested loading states while JavaScript chunks download one after another.]

A visualisation of waterfalling, where one component fetches to figure out which component to render, which also fetches to figure out which of its children to render, which also fetches until...

The next era of the web was the era of client-side rendering. The server sent a blank page along with a link to a JavaScript framework; the browser downloaded it, let it take over, and from then on the framework communicated with the server via JSON, declaratively turning serialised data into markup the browser could display to the user.

It no longer felt like you were using a traditional "website". Navigation was instant. Transitions felt "app-like". It blurred the line between native and web, allowing the browser to become an application platform as opposed to just a document viewer.

At least, that was the intention. In practice, it was less than ideal for performance. The entire web infrastructure was optimised over decades for using HTTP (hypertext transfer protocol) to send HTML (hypertext markup language). Client-side frameworks flipped this paradigm on its head. Browsers got blank pages from the server and needed to download and parse large JavaScript bundles before they could show the user anything. State needed to be duplicated and sent asynchronously through a traditionally type-unsafe JSON layer, juggled and kept in sync by the frontend, and tracked and diffed to produce the correct HTML, entirely on the client.

A common artefact of this process is waterfalling: a component has to fetch before it can render its children, which in turn have to fetch before they can render their own children, and so on - sometimes 5+ layers deep, each layer costing a full network round-trip.
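
Here is a sketch of how such a waterfall arises in a client-rendered React app. The components and endpoints are hypothetical; the point is that each level can only begin fetching once its parent has finished fetching and rendered it:

```tsx
import { useEffect, useState } from "react";

function Page() {
  const [user, setUser] = useState<{ id: string } | null>(null);
  useEffect(() => {
    // Round-trip 1: nothing below can even start until this resolves.
    fetch("/api/me").then((r) => r.json()).then(setUser);
  }, []);
  if (!user) return <p>Loading…</p>;
  return <Projects userId={user.id} />;
}

function Projects({ userId }: { userId: string }) {
  const [projects, setProjects] = useState<{ id: string }[] | null>(null);
  useEffect(() => {
    // Round-trip 2: only starts after Page has mounted this component.
    fetch(`/api/users/${userId}/projects`).then((r) => r.json()).then(setProjects);
  }, [userId]);
  if (!projects) return <p>Loading…</p>;
  return (
    <ul>
      {projects.map((p) => (
        // Round-trip 3 happens once per row, again only after this level renders.
        <ProjectRow key={p.id} projectId={p.id} />
      ))}
    </ul>
  );
}

function ProjectRow({ projectId }: { projectId: string }) {
  const [detail, setDetail] = useState<{ title: string } | null>(null);
  useEffect(() => {
    fetch(`/api/projects/${projectId}`).then((r) => r.json()).then(setDetail);
  }, [projectId]);
  return <li>{detail ? detail.title : "Loading…"}</li>;
}
```

Three sequential round-trips for one screen - and deeper component trees only make it worse.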

Client-side rendering is still preferred for highly dynamic applications (like Figma), where lots of state has to live on the client and stay in sync with it. But for something like a content management system, most of the state lives on the server, so a purely client-rendered approach adds complexity just to duplicate work unnecessarily. Wouldn't it be nice if we had the best of both worlds?


The best of both worlds

The modern paradigm revolves around the concept of progressive enhancement. The server renders a static "shell" of your web page, with dynamic data fetched from the database, and sends it as the first thing the user receives when they visit. In the background, the browser silently downloads the pieces of your framework of choice needed to "hydrate" that static shell, resulting in an experience that loads as quickly as a server-rendered site but with the dynamism of a client-side application. Furthermore, you can choose to cache pages ahead of time all over the globe through a CDN (content delivery network), making future navigations instant.
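
As a minimal sketch of the hydration step, assuming React on the client: the markup already exists in the HTML the server sent, and the framework attaches itself to that markup rather than rebuilding it.

```tsx
// Client entry point. The HTML for <App /> is already in the document the server
// sent; hydrateRoot attaches React to that existing markup instead of re-creating
// it, so the page is visible immediately and becomes interactive once this runs.
import { hydrateRoot } from "react-dom/client";
import App from "./App"; // hypothetical root component

hydrateRoot(document.getElementById("root")!, <App />);
```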

For seasoned developers, this sounds like a modern reinvention of what was already possible with languages like PHP, C#, or Ruby (for the static shell) and plain JavaScript (for the client-side enhancement). Modern flavours of these tried-and-true frameworks have caught up with the ecosystem, offering ways to integrate client-side frameworks for the dynamic parts of your website while still using your favourite language for the static shells (e.g. Inertia for Laravel/PHP). So why use a full-stack JavaScript framework for this?


The benefits of end-to-end type safety

The main reason I chose a full-stack JavaScript framework over one of the many other languages with full-stack ecosystems is that keeping everything in the same language lets you share types between the front-end and back-end natively - with no cross-language code generation or context-switching involved.

One thing I had to do constantly was redesign according to client feedback - which meant regularly refactoring many moving parts at once to fit the client's evolving needs. Normally, a refactor involves carefully removing the unneeded code so as not to break anything, running a battery of tests to make sure, integrating the new features without changing too much in case something else breaks, and then testing even more to confirm that the new code works, the old code still works, and nothing undesired is left over. It's certainly a strategy - but it's incredibly slow.

With end-to-end type safety, and particularly while developing a CMS, this workflow improves significantly. Since we're working with content, it can be represented elegantly through the type system, and you can be sure that a change to a database schema or a backend CRUD operation won't silently break your presentation layer - because it's all verified by the compiler. No more undefined or [object Object] reaching the end-user.
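
A minimal sketch of what that sharing looks like. The Page type, file names, and components are illustrative; the point is that both sides import the exact same definition:

```ts
// shared/types.ts - one definition of the content model.
export interface Page {
  slug: string;
  title: string;
  body: string;
}

// server/pages.ts - the backend promises to return Page[]; the compiler holds it to that.
import type { Page } from "../shared/types";

export async function listPages(): Promise<Page[]> {
  // ...query the database and map the rows into Page objects...
  return [];
}

// client/PageList.tsx - the frontend consumes the very same type, no codegen in between.
export function PageList({ pages }: { pages: Page[] }) {
  // Renaming or removing `title` in shared/types.ts turns this into a compile
  // error, long before `undefined` could ever reach the end-user.
  return (
    <ul>
      {pages.map((page) => (
        <li key={page.slug}>{page.title}</li>
      ))}
    </ul>
  );
}
```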

This is further enhanced by my ORM of choice, Prisma, which extends the type safety to the database layer as well. You define your database schema declaratively, and Prisma generates the types for your database models directly in your project. A common refactoring workflow for me is to edit the database layer first, then fix the type errors in the backend, then fix the resulting backend/frontend conflicts. The errors bubble up the chain, reducing what used to be a maddening cat-and-mouse game to a simple tooling-driven find-and-replace job.
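
Here is a rough sketch of what that feels like in practice, assuming a hypothetical post model in the Prisma schema:

```ts
// The `post` model is hypothetical. Prisma generates a typed client from the
// declarative schema, so the result type below is inferred, not hand-written.
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

export async function publishedPosts() {
  // Rename or remove a column in schema.prisma, regenerate the client, and every
  // stale usage of these fields becomes a compile error that bubbles up the stack.
  return prisma.post.findMany({
    where: { published: true },
    select: { id: true, title: true, updatedAt: true },
  });
}
```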


The "next" era of the web

There are many flavours of full-stack JavaScript, but by far the most popular one is Next.js. Next.js started life as a performance-oriented meta-framework for React, offering better Core Web Vitals for your client-side applications by essentially rendering the same website twice - once on the server, and once on the client.

While it has since evolved into a smarter framework - able to render parts of the page exclusively on the server and stream them to the client, with prefetching for instant navigation - it maintains a performance-oriented focus, with deep integration into the infrastructure side of a site as well as the presentation. Next.js offers a multitude of features that suited my needs.
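
As an illustration, here is a sketch of a server-rendered page in Next.js's App Router. The data-fetching helper is hypothetical; the Suspense boundary is what lets the slow part stream in after the static shell:

```tsx
import { Suspense } from "react";

// Hypothetical data access - in the real project this would query the database.
async function fetchPosts(): Promise<{ id: string; title: string }[]> {
  return [];
}

// Server component: runs only on the server, and its code never ships to the client.
async function PostList() {
  const posts = await fetchPosts();
  return (
    <ul>
      {posts.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  );
}

// app/posts/page.tsx - the static heading streams to the browser immediately,
// while the Suspense boundary lets the slower list stream in afterwards.
export default function PostsPage() {
  return (
    <main>
      <h1>Posts</h1>
      <Suspense fallback={<p>Loading posts…</p>}>
        <PostList />
      </Suspense>
    </main>
  );
}
```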


The serverless paradigm

Since a CMS mostly involves CRUD operations (create, read, update, delete), it's a perfect candidate for serverless hosting. The idea is simple: do you really need a server running 24/7 just to occasionally render a page and perform a database operation? With Next.js serving most of our pages from the CDN anyway, the actual server is involved in less and less. We can forgo a long-lived instance and instead spin one up on demand to handle a single request, shutting it down immediately afterwards.

This requires a backend written in a pure style, with no state stored in long-lived server memory - everything is either persisted in the database or ephemeral (forgotten when the serverless instance dies). That isn't ideal for certain types of applications, but for a CRUD application it's exactly what I want. State lives in only one place, and transformations flow one way, making debugging and reasoning about data flow much simpler. This purer approach isn't exclusive to serverless, but it works especially well with it.
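
A sketch of what such a handler might look like, using Next.js's route handler conventions and a hypothetical page model; nothing of consequence survives in memory between requests:

```ts
// app/api/pages/route.ts - a stateless route handler (sketch; the `page` model is
// hypothetical). No application state lives in module memory between invocations:
// the request body is ephemeral, and anything worth keeping goes to the database.
import { NextResponse } from "next/server";
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient(); // a connection handle, not application state

export async function POST(request: Request) {
  const body = await request.json(); // gone when this instance is torn down
  const page = await prisma.page.create({
    data: { slug: body.slug, title: body.title, body: body.body },
  });
  return NextResponse.json(page, { status: 201 });
}
```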


The downsides (of my choices)

These are more grievances with this particular stack than with the web as a whole, since I've gotten to use this stack for a professional, fully integrated system:


Conclusion

From its humble roots as a way of sending research documents over phone lines, the web ecosystem has evolved into one of the world's biggest software distribution platforms. As with any system that has lived this long and maintained this level of backwards compatibility, there's bound to be baggage that persists - even with modern technologies.

Despite this, the browser is more versatile than ever. A simple document viewer has become a secure application distribution platform, with exceptional performance considering its constraints. The future of the web, especially with technologies like WASM and WebGPU, is only getting brighter - and I can't wait to build on it myself.


Read more about my attempts at deriving UI from first principles.