Chapter 6. Applying the JAMstack at Scale

A Case Study: Smashing Magazine

Smashing Magazine has been a steady presence in the frontend development space for more than a decade, publishing high-quality content on themes like frontend development, design, user experience (UX), and web performance.

At the end of 2017, Smashing Magazine completely rewrote and redesigned its site to switch from a system based on several traditional, monolithic applications to the JAMstack. This chapter showcases how it solved challenges like ecommerce, content management, commenting, subscriptions, and working with a JAMstack project at scale.

The Challenge

In its 10 years of operation, Smashingmagazine.com has evolved from a basic Wordpress blog to a large online estate, spanning multiple development platforms and technologies.

Before the relaunch, the main magazine was run as a Wordpress blog with thousands of articles, around 200,000 comments, and hundreds of authors and categories. Additionally, the Smashing team ran an ecommerce site selling printed books, ebooks, workshops, and events through a Shopify store with a separate implementation of its HTML/CSS/JavaScript theme. The magazine also operated a Ruby on Rails app as its job board (reimplementing the Wordpress theme to work in Rails, with Embedded Ruby [ERB] templates) and a separate site built with a flat-file Content Management System (CMS) called Kirby (which brought an additional PHP stack, not necessarily in step with the PHP stack required by the main Wordpress site) for handling the online presence of Smashing Conference.

As Smashing’s business grew, its platforms could not keep pace with its performance requirements; adding new features became unsustainable. As the team considered a full redesign of Smashing Magazine together with the introduction of a membership component that would give members access to gated content and discounts on conferences, books, and workshops, there was a clear opportunity to introduce a new architecture.

Key Considerations

From the project’s beginnings, Smashing Magazine sought to enhance the site with a better approach to templating; better performance, reliability, and security; and new membership functionality.

Duplicated Templates

All themes had to be duplicated among WordPress, Rails, and Shopify, each of which added its own requirements to the implementation. Some themes could be built out in a more modern, frontend-friendly way, with a Gulp-based, task-running workflow; some had to be written in standard Cascading Style Sheets (CSS) with no proper pipeline around them; and some needed to work within a Rails asset pipeline. Making changes to HTML components involved changes to PHP, Liquid, and ERB template files, each with different layout/include structures and many restrictions on what HTML the different platforms could generate. To be maintainable, and to promote consistent results across the site, a single, common approach to templates and their build pipeline would need to be established.

Performance, Reliability, and Security

Smashing Magazine went through constant iterations of its caching setup to avoid outages when an article went viral. Even after considerable effort, the site still struggled to reach acceptable uptime and performance goals. These efforts introduced a great many plug-ins, libraries, and dependencies for WordPress, Rails, and Kirby, resulting in a constant uphill battle to keep numerous WordPress plug-ins, Ruby gems, and other libraries up to date without breaking themes or functionality. To achieve a suitable level of performance in a responsible, robust, and manageable way, this architecture had to be simplified and consolidated, with Smashing Magazine taking ownership of fewer languages and technologies.

Membership

One of the important goals of the redesign was the introduction of the Smashing membership, which would give subscribers benefits across the platform, such as an ad-free experience, discounts on books and conferences, access to webinars and gated sections of the site, and a member indicator when posting comments. However, each of the applications making up Smashing Magazine was a monolithic system that applied its own definition of a user. For membership to be supported, and for the different parts of the Smashing Magazine ecosystem to be sustainable, a unified definition of a user would need to be established.

Picking the Right Tools

Taking into account the key considerations, Smashing Magazine began exploring an architecture in which the frontend lives in just one repository and one format, all content pages are prebuilt and live on a CDN, individual microservices handle concerns like processing orders or adding comments, and the identity of users and members can be delegated to one microservice. In other words, the JAMstack.

The first step in the process was to build the right core tools, particularly the core static site generator, a frontend framework for the dynamic components, and an asset pipeline to make it all work well together. After these were established effectively, this set of core tools would enable the use of any number of microservices.

Static Site Generator

To build out all articles, categories, content pages, author pages, event pages with speaker and talk details, as well as RSS feeds and various legacy pages, Smashing Magazine needed a static site generator with a very flexible content model. Build performance was also a key constraint because each production build would need to output tens of thousands of pages in total.

Smashing Magazine evaluated several static site generators. It explored tools like Gatsby and React Static that include an asset pipeline and are based on Webpack. These provide straightforward support for writing pure, client-side components in the same language as the templates for the content-based site. However, their performance when pushing out tens of thousands of HTML pages in every build was not quite there.

Ultimately, the need to achieve a rapid build time combined with the complex content model made Hugo the first choice. It was an ideal combination of performance and maturity.

Hugo can generate thousands of pages in seconds; it has a section-based content model, a taxonomy system that allows many forms of categories, tags, related posts, and so on, and a solid pagination engine for all listings. Many of the JavaScript-based generators are working to provide better support for incremental builds and large-site performance, but at the time of this writing, none of them could compete with Hugo in terms of build speed, and they all fell short of the performance that Smashing felt comfortable with for a site of this volume.

There are potential workarounds to the challenge of needing to generate a very large number of pages in the build, like only prebuilding some of the content and fetching the rest client side from a content API. But because Smashing Magazine is in the business of publishing content, serving that content in the most robust and discoverable way would be essential. To avoid disrupting the search rankings of older reference articles, it was decided that content should be prebuilt and ready on a CDN without the need for any client-side JavaScript for rendering core content or basic browsing. Hugo was the only tool that was fully geared to meet the challenge with this amount of content.

Asset Pipeline

A modern asset pipeline is what allows us to use transpilers, post-processors, and task runners to automate most of the work around actually building and delivering web assets to the browser in the best possible way.

While many static site generators come with their own asset pipelines, Hugo does not. The Smashing team would need to create its own if it wanted support for bundling, code splitting, using ES6 with npm, and SCSS for CSS processing.

Webpack has become more and more of a standard for handling the bundling, code splitting, and transpilation of JavaScript, and it felt like an obvious choice to handle all of the JavaScript processing on this project. However, when it comes to preprocessing SCSS to CSS, Webpack is better suited for use with projects designed as single-page applications (SPAs) than for content-based sites like Smashing Magazine. Webpack’s tendency to center around processing JavaScript and “extracting” the CSS referenced in the JavaScript modules makes it less intuitive to use on a project such as this and better suited for use with tools like Gatsby where assets are always referenced from JSX files.

The efficiency of the CSS workflow can have a huge impact on developer experience and the success of a project like this. At the beginning of the project, Sara Soueidan, a talented Smashing Magazine developer with extensive CSS skills, had built a CSS/HTML pattern library that would form the basis of the site. It was important to ensure that she could work with the same setup as the final site, so that she could simply start building out pages using the same partials and assets as the pattern library.

To make this work, Smashing connected with a team versed in creating open source projects to support the viability and growth of the JAMstack. That team built out a small framework called Victor Hugo, billed as “A boilerplate for creating truly epic websites!”

Victor Hugo is based on a core of Gulp as a task runner, orchestrating CSS compilation via Sass, triggering Hugo builds whenever a template changes, and processing JavaScript via Webpack whenever a source file is updated. It also offers a general task runner for maintenance tasks and similar automated processes along the way.
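To make the orchestration concrete, here is a minimal sketch of a gulpfile in the same spirit (not the actual Victor Hugo implementation; the task names, paths, and webpack.config file are illustrative, and the Sass step is omitted for brevity):

const gulp = require('gulp');
const { spawn } = require('child_process');
const webpack = require('webpack');
const webpackConfig = require('./webpack.config'); // hypothetical config

// Rebuild the static site with Hugo
gulp.task('hugo', (cb) => {
  spawn('hugo', ['--source', 'site', '--destination', 'dist'], { stdio: 'inherit' })
    .on('close', () => cb());
});

// Bundle and transpile JavaScript with Webpack
gulp.task('webpack', (cb) => {
  webpack(webpackConfig, (err, stats) => {
    if (err) return cb(err);
    console.log(stats.toString({ colors: true }));
    cb();
  });
});

// Watch templates, content, and JavaScript sources and rerun the relevant task
gulp.task('watch', () => {
  gulp.watch(['site/layouts/**/*', 'site/content/**/*'], ['hugo']);
  gulp.watch('src/**/*.js', ['webpack']);
});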

Frontend Framework

The next step was to introduce a frontend framework. Your first thought at this point might be “why?” Wouldn’t just using progressive enhancement with a bit of vanilla JavaScript be good enough for a content-driven site like Smashing Magazine?

For the core experience of the magazine—the shop catalog, the job board, event pages, and so on—this was indeed the approach. The only JavaScript involved in this part of the project was built with a progressive enhancement approach and used for things like loading new articles inline or triggering pull-quote animations on scroll.

But Smashing Magazine is far more than just a magazine, with features like ecommerce, live search, commenting, and sign-up flows all requiring dynamic user interface (UI) elements. Without a rigorous, well-organized, and logical architecture and development approach, such complexity could quickly result in a nightmare of unmaintainable spaghetti. An effective, modern frontend framework needed to be in place in order to allow the development team to execute as planned. Being unimpeded by the compromises and overheads typically encountered when integrating with traditional monolithic platforms would be a huge boon for the development process, resulting in a far more logical and maintainable codebase.

At the same time, Smashing Magazine is not an SPA, which can better afford to load a massive JavaScript bundle on first load and then rely on in-browser page transitions from then on. A content-driven site needs to optimize for very fast performance on first view for all content-based parts of the site, and that means the framework must be tiny.

Preact fits that bill! It’s a 3 KB component framework (gzipped) that’s API-compatible with React. In comparison, jQuery weighs in at around 28 KB (gzipped).

This brought a modern component-based framework with the familiar interface of React to the table. Using Preact, the team could build interactive elements like the flows for ecommerce checkout or sign-up/login/recover password processes entirely client side—and then use Ajax to call specific microservices when an order is triggered or a login form is submitted.

We dig into those details shortly, but first it is worth turning our attention to the content that Smashing Magazine would be serving through this web experience.

Content Migration

In the previous version of Smashing Magazine, the content lived in several different databases. WordPress had its own database in which most content was stored in a wp_posts table with references to metadata and author tables. Shopify had an internal database storing books, ebooks, their categories, and a separate set of authors for those. The job board stored data inside a third database with a custom schema.

The numerous and distinct databases were a cause for concern and the root of many obstacles for Smashing Magazine.

The first job was to get all this data out of the different databases and into plain-text files in a simple, folder-based structure.

The goal was to get from a bunch of different databases to a folder and file-based structure that Hugo would understand and work with. This looks something like the following:

/content
 /articles
   /2016-01-01-some-old-article-about-css.md
   /2016-01-02-another-old-article-about-animation.md
   ...
 /printed-books
   /design-systems.md
   /digital-adaption.md
   ...
 /ebooks
   /content-strategy.md
   /designing-better-ux.md
   ...
 /jobs
   /2018-04-01-front-end-developer.md
   /2018-05-02-architecture-astronaut.md
   ...
 ...
/data
 /authors
   /matt-biilmann.yml
   /phil-hawksworth.yml
   ...
 /categories
   /css.yml
   /ux.yml

For existing projects with lots of data, this is always a challenging part of the process, and there’s no one-size-fits-all solution.

There are various tools that allow you to export from Wordpress to Hugo, Jekyll, and other static site generators, but Wordpress plug-ins often add their own metadata and database fields, so depending on the setup, these might not be bulletproof.

Smashing Magazine used a small tool called make-wp-epic that can serve as a starting point for migrations from WordPress to the kind of content structure Hugo prefers. It does a series of queries on the database and then pipes the results through a series of transforms, the output of which is written to files. It gave Smashing the flexibility of writing custom transforms to reformat the raw HTML from the WordPress post bodies into mostly Markdown with some HTML blocks. The project required some custom regex-based transforms for some of the specific shortcodes that were used on the old WordPress site. Because Hugo also has good shortcode support, it was just a matter of reformatting those.
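make-wp-epic’s transforms are specific to Smashing’s content, but a regex-based shortcode transform of the kind described above might look something like this (the shortcode names are hypothetical):

// Rewrite an old WordPress-style [pullquote] shortcode into the
// equivalent Hugo shortcode during migration.
function transformPullQuotes(body) {
  return body
    .replace(/\[pullquote\]/g, '{{< pull-quote >}}')
    .replace(/\[\/pullquote\]/g, '{{< /pull-quote >}}');
}

// transformPullQuotes('[pullquote]Less is more.[/pullquote]')
// => '{{< pull-quote >}}Less is more.{{< /pull-quote >}}'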

A similar tool was created using the RSS feeds from the Smashing job board, combined with some scraping of the actual site to export that content. Something similar was initially tried for the Shopify content, but the data structures for the shop content ended up changing so much that the Smashing team preferred simply reentering the data.

Utilizing Structured Content

An article would have a structure like this:

---
title: Why Static Site Generators Are The Next Big Thing
slug: modern-static-website-generators-next-big-thing
image: 'https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/15bb5653-5a31-4ac0-85fc-c946795f10bd/jekyll-opt.png'
date: 2015-11-02T23:03:34.000Z
author: mathiasbiilmannchristensen
description: >-
 Influential design-focused companies such as Nest and 
 MailChimp now use static website generators for their 
 primary websites. Vox Media has built a whole 
 publishing system around Middleman...
categories:
 - Coding
 - Tools
 - Static Generators
---

<p> At <a href="https://www.staticgen.com">StaticGen</a>, our
open-source directory of <strong>static website generators
</strong>, we've kept track of more than a hundred generators
for more than a year now, and we've seen both the volume and 
popularity of these projects take off incredibly on GitHub 
during that time, going from just 50 to more than 100 
generators and a total of more than 100,000 stars for static 
website generator repositories.</p>
...

This plain-text format with a YAML-based frontmatter before the post body became standard when Tom Preston-Werner launched Jekyll. It’s simple to write tooling around, and simple to read and edit by hand with any text editor. It plays well with version control and is fairly intuitive at a glance (beyond a few idiosyncrasies of YAML).

Hugo uses the slug field, together with the date, to decide the permalink for the file, so it’s important that these combine to create the same URL as the WordPress version.

Some people intuitively think that something like the content from Smashing Magazine with thousands of articles and 10 years of history would be unwieldy to manage in a Git repository. But in terms of the pure number of plain-text files, something like the Linux kernel will make just about any publication seem tiny in comparison.

However, Smashing Magazine also had about a terabyte worth of images and assets. It was important to keep these out of the Git repository; otherwise, anyone working with the site would have to download all of them, and each Git clone operation during the continuous deployment cycles would have been painfully slow.

Instead, all the assets were uploaded to a dedicated asset store, a map of the original asset path to the new Content Delivery Network (CDN) asset URL was stored, and during the content migration, all image and asset URLs in the post metadata or the post bodies were rewritten to the new CDN URL. You’ll notice this for the image meta field in the previous example post.

This is generally the best approach for anything but lightweight sites: store content and metadata in Git, but offload assets that are not part of your “theme” to a dedicated asset store with a CDN integration.
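As a rough sketch of that rewriting step (the shape of the path-to-URL map is an assumption, not the actual migration code):

// assetMap maps original upload paths to their new CDN URLs, e.g.,
// { '/wp-content/uploads/2015/11/jekyll.png': 'https://cloud.netlifyusercontent.com/assets/.../jekyll-opt.png' }
function rewriteAssetUrls(body, assetMap) {
  return Object.keys(assetMap).reduce(
    (rewritten, originalPath) => rewritten.split(originalPath).join(assetMap[originalPath]),
    body
  );
}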

Both authors and categories were set up as taxonomies to make sure Hugo could build out listing pages for both.

Working with Large Sites

One problem that emerged after the content had been imported into the Victor Hugo boilerplate was that building out all the thousands of production articles made editing templates with automatic rebuilds during local development feel sluggish.

To work around that, Gulp was used as a way to add a few handy utilities. First, the full set of articles was added into a production-articles folder outside of the main content folder. Then, a Gulp task that could take an extract of 100 articles and move it into the actual content/articles folder was added.

This way, everybody could work against a smaller subset of articles when doing local development and enjoy the instant-live reloading environment that JAMstack setups enable.

For production builds, these two Gulp tasks were used to prepare a prod/ folder with all the production articles before running the actual build:

gulp.task('copy-site', (cb) => {
  return gulp.src(['site/**/*', '!site/content/articles/*', '!site/production-articles/*'], { dot: true })
    .pipe(gulp.dest('prod', {overwrite: true}));
});

gulp.task('copy-articles', ['copy-site'], (cb) => {
  return gulp.src(['site/production-articles/*'])
    .pipe(gulp.dest('prod/content/articles', {overwrite: true}));
});

This meant that working on the site locally didn’t require Hugo to take thousands of articles into account for each build, which made the development experience much smoother.
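The extraction task itself isn’t shown in this chapter; a minimal sketch of it, with illustrative paths and a simple glob standing in for a real “latest 100 articles” selection, could be as short as this:

const gulp = require('gulp');

// Copy a small sample of articles into the folder Hugo builds from,
// so local development only has to deal with a subset of the content.
gulp.task('extract-articles', () => {
  return gulp.src('site/production-articles/2018-*.md')
    .pipe(gulp.dest('site/content/articles'));
});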

Building Out the Core

After this raw skeleton was ready, the team set up a pattern library section so that Sara, the developer we mentioned earlier, could git-clone the repository and start implementing core patterns, like the grid, typography, and animated pull quotes, before moving on to page layouts for the home page, article page, category page, and so on.

One of the things that really set the JAMstack approach apart compared to projects that Sara had worked on in the past was that the pattern library setup was the same as the production site setup. This meant that there was never any handover process. The core development team could simply start using the partials Sara defined to build out the actual content-driven pages and work within exactly the same framework.

Going from pattern library partials to a full site mainly involves figuring out how to get the right data from content into the partials and how to handle site listings, pagination, and relations while building out the site. This case study is not meant as a Hugo tutorial, but many of the techniques involved are similar regardless of the site generator used, so it will be useful to look at a few template examples without focusing too much on the specific idiosyncrasies of the Hugo template methods or its content model.

An obvious place to begin is looking at some of the common patterns on the main home page. Just below the fold, you’ll find a list of the seven latest articles. Every good static site generator will have ways to output filtered and sorted lists of content. In the case of Hugo, the partial looks something like this:

<div class="article--grid__container">
  {{ range first 7 (where .Data.Pages.ByDate.Reverse "Section" "articles") }}
    ...
    {{ partial "article--grid" . }}
    ...
  {{ end }}
</div>

Hugo has a composable query language that lets us filter a sorted list of all pages, select just the ones from the articles section, and then pick the first seven.

Before this list of latest articles, there’s a slightly more complex situation, given that the top part of Smashing Magazine’s home page consists of four large, hand-curated articles picked by Smashing’s editorial staff.

To handle these curated sections, the team created a data file, curated.yaml, in Hugo’s data folder with a structure as follows:

featured_articles:
- articles/2018-02-05-media-queries-in-2018.md
- articles/2018-01-17-understanding-using-rest-api.md
- articles/2018-01-31-comprehensive-guide-product-design.md
- articles/2018-02-08-freebie-hand-drawn-space-icons.md

The editorial staff manages this list of curated articles, and the home-page template uses this pattern to show the highlighted articles:

{{ range $index, $featured := .Site.Data.curated.featured_articles }}
  {{ range where $.Data.Pages "Section" "articles" }}
    {{ if eq $featured .Path }}{{ partial "featured-article" . }}{{ end }}
  {{ end }}
{{ end }}

This is the kind of loop-based query that you’ll run into often when working with static site generators. If Smashing Magazine was a dynamically generated site where these queries would be done at runtime, this would be an awful antipattern because looping through all articles once for every single featured article would put a strain on the database, slow down site performance, and potentially cause downtime during peak traffic.

There are places where we also need to watch out for long build times due to inefficient templates, but an example like this adds only milliseconds to the total build time of the site, and because the build is completely decoupled from the CDN delivery of the generated pages, we never need to worry about runtime costs of these kinds of queries.

Search

One thing we can’t handle by generating listings up front is site search, as depicted in Figure 6-1, because this inherently needs to be done dynamically.

For search, Smashing Magazine relies on a Software as a Service (SaaS) offering called Algolia that provides lightning-fast real-time search and integrates from JavaScript in the frontend. There are other alternatives like Lunr that don’t depend on any external services and instead generate a static index file that can be used on the client side to search for content. For large sites, however, a dedicated search engine like Algolia will perform better.

During each production build, a Gulp task runs that pushes all the content to Algolia’s search index via its API.
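A stripped-down version of such an indexing task might look like the following (the index name, field names, and the ALGOLIA_ADMIN_KEY environment variable are assumptions):

const algoliasearch = require('algoliasearch');

const client = algoliasearch(process.env.ALGOLIA_APP_ID, process.env.ALGOLIA_ADMIN_KEY);
const index = client.initIndex('smashing_articles');

// `articles` would be collected by reading the Markdown files during the build
function pushToAlgolia(articles) {
  const objects = articles.map(article => ({
    objectID: article.slug,
    title: article.title,
    description: article.description,
    categories: article.categories
  }));
  return index.saveObjects(objects);
}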

This is where our asset pipeline and frontend framework become important. Because the actual search implementation happens client side, talking directly to Algolia’s distributed search network from the browser, it was important to be able to build a clean component-based implementation of the search UI.

Smashing Magazine ended up using a combination of Preact and Redux for all the dynamic frontend components. We won’t go in-depth with the boilerplate code for setting up the store and binding components to elements on the page, but let’s take a look at how these kinds of components work.

The actual Algolia search is done within a Redux action. This allows you to separate the concern of talking to Algolia’s API and updating the global state of the app with the current search results from the concerns of how you present those results and of the UI layer that triggers a search.

Here’s the relevant action:

import algoliasearch from 'algoliasearch'

export const client = algoliasearch(ENV.algolia.appId, ENV.algolia.apiKey)
export const index  = client.initIndex(ENV.algolia.indexName)

export const search = (query, page = 0) => (dispatch, getState) => {
  dispatch({ type: types.SEARCH_START, query })
  index.search(query, { hitsPerPage: (page + 1) * 15, page: 0 }, (error, results) => {
    if (error) return dispatch({
      type: types.SEARCH_FAIL,
      error: formatErrorMessage(error)
    })
    dispatch({ type: types.SEARCH_SUCCESS, results, page })
  })
}

With this in place, you can bind an event listener to the search input on the site and dispatch the action for each keystroke with the query the user has typed:

props.store.dispatch(search(query))

You’ll find placeholder <div>s in the markup of the site, like:

<div data-component="SearchResults" data-lite="true"></div>

The main app.js will look for tags with data-component attributes and then bind the matching Preact component to those.

In the case of the search results with the data-lite attribute, a Preact component connected to the Redux store is bound to the element; it displays the first results from the store and links to the full search results page.

This concept of decorating the static HTML with Preact components is used throughout the Smashing project to add dynamic capabilities.
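The binding itself follows a common pattern; a simplified sketch of what app.js does (the components map and the store import are assumptions) looks like this:

import { h, render } from 'preact';
import store from './store'; // the Redux store created elsewhere
import SearchResults from './components/SearchResults';
// ... more component imports

const components = { SearchResults /*, Comments, Cart, ... */ };

// Find every placeholder element and mount the matching Preact component,
// passing its data-* attributes along as props.
document.querySelectorAll('[data-component]').forEach((el) => {
  const Component = components[el.getAttribute('data-component')];
  if (Component) {
    render(h(Component, { ...el.dataset, store }), el);
  }
});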

One small extra detail for the site search is a fallback to doing a Google site search if JavaScript is disabled, by setting an action on the form that wraps the search input field:

<form data-handler="Search" method="get"
      action="https://www.google.com/webhp?q=site:smashingmagazine.com">
  <label for="js-search-input"></label>
  <div class="search-input-wrapper">
    <input type="search" name="q" id="js-search-input" autocomplete="off"
           placeholder="Search Smashing..." aria-label="Search Smashing"
           aria-controls="js-search-results-dropdown" />
  </div>
</form>

In general, it’s advisable to always have fallbacks where possible when JavaScript is not available, while recognizing that for modern web projects JavaScript is the main runtime, and most dynamic interactions won’t work with the runtime disabled.

Content Management

There are a number of different approaches to content management on the JAMstack, but most involve a headless CMS. Traditionally, a CMS like Wordpress is both the tool used by admins to manage content and the tool used for building out the HTML pages from that content. A headless CMS separates those two concerns by focusing on only the content management, and delegating the task of using that content to build out a presentation layer to an entirely separate toolset.

You can find a listing of headless CMS offerings at https://headlesscms.org/. They’re divided into two main groups:

API-based CMSs

These are systems in which the content is stored in a database and made available through a web API. Content editors get a visual UI for authoring and managing content, and developers typically get a visual UI for defining content types and relations.

Contentful, DatoCMS, and GraphCMS are leaders in the space.

Git-based CMSs

These are tools that typically integrate with a Git provider’s (GitHub, GitLab, Bitbucket) API layer, or work on a filesystem that can be synchronized with Git, to edit structured content stored directly in your Git repository.

Netlify CMS is the largest open source, Git-based CMS, while CloudCannon and Forestry are the most popular proprietary CMS solutions based on this approach.

Both approaches have advantages.

API-based CMSs can be powerful when you’re consuming the content from many sources (mobile apps, different landing pages, Kiosk software, etc.), and they can be better geared to handle highly relational content for which constraints on relations need to be enforced by the CMS. The trade-off is that to use them from a static build process, we’ll always need a synchronization step to fetch all of the content from the remote API before running the build, which tends to increase the total build time.

Git-based CMSs integrate deeper into the Git-centric workflow and bring full version control to all your content. There are inherent advantages to having all content stored in a Git repository as structured data. As developers, we can use all the tools we normally use to work on text files (scripts, editors, grep, etc.), while the CMS layer gives content editors a web-based, content-focused UI for authoring or management.

Smashing Magazine chose to build the content-editing experience on the open source Netlify CMS to get the deepest integration into Git, the best local developer experience, and the fastest build times. For an authoring team as technical as Smashing Magazine, the option of sidestepping the CMS and working directly in a code editor to tweak code examples or embedded HTML is a huge advantage over any traditional, database-driven CMS.

Integrating Netlify CMS

When going from a working design or a pattern library to a CMS-backed site, the traditional approach was to integrate it into a CMS (i.e., building out a WordPress theme or the like). Netlify CMS reverses that process and instead allows you to pull the CMS into a site and configure it to work with the existing content structure.

The most basic integration of Netlify CMS consists of adding two files to a site: an /admin/index.html file that loads the CMS SPA and a /admin/config.yml configuration file where the content structure is described so that the CMS can edit it.

The base abstraction of Netlify CMS is a “collection,” which can be either a folder with entries all having the same content structure, or a set of different files for which each entry has its own content structure (often useful for configuration files and data files).

Here’s the part of the CMS configuration that describes the full setup for the main articles collection:

collections:
  - name: articles
    label: "Smashing Articles"
    folder: "site/production-articles"
    sort: "date:desc"
    create: true # Allow users to create new documents in this collection
    slug: "{{slug}}"
    fields: # The fields each document in this collection have
      - {label: "Title", name: "title", widget: "string"}
      - {label: "Slug", name: "slug", widget: "string", required: false}
      - {label: "Author", name: "author", widget: "relation", collection: "authors", searchFields: ["first_name", "last_name"], valueField: "title"}
      - {label: "Image", name: "image", widget: "image"}
      - {label: "Publish Date", name: "date", widget: "datetime", format: "YYYY-MM-DD hh:mm:ss"}
      - {label: "Quick Summary", name: "summary", widget: "markdown", required: false}
      - {label: "Excerpt", name: "description", widget: "markdown"}
      - {label: "Store as Draft", name: "draft", widget: "boolean", required: false}
      - {label: "Disable Ads", name: "disable_ads", widget: "boolean", required: false}
      - {label: "Disable Panels", name: "disable_panels", widget: "boolean", required: false}
      - {label: "Disable Comments", name: "disable_comments", widget: "boolean", required: false}
      - {label: "Body", name: "body", widget: "markdown"}
      - {label: "Categories", name: "categories", widget: "list"}

The structure of the entries is described by a list of fields, where each field has a widget that defines how the value should be entered.

Netlify CMS comes with a set of standard widgets for strings, markdown fields, relations, and so on.

Based on this configuration, the CMS will present a familiar CMS interface (see Figure 6-2) for editing any article stored in the site/production-articles folder in the repository with a visual editor.

Figure 6-2. Netlify CMS

Netlify CMS presents a live preview of the content being edited, but out of the box it was using a generic stylesheet that was not specific to the site.

Because Netlify CMS is a React SPA and very extensible, the best approach for the live preview is to provide custom React components for each collection and pull in the actual stylesheets from the site.

Smashing did this by setting up a cms.js entry that the Webpack part of the build would process, and then using that as the base for building out preview templates, custom widgets (a bit more on those when we dive into ecommerce later), and editor plug-ins to better support some of the shortcodes that content editors would otherwise need to know.

That looks roughly like the following:

import React from 'react';
import CMS from "netlify-cms";

import ArticlePreview from './cms-preview-templates/Article';
import AuthorPreview from './cms-preview-templates/Author';
// ... more preview imports

import {PriceControl} from './cms-widgets/price';
import {SignaturePlugin, PullQuotePlugin, FigurePlugin} from './cms-editor-plugins/plugins';

window.CMS.registerPreviewStyle('/css/main.css');
window.CMS.registerPreviewStyle('/css/cms-preview.css');

window.CMS.registerPreviewTemplate('articles', ArticlePreview);
window.CMS.registerPreviewTemplate('authors', AuthorPreview);
// ... more preview registrations

window.CMS.registerWidget('price', PriceControl);

window.CMS.registerEditorComponent(FigurePlugin);
window.CMS.registerEditorComponent(PullQuotePlugin);
window.CMS.registerEditorComponent(SignaturePlugin);

This imports the core CMS module, which initializes the CMS instance and loads the configuration, and then registers preview styles (the CSS to be included in the preview pane), preview templates (to customize previews for the different collections or data files), widgets (to provide custom input controls for this project), and editor components (to add custom buttons to the rich-text editor).

The preview components are pure, presentational React components that mimic the markup used for the real articles, authors, books, and so on.

A simplified version of the article component looks like this:

import React from 'react';
import format from 'date-fns/format';

import AuthorBio from './author-bio';

export default class ArticlePreview extends React.Component {
  render() {
    const {entry, fieldsMetaData, widgetFor, getAsset} = this.props;
    const data = entry && entry.get('data').toJS();
    const author = fieldsMetaData.getIn(['authors', data.author]);

    return <article className="block article" role="main">
      <div className="container">
        <div className="row">
          <div className="col col-12 col--article-head">
            {author && <AuthorBio author={author.toJS()} getAsset={getAsset}/>}
            <header className="article__header">
              <div className="article__meta">
                <time className="article__date">
                  <span className="article__date__month">{ format(data.date, 'MMMM') }</span>
                  { format(data.date, 'D, YYYY') }
                </time>
                <svg aria-hidden="true" style={{margin: '0 0.75em'}} viewBox="0 0 7 7" width="7px" height="7px">
                  <title>bullet</title>
                  <rect fill="#ddd" width="7" height="7" rx="2" ry="2"/>
                </svg>
                <span className="article__comments-count">
                <a href="#">
                  <span className="js-comments-count">0</span> Comments
                </a></span>
              </div>
              <h2>{ data.title }</h2>
              <div className="article__tags">
                {data.categories && data.categories.map((category) => (
                  <span className="article__tag" key={category}>
                    <a href="#">{ category }</a>
                    <sup className="article__tag__count">1</sup>
                  </span>
                ))}
              </div>
            </header>
          </div>
          <div className="col col-4 col--article-summary">
            <p className="article__summary">
              { data.description }
            </p>
          </div>
          <div className="col col-7 article__content">
            { widgetFor('body') }
          </div>
        </div>
      </div>
    </article>;
  }
}

Listings, Search, and Assets: Customizing the CMS for Large Projects

Out of the box, Netlify CMS uses GitHub’s content API to access all of the content, generate listings, and filter search queries. The standard setup also assumes that all media assets are stored in the Git repository.

For really large sites like Smashing Magazine, this approach begins breaking down. Smashing Magazine has more than a terabyte of assets in the form of uploaded images, media files, PDFs, and more, which is far more than it would ever want to store in its Git repository. Apart from that, GitHub’s API begins to break down for content listings when there are more than 2,000 files in a folder, which is obviously the case for Smashing Magazine’s article collection.

Search is not just an issue within the CMS, but also for the actual website, and this will typically be the case for any larger content-driven site. So why not use the same search functionality in both places?

We shared earlier how Algolia was used to add site search functionality to Smashing Magazine’s core site. Netlify CMS allows integrations to take over functionality like asset management, collection search, or collection listings. It also comes with a plug-in for using Algolia for all search and listings instead of building this on top of the Git repository folder listings. This makes listings snappy even for very large collections.

In a similar way, the media library functionality of Netlify CMS allows for integrations, and Smashing Magazine uses this to store assets in an external asset store where they are directly published to a CDN when uploaded. That way only references to the asset URLs are stored in the Git repository.

This extensible architecture allows Netlify CMS to scale to large production sites while sticking to the core principle of bringing content editors into the same Git-based workflow we take for granted as developers.

Every architectural decision we make as developers has some trade-offs. When we work with a Git-based architecture, we might need to supplement our toolchain with asset stores, external search engines, and the like, but few developers would voluntarily agree to the trade-offs that come with storing all their code in an SQL database. So, why do it for your content when that’s inherently the most important, core part of any website?

Identity, Users, and Roles

When working with traditional monolithic applications, we typically have a built-in concept of a user that’s backed by a database collection and integrated throughout the monolithic application. This was one of the challenges of Smashing Magazine’s previous architecture: each of the monolithic applications it relied on (Wordpress, Shopify, Kirby, Rails) had its own competing concept of a user.

But as the magazine moved from monoliths to composing individual microservices for all dynamic functionality, how could it avoid making this problem even worse?

JWTs and Stateless Authentication

An important part of the answer to this is the concept of stateless authentication. Traditionally, authentication was stateful in the sense that when doing a page request to a server-rendered app, we would set a cookie with a session ID and then the application would check that against the current state in the database and decide the role, display name, and so on of the user based on this lookup.

Stateless authentication takes a different approach. Instead of passing a session ID when we call a microservice, we pass a representation of the user. This could look something like the following:

{
 "sub": "1234",
 "email": "matt@netlify.com",
 "user_metadata": {
   "name": "Matt Biilmann"
 },
 "app_metadata": {
   "roles": ["admin", "cms"],
   "subscription": {
     "id": "4321",
     "plan": "smashing"
   }
 }
}

Now each microservice can look at this payload and act accordingly. If an action requires an admin role, the service can check whether app_metadata.roles in the user payload includes an admin role. If the service needs to send an order confirmation to the user, it can rely on the email property.

This means that each microservice can look for the payload information it needs without having any concept of a central database with user information and without caring about which system was used to authenticate the user.

However, we need a way to establish trust and ensure that the payload of the user really represents a real user in the system.

JSON Web Token (JWT) is a standard for passing along this kind of user payload while making it possible to verify that the payload is legitimate.

A JWT is simply a string of the form header.payload.signature, with each part base64-encoded to make it safe to pass around in HTTP headers or URL query parameters. The header and payload are JSON objects, and the signature is a cryptographic signature of the header and the payload.

The header specifies the algorithm used to sign the token (but the individual microservices should be strict about what algorithms they accept—the most common is HS256). Each token will be signed with either a secret or with a private key (depending on the algorithm used). This means that any microservice that knows the secret (or has a corresponding public key) can verify that the token is legitimate and trust the payload.
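For example, a Node-based microservice that knows the shared secret could verify the token and check a role in just a few lines. Here is a sketch using the widely used jsonwebtoken package; the handler shape and environment variable name are illustrative:

const jwt = require('jsonwebtoken');

function requireAdmin(req) {
  const token = (req.headers.authorization || '').replace('Bearer ', '');
  // verify() throws if the signature doesn't match or the token has expired
  const claims = jwt.verify(token, process.env.JWT_SECRET);

  const roles = (claims.app_metadata && claims.app_metadata.roles) || [];
  if (!roles.includes('admin')) {
    throw new Error('Admin role required');
  }
  return claims; // e.g., use claims.email to send an order confirmation
}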

Normally, a JWT will always have an exp property that sets the expiration time for the token. A short expiration time ensures that if a token is leaked in some way, there’s at most a short time window to exploit it and it can’t simply be used to take over the role of a user. For that reason, JWTs are typically paired with a refresh token that can be used to retrieve a new JWT from the authentication service as long as the user is still logged in.

The Auth0 team, which has championed this standard, maintains a useful resource (jwt.io) where you can find more in-depth information and debugging tools for inspecting and verifying tokens.

API gateways and token signatures

With a naive approach to stateless authentication, every microservice in the Smashing Magazine system would need to know the same secret in order to verify the user payload. This increases the risk of leaking the secret and means that every service would be capable of issuing its own valid JWTs instead of limiting the capability to just the main identity service.

One approach to get around this is to use a signing algorithm that relies on private/public cryptographic keypairs. Then, all the services will need to look up a public key and use it to verify that the signature was generated with a private part of the key pair. This is efficient but can introduce quite a bit of complexity around having to fetch and cache public keys and gracefully handle public key rotations.

Another approach is to have an API gateway between the client and the different microservices. This way, only the gateway needs to share the secret with the identity service and can then re-sign the JWT with a secret specific to each individual microservice. It allows the system to rotate secrets independently for different services without running the risk of sharing secrets across multiple services (where some might even be owned by third parties).
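Conceptually, the re-signing step at such a gateway is small; here is a sketch using the jsonwebtoken package (the secrets and the expiry time are illustrative):

const jwt = require('jsonwebtoken');

// Verify the token issued by the identity service, then re-sign the same
// claims with a secret known only to one downstream microservice.
function resignForService(incomingToken, identitySecret, serviceSecret) {
  const claims = jwt.verify(incomingToken, identitySecret);
  // Drop iat/exp so sign() can set a fresh expiry for the re-signed token
  const { iat, exp, ...payload } = claims;
  return jwt.sign(payload, serviceSecret, { expiresIn: '15m' });
}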

GoTrue and Netlify identity

Smashing Magazine uses an open source microservice, GoTrue, to handle user sign-up and logins. It’s a microservice written in Go with a small footprint.

The main endpoints for the microservice are:

  • POST /signup: Sign up a new user

  • POST /verify: Verify the email of a user after sign-up

  • POST /token: Issue a short-lived JWT plus a refresh token for a user

  • POST /logout: Revoke the refresh token of the user

All of these endpoints are consumed client side, and the login and sign-up forms for Smashing Magazine are again built as client-side Preact components.

When a user signs up for Smashing Magazine, the Preact component triggers a Redux action that uses the gotrue-js client library like this:

return auth.signup(data.email, data.password, {
  firstname: data.firstname,
  lastname: data.lastname
}).then(user => {
  return auth.login(data.email, data.password, true).then(user => {
    persistUserSession(user, noRedirect);
    return user;
  })
})

User sessions are saved in localStorage, so the UI can access the user payload to show the user’s name where relevant and to pass the JWT on to other microservices when calling them.
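The persistUserSession helper referenced in the sign-up action isn’t shown in this chapter; a minimal sketch of it (the storage key and redirect target are assumptions) might be:

// Keep the GoTrue user (including its tokens) around between page loads
// so components can read the name and request fresh JWTs.
function persistUserSession(user, noRedirect) {
  localStorage.setItem('gotrue.user', JSON.stringify(user));
  if (!noRedirect) {
    window.location.href = '/membership/';
  }
}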

We see later how the identity of a user can be represented as a JWT and used in tandem with the other services that back the dynamic functionality of Smashing Magazine.

The idea of stateless authentication is one of the fundamental architectural principles that makes modern microservice-based architecture viable—and it’s something core to the emergence of an ecosystem of microservices that can be combined and plugged together without any foreknowledge of one another, much in the same way that the Unix philosophy lets us use pipes to combine lots of small independent command-line tools.

Where the large monolithic apps each came with their own ecosystem inside (Wordpress plug-ins, RubyGems for Rails, etc.), we’re now seeing a new ecosystem emerge at the level of ready-made microservices, both open source and fully managed, that frontend developers can use to build large applications without any backend support.

Ecommerce

Smashing Magazine derives a large part of its income from its ecommerce store that sells printed books, ebooks, job postings, event tickets, and workshops.

Before the redesign, Smashing Magazine used Shopify, a large, multitenant platform that implements its own CMS, routing layer, template engine, content model, taxonomy implementation, checkout flow, user model, and asset pipeline.

As part of the redesign, Smashing Magazine implemented a small, open source microservice called GoCommerce. It weighs in at about 9,000 lines of code, including comments and whitespace.

GoCommerce doesn’t implement a product catalog and it has no user database, routing logic, or template engine. Instead, its essence is two API calls:

  • POST /orders

    Create a new order with a list of line items, for example: {"line_items": ["/printed-books/design-systems/"]}

    GoCommerce fetches each path on the site and extracts metadata with pricing data, product type (for tax calculations), title, and so on and constructs an order in its database. The website itself is the canonical product catalog.

  • POST /orders/:order_id/payments

    Pay for an order with a payment token coming from either Stripe or PayPal.

GoCommerce will validate that the amount matches the calculated price based on the product metadata and tax calculations (as well as any member discounts, coupons, etc.).

The product catalog is built out by Hugo during the build step, and all products are managed with Netlify CMS. A simplified version of an ebook file stored in the Git repository looks like this:

---
title: 'A Career On The Web: On The Road To Success'
sku: a-career-on-the-web-on-the-road-to-success
image: >-
  //cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/133e54ba-e7b5-40ff-957a-f42a84cf79ff/on-the-road-to-success-365-large.png
description: >-
  There comes a time in everyone's career when changing jobs is
  the natural next step. But how can you make the most of this
  situation and find a job you'll love?
price:
  eur: '4.99'
  usd: '4.99'
published_at: March 2015
isbn: 978-3-945749-17-3
files:
  pdf: >-
    //api.netlify.com/api/v1/sites/344dbf88-fdf9-42bb-adb4-46f01eedd629/assets/58eddda0351deb3f6122b93c/public_signature
---

<p>There comes a time in everyone's career when changing jobs 
is the natural next step. Perhaps you're looking for a new
challenge or you feel like you've hit a wall in your current 
company? Either way, you're standing at a crossroad, with an 
overwhelming amount of possibilities in front of you. But how 
can you make the most of this situation? How can you <strong>
find a job you will truly love</strong>?</p>

When Hugo builds out the detail page for an ebook, it outputs a <script> tag with metadata used for GoCommerce that looks like this:

<script class="gocommerce-product" type="application/json">
{
  "sku": "a-career-on-the-web-assuming-leadership",
  "title": "A Career On The Web: Assuming Leadership",
  "image": "//cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/3eb5e393-937e-46ac-968d-249d37aebcef/assuming-leadership-365-large.png",
  "type": "E-Book",
  "prices": [
    {"amount": "4.99", "currency": "USD"},
    {"amount": "4.99", "currency": "EUR"}
  ],
  "downloads": [
    {"format": "pdf", "url": "//api.netlify.com/api/v1/sites/344dbf88-fdf9-42bb-adb4-46f01eedd629/assets/58edd9f7351deb3f6122b911/public_signature"}
  ]
}
</script>

This is the canonical product definition that will be used both when adding the product to the cart client side and when calculating the order price in GoCommerce.

GoCommerce comes with a small JavaScript library that helps with managing a shopping cart, does the same pricing calculations client-side as GoCommerce will do server side when creating an order, and handles the API calls to GoCommerce for creating orders and payments and querying order history.

When a user clicks a “Get the e-book” button, a Redux action is dispatched:

gocommerce.addToCart(item).then(cart => {
 dispatch({
   type: types.ADD_TO_CART_SUCCESS,
   cart,
   item
 })
})

The item we add to the cart has a path, and the GoCommerce library will fetch that path on the site with an Ajax request, look for the script tag with the gocommerce-product class, extract the metadata for the product, run the pricing calculations, and store the updated shopping cart in localStorage.
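In essence, that lookup is a fetch plus a DOM query; a simplified sketch of what the library does under the hood:

// Fetch the product's page and read the embedded GoCommerce metadata.
function loadProductMeta(path) {
  return fetch(path)
    .then((response) => response.text())
    .then((html) => {
      const doc = new DOMParser().parseFromString(html, 'text/html');
      const script = doc.querySelector('script.gocommerce-product');
      return JSON.parse(script.textContent);
    });
}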

Again, we have a Preact component connected to the Redux store that will display the shopping cart in the lower-left corner of the page if there are any items in the store.

The entire checkout process is implemented completely client side in Preact. This proved tremendously powerful for the Smashing team because it could customize the checkout flows in many different ways: the flow for a ticket could be tweaked to capture attendee details, the flow for a job posting to capture the job description and settings, and the flow for products requiring shipping could differ from the flow for purely digital products like ebooks.

There’s an open source widget available at https://github.com/netlify/netlify-gocommerce-widget that has an example of this kind of checkout flow implemented in Preact + MobX, and you can even use it as a plug-and-play solution for a GoCommerce site.

During the checkout flow, Preact forms capture the user’s email, billing, and shipping address, and the user can either select PayPal or credit card as a payment option. In either case their respective JavaScript SDKs are used client side to get a payment token for the order. The last part of the checkout flow is a confirmation screen with the order details. When the user selects “Confirm,” the following Redux action takes care of the actual sale:

export const putOrder = (dispatch, getState) => {
  const state = getState()
  const { order, cart, auth } = state

  dispatch({ type: types.ORDER_CONFIRM_START })

  // Create a new order in GoCommerce
  gocommerce.order({
    email: order.details.email,
    shipping_address: formatAddress(order.details.shippingAddress),
    billing_address: formatAddress(order.details.billingAddress)
  }).then(result => {
    // Create a payment for the order with the Stripe token or PayPal payment
    return gocommerce.payment({
      order_id: result.order.id,
      amount: result.cart.total.cents,
      provider: order.paypal ? 'paypal' : 'stripe',
      stripe_token: order.key,
      paypal_payment_id: order.paypal && order.paypal.paymentID,
      paypal_user_id: order.paypal && order.paypal.payerID
    }).then(transaction => {
      // All done, clear the cart and confirm the order
      if (auth.user) dispatch(saveUserMetadata(order.details))
      dispatch(emptyCart)
      dispatch(clearOrder)
      dispatch({
        type: types.ORDER_CONFIRM_SUCCESS,
        transaction: Object.assign({}, transaction, { order: result.order }),
        cart
      })
    })
  }).catch(error => dispatch({
    type: types.ORDER_CONFIRM_FAIL,
    error: formatErrorMessage(error)
  }))
}

There’s a bit to digest there, but the core of it comes down to first calling gocommerce.order with the email and shipping or billing addresses of the user, and then the GoCommerce library adds the items from the cart to the order.

As long as that goes well, we then call gocommerce.payment for the order with the amount we’ve shown to the user client side and a payment method and token.

GoCommerce looks up the actual line items from the order; does all the relevant pricing calculations, taking into account taxes, discounts, and coupons; and verifies that the amount we’ve shown client side matches the calculated order total. If all is well, GoCommerce triggers a charge with the relevant payment method. If that works, it sends out a confirmation email to the user and an order notification to the shop administrator.

Identity and Orders

GoCommerce has no user database and no knowledge about the identity service in use. At the same time, the setup needs to ensure that logged-in users are able to access their order history and previous shipping or billing addresses.

Stateless authentication to the rescue: you can configure GoCommerce with a JWT secret, and then any API request signed with a JWT that can be verified with that secret will be associated with a user, based on the sub attribute of the JWT payload. Note that the sub attribute is part of the JWT standard and indicates the unique ID of the user identified by the token.

Client side, this is handled in the JS library by calling gocommerce.setUser(user) with a user object that responds to the user.jwt() method, which returns a token wrapped in a promise. The promise part is important because the user object might need to exchange a refresh token for a valid JWT in the process.

After the user is set, the GoCommerce library signs all API requests with a JWT.

Calling the GET /orders endpoint with a JWT returns a listing of all the orders belonging to the user identified by the token. Using that endpoint and the identity service, Smashing Magazine could build out the entire order history panel client side with Preact components.
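A sketch of that client-side call (the endpoint URL shown here is illustrative):

// List the logged-in user's orders by signing the request with their JWT.
function fetchOrderHistory(user) {
  return user.jwt().then((token) =>
    fetch('/.netlify/gocommerce/orders', {
      headers: { Authorization: `Bearer ${token}` }
    }).then((response) => response.json())
  );
}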

Membership and Subscriptions

The introduction of a membership feature was one of the biggest motivations for Smashing Magazine’s decision to relaunch its site and build a new architectural platform from the ground up. It wanted to offer readers a three-tiered subscription plan.

So far, we’ve seen that the magazine tackled identity and ecommerce with open source microservices. The implementation of subscriptions introduces a new pattern that’s becoming an increasingly popular component of JAMstack projects: using serverless functions to glue together existing services and introduce new functionality.

Serverless is often a contested term. Obviously, servers are still involved somewhere, but the option to simply write some glue code without having to worry about where and how that code is executed can be an incredibly powerful part of our toolbox.

At this phase of the migration, Smashing Magazine had all of the pieces that needed to be stitched together in place: GoTrue for identifying users, Stripe for accepting payments and handling recurring billing, MailChimp to handle email lists with groups for member and plan-specific emails, and Netlify’s CDN-based rewrite rules to selectively show different content to users with different roles.

To build a fully-fledged subscription service, all it needed was a bit of glue code to tie all of these together. AWS Lambda functions proved to be a great way to run this code without having to worry about running a server-side application somewhere.

To become a member, a user needs to fill out the sign-up form shown in Figure 6-3.

Let’s go through what happens when this Preact component-based form is filled out and the user clicks the action button.

Assuming that all of the data entered passes the basic client-side validations, a new user is created in GoTrue via the sign-up endpoint and the script gets hold of a JWT representing the new user.

Figure 6-3. Smashing Magazine sign-up form

Then, Stripe’s “Checkout” library is used to exchange the credit card details for a Stripe token.
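A hedged sketch of these two preconditions, assuming the gotrue-js client and Stripe’s legacy Checkout library (the identity URL, publishable key, plan amount, and the callMembershipFunction helper are illustrative assumptions):

import GoTrue from "gotrue-js";

// email, password, and planAmount come from the sign-up form;
// the APIUrl here is an assumption
const auth = new GoTrue({ APIUrl: "https://example.com/.netlify/identity" });

// 1. Create the user in GoTrue, log in, and get hold of a JWT for them
auth
  .signup(email, password)
  .then(() => auth.login(email, password))
  .then(user => user.jwt())
  .then(token => {
    // 2. Exchange the card details for a one-time Stripe token via Checkout
    const checkout = StripeCheckout.configure({
      key: STRIPE_PUBLISHABLE_KEY, // publishable key, safe to expose client side
      token: stripeToken => callMembershipFunction(token, stripeToken.id)
    });
    checkout.open({ name: "Smashing Membership", amount: planAmount });
  });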

After both of these preconditions succeed, a Redux action is triggered, calling a Lambda function deployed via Netlify and invoked by an HTTP request:

fetch('/.netlify/functions/membership', {
 headers: {Authorization: `Bearer ${token}`},
 method: 'POST',
 body: JSON.stringify({ plan, stripe_token: key })
})

The Lambda function handler then triggers a method called subscribeToPlan that looks like this:

function subscribeToPlan(params, token, identity) {
  // Fetch the user identified by the JWT from the identity service
  return fetchUser(token, identity).then(user => {
    console.log("Subscribing to plan for user: ", user);

    // Check if the user is already linked to a Stripe Customer;
    // if not, create a new customer in Stripe
    const customer_id = user.app_metadata.customer
      ? user.app_metadata.customer.id
      : createCustomer(user);
    return Promise.resolve(customer_id)
      .then(customer_id => {
        // Check if the user has an existing subscription
        if (user.app_metadata.subscription) {
          // Update the existing Stripe subscription
          return updateSubscription(
            customer_id,
            user.app_metadata.subscription.id,
            params
          ).then(subscription => {
            // Add the user to the MailChimp list and the right group
            addToList(user, params.plan);
            // Notify about the change of plan in Slack
            sendToSlack(
              'The user ' + user.email + ' changed from ' +
              user.app_metadata.subscription.plan + ' to ' + params.plan
            );
            return subscription;
          });
        }

        // No existing subscription, create a new subscription in Stripe
        return createSubscription(customer_id, params).then(subscription => {
          // Add the user to the MailChimp list and the right group
          addToList(user, params.plan);
          // Notify about the new subscriber in Slack
          sendToSlack(
            'Smashing! The user ' + user.email + ' signed up for a ' +
            params.plan + ' plan'
          );
          return subscription;
        });
      })
      // In the end, update the user in the identity service.
      // This will update app_metadata.customer and app_metadata.subscription
      // for the user
      .then(subscription => updateUser(user, subscription, identity));
  });
}

There are a few things happening here. For each of the external services, like Stripe, MailChimp, and Slack, the Lambda function has access to the relevant API keys and secrets via environment variables. When working with different external APIs, this is one of the key things that we can’t do in client-side JavaScript, because no variable exposed there can be kept secret.
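For instance, at the top of such a function the secrets might be pulled in like this (the variable names are assumptions; only the pattern matters):

// Secrets live in the function's environment, never in client-side code
const stripe = require("stripe")(process.env.STRIPE_SECRET_KEY);
const mailchimpApiKey = process.env.MAILCHIMP_API_KEY;
const slackWebhookUrl = process.env.SLACK_WEBHOOK_URL;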

In this case, the behavior of the identity service is a little different. When a Lambda function is deployed via Netlify on a project that has an identity service, the function will have privileged access to the identity service. This is a common pattern, and you can similarly set up Lambda functions using Amazon CloudFront and Amazon API Gateway together with AWS’s identity service (Cognito) to gain privileged access.

Note how the user metadata is fetched from the identity server instead of relying on the data from the JWT. Because JWTs are stateless and have a lifetime, there are cases for which the information could be slightly out of date, so it’s important to check against the current state of the user.

The effect of the Lambda function is to use the Stripe token to create a recurring subscription in Stripe and add the Stripe customer and subscription ID to the app_metadata attribute of the user. It also subscribes the user’s email address to a MailChimp list and triggers a notification in an internal Slack channel about the new subscriber.

After a subscriber has an app_metadata.plan attribute, Smashing Magazine takes advantage of Netlify’s JWT-based rewrite rules to show different versions of the membership pages depending on the plan of the logged-in user. The same could be achieved with other CDNs that allow edge logic around JWTs, edge Lambda functions, or similar edge functions.
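As a rough illustration of the idea (the paths and role name are assumptions, not Smashing Magazine’s actual rules), a role-based rule in a Netlify _redirects file can look something like this:

# Members get the full page; everyone else falls through to the public teaser
/membership/*   /membership/:splat           200!   Role=subscriber
/membership/*   /membership-preview/:splat   200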

This highlights how something like a full membership engine was built with almost no custom server-side code (the full membership.js file is 329 lines long), and with a serverless stack where the developers on the project never had to worry about operational concerns around their code.

Tying It Together: Member Discounts in GoCommerce

A selling point of memberships is discounts on books, ebooks, and conference tickets. But how do we approach this in the JAMstack world of small, independent microservices?

Although GoCommerce has no knowledge of the existence of the GoTrue-based identity service, it understands the concept of JWTs and can relate an order to a user ID by looking at the sub property of the token payload.

In the same way, GoCommerce can apply discounts to orders if a predefined discount matches the token payload. But where do we define the discounts?

Again, GoCommerce uses the technique of making the website the single source of truth. Just like GoCommerce uses the website as the authoritative product database by looking up product metadata on a path, it also loads a /gocommerce/settings.json file from the website and refreshes this regularly.

The settings.json holds all the tax settings that both GoCommerce and the gocommerce-js client library use for pricing calculation. It can also include a member_discounts setting looking something like this:

"member_discounts": [
 {
   "claims": {"app_metadata.subscription.plan": "member"},
   "fixed": [
     {"amount": "10.00", "currency": "USD"},
     {"amount": "10.00", "currency": "EUR"}
   ],
   "product_types": ["Book"]
 }
]

This tells GoCommerce that any logged-in user who has a JWT payload looking something like what follows should get a discount on any line item with the type “Book” (determined from the product metadata on the product page) of either $10 or 10€ depending on the currency of the order:

{
 "email": "joe@example.com",
 "sub": "1234",
 "app_metadata": {"subscription": {"plan": "member"}}
}

Note again how there’s no coupling at all between GoCommerce and the identity service or the format in which membership information is represented in the JWT payload. Any pattern in the claims can be used to define a user status that should receive special discounts. GoCommerce also doesn’t care how the settings.json is managed. We could easily set up Netlify CMS to provide a visual UI for managing it, generate it from data stored in an external API, fetch the data from a Google Sheet and generate the JSON, or have developers edit it by hand. All of these concerns are kept completely separate.

Job Board and Event Tickets: AWS Lambda and Event-Based Webhooks

In the previous section, we saw how GoCommerce could act on a user’s membership plan and calculate discounts, without any coupling between the membership implementation, the identity service, and GoCommerce.

The implementation of Smashing Magazine’s job board highlights a similar concern. In this case, some action needs to happen when someone buys a product with the type “Job Posting,” but because GoCommerce is a generic ecommerce microservice, it has no knowledge of what a “Job Posting” is or how to update a job board.

However, GoCommerce has a webhook system that allows it to trigger HTTP POST requests to a URL endpoint when an order has been created. Webhook requests are signed with a JWT-based signature, allowing the service receiving the request to verify that it’s generated from an actual GoCommerce order and not by someone triggering the hook directly.
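A hedged sketch of what that verification could look like at the top of a webhook handler (the header name and environment variable are assumptions; only the JWT-signed-webhook idea comes from the description above):

const jwt = require("jsonwebtoken");

exports.handler = async event => {
  // Reject anything that wasn't signed with GoCommerce's webhook secret
  const signature = event.headers["x-webhook-signature"];
  try {
    jwt.verify(signature, process.env.WEBHOOK_JWT_SECRET);
  } catch (error) {
    return { statusCode: 401, body: "Invalid webhook signature" };
  }
  // ...safe to process the order payload from here on
};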

This again offers us the ability to glue loosely coupled components together by using serverless functions.

Smashing Magazine approached its job board as it did any other listing/detail part of its site: a collection of markdown files with frontmatter, built out by Hugo.

Instead of somehow integrating job board functionality into GoCommerce, the team deployed an AWS Lambda function that serves as the webhook endpoint for GoCommerce payments. This means that each time an order has been completed and paid for, GoCommerce triggers the Lambda function with a signed request. If the “type” of the product in the event payload is a “Job Posting,” a processJob method is triggered:

function processJob(payload, job) {
  console.log("Processing job", job.meta);
  // Format the job metadata as YAML frontmatter followed by the description
  const content = `---\n${yaml.safeDump({
    title: job.meta.title || null,
    order_id: payload.order_id || null,
    date: payload.created_at || null,
    logo: job.meta.logo || null,
    commitment: job.meta.commitment || null,
    company_name: job.meta.company_name || null,
    company_url: job.meta.company_url || null,
    jobtype: job.meta.jobtype || null,
    location: job.meta.location || null,
    remote: job.meta.remote || null,
    application_url: job.meta.application_url || null,
    featured: job.meta.featured || false
  })}\n---\n\n${job.meta.description || ""}`;
  // Each job posting becomes a dated markdown file in the Hugo content folder
  const path = `site/content/jobs/${format(
    payload.created_at,
    "YYYY-MM-DD"
  )}-${payload.id}-${urlize(job.title)}.md`;
  const branch = "master";
  const message = `Create Job Post
This is an automatic commit creating the Job Post:
"${job.meta.title}"`;
  // Commit the new file to the site repository via GitHub's content API
  return fetch(
    `https://api.github.com/repos/smashingmagazine/smashing-magazine/contents/${path}`,
    {
      method: "PUT",
      headers: { Authorization: `Bearer ${process.env.GITHUB_TOKEN}` },
      body: JSON.stringify({
        message,
        branch,
        content: Buffer.from(content).toString("base64")
      })
    }
  );
}

This extracts the metadata related to the job posting from the order object, formats it into markdown with YAML frontmatter, and then uses GitHub’s content API to push the new job posting to the main Smashing Magazine repository.

This in turn triggers a new build, where Hugo will build out the updated version of the job board and Netlify will publish the new version.

In a similar way, orders for event tickets trigger a function that adds the attendee information to a Google Sheet for the event that the Smashing team uses to print badges and verify attendees at the event check-in. All of this happens without any need for GoCommerce to have any concept of a ticket or an event besides the product definition in the metadata on each event page.
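A hedged sketch of that ticket function’s core step, assuming the googleapis client library (the sheet ID, range, and field names are illustrative assumptions):

const { google } = require("googleapis");

async function addAttendee(order, item) {
  // Authenticate with whatever Google credentials the function has configured
  const auth = await google.auth.getClient({
    scopes: ["https://www.googleapis.com/auth/spreadsheets"]
  });
  const sheets = google.sheets({ version: "v4", auth });
  // Append one row per attendee to the event's badge-printing sheet
  return sheets.spreadsheets.values.append({
    spreadsheetId: process.env.EVENT_SHEET_ID,
    range: "Attendees!A:C",
    valueInputOption: "USER_ENTERED",
    requestBody: { values: [[order.email, item.title, order.id]] }
  });
}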

Workflows and API Gateways

This case study has shown a consistent pattern of a core, prebuilt frontend, using loosely coupled microservices, glued together with serverless functions either as mini-REST endpoints or as event-triggered hooks.

This is a powerful approach and a big part of the allure of the JAMstack. Using a CDN-deployed static frontend that talks to dynamic microservices and uses serverless functions as a glue layer, small frontend teams can take on large projects with little or no operations and backend support.

Pulling this off does require a high level of maturity in automation, API routing, secrets management, and orchestration of your serverless code, with a viable workflow and a solid, maintainable stack.

Making the CMS integration work, keeping indexing to an external search engine viable, and powering the Git-based job board required a tightly integrated continuous deployment pipeline for the Gulp-based build and ample support for staging environments and pull request previews.

The integrated, lightweight API gateway layer was necessary to manage routing to the different microservices and ensure one set of services was used from pull request or staging deploys while the production instances were used from master branch deploys.

In this case, Netlify’s gateway layer also handled key verification at the edge, so each microservice could have its own JWT secret and only the CDN had the main identity secret. This allows the CDN to verify any JWT from the identity service in a call to a service and swap it for a JWT with the same payload, signed with the secret specific to the individual service.
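Conceptually (this is only a sketch of the idea, not Netlify’s actual implementation), the swap at the edge amounts to something like:

const jwt = require("jsonwebtoken");

// Verify the incoming identity token, then re-sign the same claims with
// the secret that the downstream service (e.g., GoCommerce) knows about
function swapToken(identityToken) {
  const claims = jwt.verify(identityToken, process.env.IDENTITY_JWT_SECRET);
  return jwt.sign(claims, process.env.GOCOMMERCE_JWT_SECRET);
}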

It also proved key to have an integrated workflow for deploying the serverless functions together with the frontend and for automating the routing and invocation layer, so that testing an order checkout flow with a deploy preview of a pull request would trigger the pull request–specific version of the webhook Lambda function.

We can’t imagine that anyone working on a large backend or infrastructure system based on a microservice architecture would dream of doing this without a solid service discovery layer. In the same way, we see service discovery for the frontend becoming not only more relevant, but essential as developers move toward decoupled, microservice-based architectures for their web-facing properties.

Deploying and Managing Microservices

The new Smashing Magazine is essentially a static frontend served directly from a CDN, talking through a gateway to various microservices. Some of these are managed services like Stripe or Algolia, but the project also ran several open source microservices: GoTrue, GoCommerce, Git Gateway, and GoTell. In the beginning, the different webhook services were deployed as one service called Smashing Central.

At the start of the project, all of these were deployed to Heroku and used Heroku’s managed Postgres database for the persistence layer.

One thing that’s essential when working with a microservice-based architecture is that each microservice should own its own data and not share tables with other services. Having multiple services share the same data is an antipattern: it makes deploying changes with migrations, or moving the hosting of individual services, a cumbersome process. Suddenly all the microservices are tightly coupled and you’re back to building a monolith, but with more orchestration overhead.

After Netlify launched managed versions of GoTrue, GoCommerce, and Git Gateway, we migrated the endpoints there. These services run in a Kubernetes cluster, and this is becoming a more and more standard setup for the microservice layer in the JAMstack. The team could do this as a gradual process, moving these services one by one to the managed stack without interruption, given that there was no coupling between them.

The Smashing Central service, the one microservice written specifically for the Smashing Magazine project, was eventually replaced completely with Lambda functions deployed through Netlify together with the frontend.

We generally see serverless functions as the preferred option whenever there are small custom microservices that don’t need a persistence layer, with Kubernetes (often in managed versions) emerging as the main deployment target for services with too large a surface area to be a good fit for systems like AWS Lambda.

Summary

Smashing Magazine moved from a system built from four traditional monolithic applications, with four independent frontend implementations of the same visual theme adapted to the quirks of each platform, to a setup in which the entire implementation is driven by a single static frontend talking to a set of microservices, with a few serverless functions as the glue in the middle.

The team that built this project was very small and entirely frontend focused. All of the backend services were either open source microservices (GoTrue and GoCommerce) or third-party managed services (Stripe, Algolia, and MailChimp). The total amount of custom, server-side code for this entire project consisted of 575 lines of JavaScript across 3 small AWS Lambda functions.

Any interaction a visitor has with the site while browsing magazine content, viewing the product catalog, or perusing the job board for new opportunities happens without any dynamic, server-side code—it’s just a static frontend loaded directly from the closest CDN edge node. It is only when an action is taken (confirming an order, posting a comment, signing up for a plan, etc.) that any dynamic code is used, either in the form of microservices or Lambda functions.

This makes the core experience incredibly performant and removes all maintenance concerns from essential parts of Smashing Magazine. Before the change, the kind of viral traffic that Smashing Magazine frequently received would cause reliability issues regardless of the number of Wordpress caching plug-ins in use. Plus, the work required to constantly keep Wordpress (and Rails and Kirby) up to date without breaking the site ate up most of the budget for site improvements or updates.

With the new site, it’s been straightforward for the team to work on new sections, design tweaks, cute little touches to the different checkout flows, and performance improvements. Anyone can run the full production site locally just by doing a Git clone of the repository and spinning up a development server, with no need to set up development databases. Any pull request to the site is built to a new preview URL where it can be browsed and tested before being merged in—including any new article that the team is preparing to publish.

The Git-based CMS has proved a great approach, allowing web-based content editing with live previews while also letting developers dive in through their code editors. Having all files as plain text in a Git repository makes scripting Bash operations (like inserting ad panels or doing content migrations) easy and brings full revision history to all content edits.

Though tools like Netlify CMS and the GoCommerce store admin are not yet as mature as tools with 15-plus years of history, and there are still some rough edges to be worked out, there is no doubt that the Smashing Magazine team has benefited significantly from this shift. And perhaps more important, so have its readers.