To adopt the JAMstack for your next project, your team will need to have a good mental model of what the JAMstack means. There’s one commonly held mental model to watch out for, as it’s not quite correct:
The JAMstack delivers static sites.
This is a convenient, bite-sized summary. But it misses the mark.
Sites designed for the JAMstack are sites that are capable of being served with static hosting infrastructure, but they can also be enriched with external tools and services.
This small shift in mindset is a powerful thing. The term static site suggests something that rarely changes and offers no rich interactions. But JAMstack sites go far beyond offering static experiences, whether through rapid iterations made possible by build automation and low-friction deployments, or through interactive functionality delivered by microservices and external APIs.
The ability to serve the frontend of a site from CDN infrastructure optimized for static assets should be viewed as a superpower rather than a limitation.
Although traditional, monolithic software platforms must introduce layers of complexity for the sake of performance, security, and scale, JAMstack sites offer these qualities by default. And because their technical design is based on the premise of being hosted statically, we can avoid many of the traps that traditional infrastructure falls into as it is scaled and hardened.
The six best practices presented in the following sections can help you serve your sites from static infrastructure (without frontend web servers) and take full advantage of what the JAMstack makes possible.
A Content Delivery Network (CDN) is a geographically distributed group of servers that allows you to cache your site’s content at the network edge, effectively bringing it much closer to users. Because the markup is prebuilt, JAMstack sites don’t rely on server-side code and lend themselves perfectly to being hosted on distributed CDNs without the need for origin infrastructure.
A good CDN will have points of presence around the globe and is therefore able to serve your content from locations geographically close to your users, wherever they might be. This has a number of benefits.
First, this promotes good performance. The shorter the distance between the user making a request for content and the server that responds to that request, the faster bytes can get from one to the other.
Second, by replicating the content being served across many CDN nodes around the world, redundancy is also introduced. Certainly, we’d want to serve content to users from the CDN node closest to them, but should that fail for any reason, content can instead be served from the next nearest location. The network aspect of a CDN provides this redundancy.
Third, by becoming the primary source for the content being served to your users, a CDN prevents your own serving infrastructure from becoming a single point of failure (SPoF) while responding to requests. Do you recall the fear of your site quickly becoming popular and attracting massive traffic levels? Oh, the horror! People would talk about the ability to “survive being Slashdotted” or reaching mass adoption.
Designing a web architecture to be resilient to large traffic loads and sudden spikes has traditionally called for the addition of lots of infrastructure. When your site isn’t equipped to offload the serving of requests to a CDN, you must shoulder this responsibility yourself. Often this results in an architecture that involves multiple web servers, load balancers, local caching mechanisms, capacity planning, and various layers of redundancy. When the site is being served dynamically in response to individual requests, the coordination of these resources can become very complex. Specialist teams are generally required to do this properly at scale, and their job is never complete, given the need for ongoing monitoring and support of the machinery that services every request for a site.
Meanwhile, CDNs are in the business of servicing high volumes of traffic on our behalf. They are designed to offer redundancy, invisible and automatic scaling, load balancing, caching, and more. The emergence of specialist CDN products and services brings opportunities to outsource all of this complexity to companies whose core business is providing this type of resilience and scale in ways that do not require you to manage the underlying infrastructure.
The more of our site’s assets we can get onto a CDN, the more resilient it will be.
How do we design our applications to get the best results from using a CDN?
One approach is to serve just the containing user interface (UI) statically from the CDN and then enrich it entirely with client-side JavaScript calls to content APIs and other services. This works wonderfully for some applications or sites where the user is taking actions rather than primarily consuming content. However, we can improve performance and resilience if we deliver as much content as possible in the document as prerendered HTML. Doing this means that we do not depend on additional requests to other services before core content can be displayed, which is good for performance and fault tolerance.
The more complete the views we can deliver to the CDN and pass on to the browser, the more resilient and performant the result is likely to be. Even though we might be using JavaScript in the browser to enrich our views further, we should not forget the principles of progressive enhancement, and aim for as much as possible to be delivered prerendered in the initial view of each page.
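To make the distinction concrete, here is a minimal sketch of that enrichment pattern: the page arrives as prerendered HTML, and a small script layers fresh data on top of it once it has loaded. The API endpoint and element id are hypothetical placeholders; if the script fails or never runs, the prerendered content is still there.

// Progressive enhancement sketch: enrich prerendered HTML after load.
// The endpoint and element id are illustrative, not a real service.
document.addEventListener('DOMContentLoaded', async () => {
  const container = document.getElementById('latest-comments');
  if (!container) return;

  try {
    const response = await fetch('https://api.example.com/comments?page=home');
    if (!response.ok) return; // keep the prerendered fallback content
    const comments = await response.json();

    // Swap the prerendered placeholder for fresh data from the API.
    container.innerHTML = comments
      .map((c) => `<li>${c.author}: ${c.body}</li>`)
      .join('');
  } catch (err) {
    console.warn('Enhancement skipped:', err); // prerendered HTML still shows
  }
});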
When designing the technical approach to web applications, there are a few commonly used terms that seem to be regularly confused. We’ve alluded to a couple of them in the previous chapters, but it is worth taking a moment to clarify the differences between some of these common terms:
Client-side rendering
Server rendering
Prerendering
Client-side rendering is performed in the browser with JavaScript. This term does not refer to the work that browsers do by default to render HTML documents; instead, it describes the process by which JavaScript is used to manipulate the Document Object Model (DOM) and dynamically change what the browser displays. It has the advantage of allowing user actions to immediately influence what is displayed, and it can also be very efficient at combining data from many different sources into templated views. Web browsers, though, are not as tolerant of errors in JavaScript as they are of errors in HTML. Where possible, it is prudent to deliver as much content as possible already rendered into HTML on the server, which can then be further enriched and enhanced with JavaScript in the browser.
Server rendering refers to the process whereby a server responds to requests for content by compiling (rendering) the required content into HTML and delivering it on demand for the browser to display. The term is often used as the antithesis of client-side rendering, in which data (rather than ready-generated HTML) is delivered to the browser, where JavaScript must then interpret it and render it into HTML or manipulate the browser’s DOM directly. (Client-side rendering such as this is common in single-page applications [SPAs].) Server rendering typically happens just in time, in response to each request as it is made.
Prerendering performs the same task of compiling views of content as we see in server rendering, but this is carried out only once and in advance of the request for the content. Rather than awaiting the first request for a given piece of content before we know if the result will be correct, we can determine this at build time.
Decoupling this rendering, or generation of views, from the timing (and machinery) of the request allows us to expose the risk early and tackle any unknowns of this operation far in advance of a user requesting the content. By the time requests for prerendered content arrive, we are in a position to simply return the prebaked response without the need to generate it each time.
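As a rough illustration of prerendering, the following Node.js sketch compiles a handful of pages into plain HTML files at build time. The page data and output paths are invented for the example; a real static site generator would do the same job with templates and content files.

// Build-time prerendering sketch (Node.js): render every view once, ahead
// of any request, and write static HTML that a CDN can serve as-is.
const fs = require('fs');
const path = require('path');

const pages = [
  { slug: 'index', title: 'Home', body: 'Welcome to the site.' },
  { slug: 'about', title: 'About', body: 'We build JAMstack sites.' },
];

const render = (page) => `<!doctype html>
<html>
  <head><title>${page.title}</title></head>
  <body><h1>${page.title}</h1><p>${page.body}</p></body>
</html>`;

fs.mkdirSync('dist', { recursive: true });
for (const page of pages) {
  fs.writeFileSync(path.join('dist', `${page.slug}.html`), render(page));
}
console.log(`Prerendered ${pages.length} pages into ./dist`);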
By generating a site comprising assets that can be served statically from a CDN, we have set ourselves on an easier path to optimizing for performance and sidestepping many common security attack vectors. But it is very likely that our site builds will contain a large number of files and assets, all of which will need to be deployed in a timely and confident manner.
The next challenge, therefore, is to establish very low friction and efficient deployment processes to deal with the propagation of so many assets.
When we make the process of deploying a site fast with minimal risk, we unlock greater potential in the development and publishing process, which in turn can yield more efficient and effective development and release cycles.
It was once very common to deploy sites via FTP. It felt simple and convenient, but this method was the root of many problems.
The action of moving files and folders from one server to another over FTP would change the state of the production server, updating its published web root folder to include the new or updated files and mutating the state of the production environment more and more over time.
Mutating the condition of a production environment might not seem like a problem. After all, the reason we deploy new code is to make changes in our production environment and develop the experience for our users by introducing new content, new designs, or new functionality. But this type of deployment is destructive. After each deployment, the previous state of the environment is lost, meaning that reverting to the last good state of the site could be difficult or impossible to do.
The ability to revert is critical. Without it, we need to be able to guarantee that every deploy will be perfect, every time. That’s a lovely goal, but experience tells us that we can’t assure that with 100% confidence, and that the ability to quickly address problems resulting from a deployment is essential.
If we combine mutable deployments with managing our own complex hosting infrastructure, the potential issues compound further: deployments can result in unknown versions of the site being served to different users, or in errant code being propagated to some parts of the infrastructure and not others, with no clear path to identifying what has changed and how to address problems.
To minimize the risks associated with hosting environments serving assets jumbled together from a variety of deployment events, the industry began to discard mutable deployments, in which the site’s web root could change over time, and instead favor immutable deployments where a new version of the entire published web root would be compiled and published as a discrete entity, as illustrated in Figure 5-1.
Whereas mutable deployments modify the code and state of our environments, changing them gradually over time, immutable deployments are different. They are delivered as entirely self-contained sets of assets, with no dependency on what came before, and associated with a moment in time in our version control system. As a result, we gain confidence and agility.
Confidence comes from knowing that what we are deploying is what will be served by our production environment. Without immutable deployments, each deployment changes the state of what lives in the production environment and is being served to the world. The live site ends up being the result of a compounded series of changes over time. The actions taken in each of our environments all contribute to that compounded series of changes, so unless every action in every environment is identical, uncertainty will creep in. (We mention production, but this applies to every environment, such as development, staging, testing, and so on).
The actions we take over time are aggregated into what is ultimately published from each environment.
In contrast, an immutable deployment process removes this uncertainty by ensuring that we create a series of known states of our website. Encapsulated and self-contained, each deployment is its own entity and not affected by any actions taken in the environment previously. Each deployed version persists unadulterated, even after subsequent deployments. Compound effects in environments are avoided so that we can gain the certainty and confidence lacking in mutable deploys.
It might seem cumbersome to generate and deploy an entirely new build each time we want to make even a small change to our sites, but because this leaves previous deployments intact, we gain the ability to switch between different versions of our site at any time.
And by automating and scripting the build and deployment processes, we can drop the friction associated with building and deploying sites to close to zero, making this process far less cumbersome. Regularly running builds even helps to prove the validity of our build process itself, ensuring that our build and deployment process is robust and battle hardened far in advance of launch day.
The agility to roll back to a previous build, or to stage a new build in our environment for testing before designating it the new production version, cultivates an environment in which the risks involved in performing deployments are reduced. By avoiding unrecoverable failures that result from a deployment, agility and innovation are given room to thrive on projects. It creates a very constructive development experience rooted in the confidence that our target environments are not fragile and that they can withstand the demands of active development work. Certainly, we would wish to minimize the risk of any errors being published, but it can happen. With atomic deployments, we dramatically decrease the cost of an error by making it trivial to revert to a previous good state of our site.
By choosing the JAMstack for a project, you’re already well placed to adopt a deployment process that yields immutable deploys. The build process for the project will generate a version of the site and its assets that is ready to be placed into a hosting environment.
Decoupling what can be hosted statically from the moving parts of a project that might manage state or contain dynamic content is a critical step. There is a natural decoupling inherent in JAMstack sites, in which data sources and APIs are either consumed at build time or abstracted to other systems, possibly provided by third parties.
The ability to achieve an immutable deployment process will also depend on the nature of the hosting environment. No matter how we transmit our assets to that environment, it must be configured to generate a new site instance each time changes are deployed to it.
A challenge with creating multiple, immutable versions of our deployments is that deploying a new version of our site, even for a one-line change, requires the generation and propagation of a completely fresh instance of the entire site. This could mean transmitting a lot of files, taking a lot of time and bandwidth. In traditional mutable deployments, it could also result in files from many different deployments being served together, with no protection against interdependent files getting out of sync.
To address this, a good practice is to employ atomic deployment techniques.
An atomic deployment is an all-or-nothing approach to releasing new deployments into the wild. Figure 5-2 shows that as each file is uploaded to its hosting infrastructure, it is not made available publicly until every single file that is a member of that deployment is uploaded and a new, immutable build instance is ready to be served. An atomic deployment will never result in a partially completed build.
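To sketch the principle (rather than prescribe an implementation), the following Node.js script copies a build into its own versioned directory and only then switches a single pointer to it in one step. The releases layout and paths are hypothetical, it assumes Node 16.7+ for fs.cpSync, and in practice a CDN or hosting platform performs this work for you.

// Illustrative atomic, immutable deploy on a single host. Every file lands
// in a new release directory before anything becomes visible; the switch is
// a single rename, so visitors never see a partially deployed site.
const fs = require('fs');
const path = require('path');

const buildDir = 'dist';
const releaseDir = path.join(
  'releases',
  new Date().toISOString().replace(/[:.]/g, '-')
);

// 1. Copy the entire build into its own immutable release directory.
fs.cpSync(buildDir, releaseDir, { recursive: true });

// 2. Atomically repoint "current" at the new release. Until this runs,
//    requests keep being served from the previous deploy.
const tmpLink = 'current.tmp';
fs.rmSync(tmpLink, { force: true });
fs.symlinkSync(path.resolve(releaseDir), tmpLink);
fs.renameSync(tmpLink, 'current');

// 3. Older releases remain untouched, so a rollback is just repointing
//    "current" at a previous directory.
console.log(`Deployed ${releaseDir}`);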
Most good CDN providers offer this service. Many of them use techniques to minimize the number of individual files being transmitted between their nodes and the origin servers which seed all of the caching. This can become tremendously complex for sites that are not designed as JAMstack sites, and considerable specialist skills are required to do this correctly when the files in question are populated dynamically.
JAMstack sites are well suited to such techniques because they will avoid the considerable complexity of needing to manage the state of databases and other complex moving parts throughout deployments. By default, the files that will be distributed to CDNs are static files rather than dynamic, so common version control tooling can allow CDNs to consume and distribute them with confidence.
The success of any development project can be compromised when robust version control management, such as Git, is not in place. It is common in web development for projects to employ version control throughout the development process, but then not extend the same version control conventions into the deployment process itself. Heroku, a popular cloud application platform, popularized the use of Git workflows as a means to perform a deployment, raising the confidence in a reliably deployed end product.
Having the version control reach all the way to deployment can help to minimize the opportunities for untracked, unrepeatable, or unknown actions to creep into the processes and environments.
A dependency on databases and complex applications can make this kind of end-to-end version control more difficult. But because JAMstack sites don’t rely on databases, it’s far easier to bring more aspects of the code, content, and configuration under the umbrella of version control. Further, the version control system itself can become the primary mechanism for doing everything from initiating and bootstrapping a project, right through to staging and deploying the code and the content.
Every aspect of a project that we can bring into version control is one less thing that is likely to get out of control.
Bringing new developers into a project can be a time-consuming and error-prone activity. Not only do developers need to understand the architectural conventions of the project, they must also learn what dependencies exist, and acquire, install, and configure all of these before any work can begin.
Version control tools can help. By tracking all of the project’s dependencies in version control and ensuring that any configuration of these dependencies is also captured in code and version controlled, we can apply some popular conventions to simplify the bootstrapping of projects and the onboarding process.
Ideally, a developer arriving onto a new project would be able to get up and running with minimal intervention. A good experience for a developer might look like this:
Gain access to the project code repository.
Find up-to-date, versioned instructions on bootstrapping the project within the code repository, typically in a README file.
Clone the repository to a local development environment.
Run a single command to install project dependencies.
Run a single command to run a local build with optimizations for the desired development workflow.
Run a single command to deploy local development to suitable target environments.
Maintaining onboarding documentation in a README file within a project’s code repository (rather than in another location) helps with its discoverability and maintenance. That’s a good first step, but the other items on this list might seem more challenging.
Thankfully, the rise of package managers such as npm and Yarn has popularized techniques that can help here. The presence of a package.json file in a project’s root directory (which defines the various dependencies, scripts, and utilities for a project), combined with suitably descriptive documentation as part of the version-controlled code base, can allow a developer to run single commands to perform the dependency installation, local build, and even deployments.
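For example, a package.json along these lines gives a new developer the single commands described above. The specific generator, test runner, and deployment CLI shown here (Eleventy, Jest, and the Netlify CLI) are purely illustrative choices; any equivalents fit the same pattern.

{
  "name": "example-jamstack-site",
  "private": true,
  "scripts": {
    "build": "eleventy",
    "serve": "eleventy --serve",
    "test": "jest",
    "deploy": "netlify deploy --prod --dir=_site"
  },
  "devDependencies": {
    "@11ty/eleventy": "^1.0.0",
    "jest": "^29.0.0",
    "netlify-cli": "^17.0.0"
  }
}

With something like this in place, npm install fetches the dependencies, and npm run build, npm test, and npm run deploy cover the remaining steps of the onboarding list.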
This approach not only allows for the automation of these activities, but also documents them in a way that is itself versioned. This is a huge benefit for the health of a project over its lifetime.
Unseasoned approaches to version control can lead to a code repository being used mainly as a central store for the purpose of backing up code. This attitude was once very common and addressed only one narrow aspect of software development. These days, it is more common for development teams to harness the power of version control and use it as the underlying process for orchestrating and coordinating the development, release, and even the deployment of web projects.
Actions and conventions at the core of a tool such as Git—like committing, branching, making pull requests, and merging—already carry significant meaning and are typically being used extensively during the natural development life cycle. And yet, it is common for monolithic applications to invent their own solutions and workflows to solve similar challenges to those already being addressed by Git.
Rather than doing that, and adding another layer of process control that requires teams to learn new (and analogous) conventions, it is far more productive to use the Git activity to trigger events in our deployment systems.
Many continuous integration (CI)/continuous deployment (CD) products can now work this way. Activities like pushing code to a project’s master branch can trigger a deployment to the production environment.
Instead of limiting Git to a code repository, we should employ it as the mechanism that drives our releases and powers our CI/CD tools. This anchors the actions taken in our development and deployment processes deeply in version control, and consolidates our actions around a well-defined set of software development principles.
When choosing a platform for CI/CD and deployment management, seek out platforms that support Git-based workflows to get the best from these conventions.
A key to making JAMstack sites go beyond static is dramatically easing their building and deployment processes through automation. By making the task of deploying a new version of a site trivial, or even mundane, we significantly increase the likelihood that it might be regularly updated with content, code fixes, and design iterations throughout its lifetime.
This principle is true for web development projects on all stacks, but on more traditional, dynamic stacks the lack of good deployment automation is more often overlooked, because content updates can still typically be made without a full code redeployment. This can provide a false sense of security on dynamic sites, which becomes a problem only when an issue is discovered or a more fundamental change needs to be deployed.
By contrast, the JAMstack is well suited to scripting and automation during both the build (or site generation) stage and the deployment stage.
Static site generators exist as tools that we run to compile our templates and content into a deployable site. By their very nature, they are already the seed of an automation workflow. If we extend this flow to include some other common tasks, we begin to automate our site generation process at the very beginning of our development process and iterate it over the life of a project.
With a scripted site generation and deployment process in place, we can then explore ways to trigger these tasks automatically based on external events, adding a further level of dynamism to sites that still have no server-side logic at runtime.
The tools used for automating a site build can be a matter of personal preference. Popular options include Gulp, Grunt, or Webpack, but simpler tools like Bash or Make can be just as effective.
Let’s look at an example of some tasks that might be automated in the build script of a JAMstack site to produce a more dynamic site:
Gather external data and content from a variety of APIs.
Normalize the structure of the external content and stash it in a format suitable for consumption by a static site generator.
Run the static site generator to output a deployable site.
Perform optimization tasks to improve the performance of the generated assets.
Run the test suite.
Deploy output assets to the hosting infrastructure or CDN.
Figure 5-4 shows this process.
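A minimal version of such a build script, written here as a Node.js sketch, might look like the following. The API endpoint, data paths, and npm script names are illustrative assumptions: the npm commands stand in for whatever generator, optimization, test, and deployment tooling a project actually uses, and the global fetch call assumes Node 18 or later.

// Scripted build pipeline sketch covering the steps listed above.
const fs = require('fs');
const { execSync } = require('child_process');

async function build() {
  // 1. Gather external content (a headless CMS, a feed, social posts, ...).
  const res = await fetch('https://api.example.com/articles');
  const articles = await res.json();

  // 2. Normalize it into a shape the static site generator expects.
  const normalized = articles.map((a) => ({
    title: a.title,
    date: a.published_at,
    body: a.body_html,
  }));
  fs.mkdirSync('src/_data', { recursive: true });
  fs.writeFileSync(
    'src/_data/articles.json',
    JSON.stringify(normalized, null, 2)
  );

  // 3-5. Run the generator, optimizations, and the test suite.
  execSync('npm run generate', { stdio: 'inherit' });
  execSync('npm test', { stdio: 'inherit' });

  // 6. Deploy the output to the hosting infrastructure or CDN.
  execSync('npm run deploy', { stdio: 'inherit' });
}

build().catch((err) => {
  console.error('Build failed:', err);
  process.exit(1);
});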
This pattern is powerful, especially if it can be triggered by external events. The first step in the pattern we describe here is to gather external content from APIs. The content sources that we are consuming here might be numerous. Perhaps we’ll be using content from a headless CMS or the RSS feed of a news site. Perhaps we’ll be including recent content from a Twitter feed or other social media feeds. We might include content from an external comments system or a digital asset management system. Or all of these!
If we also initiate this automated site build whenever a change occurs to any of the content sources that we will be consuming, the result will be a site that is kept as fresh and as up-to-date as any of its content sources.
An increasing number of tools and services now support publishing and receiving webhooks. Webhooks are URL-based methods of triggering an action across the web. They can create powerful integrations of services that otherwise have no knowledge of one another. With a webhook you can do the following:
Trigger a build in a CI system
Trigger an interactive message in a messaging platform such as Slack
Notify a service that content has been updated in a headless CMS such as Contentful
Execute a function in a Functions as a Service (FaaS) provider such as Amazon Web Services (AWS) Lambda, Google Cloud Functions, or Microsoft Azure Functions
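As a small example of this event-driven glue, the sketch below shows a serverless function that receives a webhook from a headless CMS and responds by calling a build hook URL to trigger a fresh site build. The handler signature follows the common Node.js convention used by AWS Lambda and similar FaaS platforms; the environment variables, header name, and URLs are hypothetical.

// Webhook glue sketch: the CMS notifies this function, which triggers a rebuild.
const BUILD_HOOK_URL = process.env.BUILD_HOOK_URL; // provided by your CI/CD or host (hypothetical)
const WEBHOOK_SECRET = process.env.WEBHOOK_SECRET; // shared secret configured in the CMS (hypothetical)

exports.handler = async (event) => {
  // Ignore calls that don't carry the shared secret the CMS was given.
  if (event.headers['x-webhook-secret'] !== WEBHOOK_SECRET) {
    return { statusCode: 401, body: 'Unauthorized' };
  }

  // Ask the build system to regenerate and redeploy the site.
  await fetch(BUILD_HOOK_URL, { method: 'POST' });

  return { statusCode: 202, body: 'Build triggered' };
};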
By combining these types of tools and building upon an event-driven model to perform automated tasks, things are starting to become a lot less “static” than we might once have expected.
Automation is not purely about doing repeatable tasks quickly. It turns out that humans are pretty bad at doing the same task over and over again without making mistakes. Removing ourselves from the build and deployment process is a good way to reduce errors.
Seek out opportunities in your deployment processes where repetitive tasks can be performed by a script.
It’s very common to plan for a time late in a project to do this kind of automation, but we recommend doing it early. This will do the following:
Create basic documentation of your processes by codifying them
Introduce version control into your deployment process itself
Reduce the risk of human error every time you deploy
Begin to instill a project culture in which deployments are not heralded as rare, complex, and risky operations
Not only are there tools and services for continuous integration, global deployments, and version control, but there are also all sorts of more exotic and ambitious offerings. The immediacy and affordability of such services is impressive, and they bring incredible breadth and variety in the ways that they can be combined and remixed to create innovative new solutions to once-complex and expensive web development challenges.
One of the key reasons that we need a more descriptive term than “static” to describe JAMstack projects is this rapid growth in the ecosystem rising to support and extend the power of this approach. Once, when we talked about “static” sites, we would most likely rule them out as candidates for specific use cases on the basis of some common stumbling blocks. Requirements like search, forms, notifications, or payments would rule out a static architecture for many sites. Thankfully, as the ecosystem has expanded, solutions to all of these requirements have emerged, allowing developers and technical architects to go far beyond what they might once have considered possible on the JAMstack.
As a way of exploring the concepts a little, let’s talk about some examples of common requirements that used to deter people from pursuing a JAMstack model.
Handling form submissions is the classic and possibly the most common requirement likely to cause a technical architect to decide that static hosting infrastructure would not be sufficient. It’s also perhaps the cause of the most dramatic overengineering.
That may be a surprising statement. After all, form handling is relatively simple and a very common feature on the web. It’s not really something that we associate with overengineering.
Consider what happens when adding this feature means that a site which might otherwise be delivered directly from a CDN (with no need for logic running on a server) must instead be hosted on a server, with all the overhead that incurs, simply to handle HTTP POST requests and persist some data. The introduction of an entire server stack represents a huge increase in platform complexity.
Happily, a multitude of form-handling services has emerged. The community has noticed how common this requirement is and has found many ways to deliver it without the need for a hosted server, ranging from JavaScript drop-ins to more traditional synchronous HTTP form posts, often with API access. Consider products like Wufoo, FormKeep, Formcarry, or even Google Forms.
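As a sketch of the JavaScript drop-in style, a static page can post a form to such a service with a few lines of client-side code. The endpoint URL below is a hypothetical placeholder for whichever provider is chosen, and most of these services also accept a plain HTML form post, which works without JavaScript at all.

// Client-side form submission to a third-party form endpoint (no server of our own).
const form = document.querySelector('#contact-form');

form.addEventListener('submit', async (event) => {
  event.preventDefault();

  const response = await fetch('https://forms.example.com/f/abc123', {
    method: 'POST',
    body: new FormData(form),
  });

  form.innerHTML = response.ok
    ? '<p>Thanks, we got your message.</p>'
    : '<p>Sorry, something went wrong. Please try again.</p>';
});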
A search facility on a site can vary dramatically in complexity. Often search can be simple, but in some situations, search features need to be advanced, including a wide variety of ways to determine results and inspect complex relationships within the content of a site.
There are approaches to handle both ends of this spectrum.
Most static site generators have the capability of producing different representations of the content of a site. We can use this to our advantage by generating a search index of the site at compilation time, which then can be queried using JavaScript directly in the browser. The result can be a lightning-fast live search that responds as you type your search queries.
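Here is a minimal sketch of that approach, assuming the generator has emitted a search-index.json file listing each page’s title, URL, and searchable text (the file name and shape are illustrative):

// Client-side search over an index produced at build time.
let index = [];

fetch('/search-index.json')
  .then((res) => res.json())
  .then((data) => { index = data; });

document.querySelector('#search').addEventListener('input', (event) => {
  const query = event.target.value.trim().toLowerCase();
  const results = query
    ? index.filter((page) => page.text.toLowerCase().includes(query))
    : [];

  document.querySelector('#results').innerHTML = results
    .slice(0, 10)
    .map((page) => `<li><a href="${page.url}">${page.title}</a></li>`)
    .join('');
});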
For more sophisticated search facilities, services like Algolia can provide extensive content modeling and be customized to include data insights (with knowledge of your site’s taxonomy), provide fuzzy matching, and more. From small sites to large enterprises, Algolia has identified the value in providing powerful search and bringing it to all types of technology stacks, including the JAMstack.
There are also ways of scoping the established search engines such as Google or DuckDuckGo to search within a given site or domain. This can provide a functional and efficient fallback for when JavaScript is not available, and it is the most direct and simple way to add search capabilities to a site.
The need to contact users often introduces complexity to the stack. As with search and forms services, a number of companies have identified this and have established products and services to provide this facility without the need to own the finer details and risks.
Companies like SendGrid and Mailgun can add email messaging and are already trusted by a long list of technology companies that would rather not manage and integrate their own email messaging systems into their sites and services. Going further, companies like Twilio can add sophisticated messaging that goes far beyond just email. SMS, voice, video, and interactive voice response (IVR) are all possible through APIs.
The ability to authenticate a user’s identity and grant different permissions through some form of authentication system underpins a wide variety of features usually associated with a traditional technology stack.
Managing user data, particularly personally identifiable information (PII), requires a great deal of care. There are strict rules on how such information must be retained and secured, with significant consequences for organizations that fail to satisfy the relevant compliance and regulatory requirements. Outsourcing this capability was not always desirable, because it could be seen as giving away precious customer information.
Fortunately, practices have now evolved to allow for the successful decoupling of identity services such that they can be effectively provided by third parties without exposing sensitive customer data. Offloading some of the rigorous compliance and privacy requirements to specialists can bring identity-based features into the reach of projects and companies that otherwise might not be able to deliver and maintain them.
Technologies and identity providers such as OAuth and Auth0 can make this possible and allow for safe client-side authentication and integration with APIs and services. We dig into some examples of this in the next section.
As you can see, we have an expanding set of tools and services available to us. We don’t need to reinvent or reimplement each of them in order to add their capabilities to our sites. Search, user-generated content, notifications, and much more are available to us without the need to own the code for each. But not every one of these services can be integrated into our sites without some logic living somewhere.
Functions as a Service (FaaS; often called serverless functions) can be the ideal solution to this and can be the glue layer that binds many other services together.
By providing us with the means to host functions and invoke them when needed, serverless providers extend the JAMstack even further while liberating us from the need to specify or manage any type of hosting environment.
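For instance, a small function can keep a secret API key on the server side and forward requests from the static front end to an external messaging provider. The sketch below assumes the same Lambda-style Node.js handler convention used earlier; the provider URL, payload shape, and environment variable are hypothetical placeholders.

// Serverless "glue" sketch: the browser calls this function, and the
// function calls the third-party service with a credential that must
// never be shipped to the client.
const MESSAGING_API_KEY = process.env.MESSAGING_API_KEY; // hypothetical secret

exports.handler = async (event) => {
  if (event.httpMethod !== 'POST') {
    return { statusCode: 405, body: 'Method not allowed' };
  }

  const { email, message } = JSON.parse(event.body || '{}');
  if (!email || !message) {
    return { statusCode: 400, body: 'Missing email or message' };
  }

  // Forward to the (hypothetical) external messaging provider.
  const res = await fetch('https://messaging.example.com/v1/send', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${MESSAGING_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ to: email, text: message }),
  });

  return { statusCode: res.ok ? 200 : 502, body: res.ok ? 'Sent' : 'Upstream error' };
};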
It is often remarked that being a web developer means that you need to always be learning new things. This pace is not slowing down. Thankfully, however, we don’t all need to be experts in all things.
One of the joys of breaking apart the monoliths of old is that we can hand over the deep domain expertise of aspects of our development projects to those who specialize in providing just those targeted services. At first, it can feel uncomfortable to hand over the responsibility, but as the ecosystem of tools and services matures, that discomfort can be replaced by a refreshing feeling of liberation.
By embracing the specialist services available, we can take advantage of incredibly sophisticated and specialized skills that would be impossible to retain on each and every one of our projects were we to attempt to bring them all in-house.
Naysayers might raise concerns about relinquishing control and accountability. This is a reasonable thing to consider, and we’d be well advised to retain control over the services that we can provide more efficiently, securely, comprehensively, and cost-effectively than external providers. However, the areas where this is truly the case are not as numerous as we might imagine, especially when factoring in things like ongoing support, maintenance, data privacy, and legal compliance. Although it might seem counterintuitive at first, by employing the specialized skills of, say, an identity provider, we can avoid the need to intimately understand, build, and maintain some of the more nuanced areas of providing identity, authentication, and authorization.
Instead, by using external solution providers to deliver this capability to our projects, we can refocus our efforts away from reimplementing a common (yet complex and numerous) set of features and functionality, and focus instead on other areas of our projects where we might be able to truly differentiate and add measurable value.
The core intellectual property of your website is unlikely to reside in the code you write to implement how a user changes their password, uploads their profile picture, or signs in to your site. It is far more likely to reside in your content or your own set of services that are unique to you and your business.
As the ecosystem of tools and services available to JAMstack sites grows larger, we see the approach expand into areas traditionally dominated by monolithic, server-intensive architectures.
Selecting which features of a given project are good candidates for being satisfied by third-party services does require some thought. We’ve mentioned just a few here, and there are many more. It is important to consider the total cost of ownership for each service your site requires. Do the providers you are considering have suitable terms of service? Can they provide a suitable service-level agreement? How does their cost compare to building and maintaining the same capability yourself?
More and more regularly, economies of scale and domain expertise mean that providers can be more affordable and dependable than if we were to build comparable solutions ourselves. It is usually easier to imagine the possibilities from embracing and utilizing the expanding ecosystem of tools and services in the context of a real-world example. By exploring how you might design an approach to delivering a real project, with a well-defined set of functional and nonfunctional requirements, you can explore how the JAMstack can be applied to bring significant improvements to an established presence on the web.
In Chapter 6, we do just that. We examine how various third-party services were utilized and combined using serverless functions and various automations. We don’t pick a small or simplistic site, either. Instead, we explore a genuine project on a rich, complex, and (for many web developers) beloved website.