Thanks to the innovators who have built layers of abstraction over the low-level bits and bytes that computers use to operate, the amount of work a single developer can accomplish is now astronomical.
Abstractions, however, are leaky. Sometimes, a lower level of the stack suddenly makes itself known. You could be working at a high level of abstraction, developing a database-driven application with an Object-Relational Mapping (ORM), when suddenly a low-level memory management issue, deep in a compiled dependency, opens up a buffer overflow that a clever malicious user can abuse to hijack the database and completely take over.
Unless we find ways to build solid firewalls between the different layers of the stack, these kinds of issues will always be able to blast through from the lower layers or the far points in the dependency chain.
Fortunately, the JAMstack creates several such firewalls to manage complexity and focus attention on smaller parts in isolation.
With server-side rendering, every layer of the stack can be involved in any request a user makes to a site or application.
To build performant and secure software, developers must deeply understand all of the layers that requests flow through, including third-party plug-ins, database drivers, and implementation details of programming languages.
Decoupling the build stage completely from the runtime of the system and publishing prebaked assets constructs an unbreakable wall between the runtime environment end users interact with and the environment where our build code runs. This makes it much easier to design, develop, and maintain the runtime in isolation from the build stage.
Separating APIs into smaller microservices, each with a well-defined purpose, makes it much simpler to comprehend the particulars of each service in isolation. The borders between services are completely clear.
This helps us establish clearer mental models of the system as a whole while liberating us from needing to understand the inner complexities of each individual service. As a result, the burden of developing and maintaining the system is significantly lightened, and we can focus our attention with confidence that the boundaries between the services in use are robust and well defined.
There’s no need to understand runtime characteristics of JavaScript in different browsers as well as the flows of an API at the same time.
In recent years, the complexity of frontend architectures has exploded as browser APIs have evolved and the HTML and CSS standards have grown. As a result, it is very difficult for the same person to be an expert in client-side JavaScript performance and also in server-side development, database query optimization, cache optimization, and infrastructure operations.
Decoupling the backend and the frontend makes it possible for developers to focus on just one area. You can run your frontend locally with an autorefreshing development server speaking directly to a production API, and switching between staging, production, or development APIs is as simple as setting an environment variable.
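As a minimal sketch of that idea, a frontend might read its API location from the environment at startup. The variable name `API_BASE_URL` and the local port are illustrative assumptions, not a standard:

```javascript
// Sketch: choose the API base URL from an environment variable,
// falling back to a local development server. The variable name
// API_BASE_URL and the port 4000 are assumptions for illustration.
function apiBase() {
  return process.env.API_BASE_URL || "http://localhost:4000";
}

// Frontend code builds request URLs against whichever environment
// is configured; switching environments requires no code changes.
function endpoint(path) {
  return new URL(path, apiBase()).toString();
}

console.log(endpoint("/products/42"));
```

Running the same code with `API_BASE_URL` set to a staging or production host would point every request there, with no change to the frontend itself.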
And with very small and super-focused microservices, the service layer becomes much more reusable and generally applicable than when we have one big monolithic application covering all of our needs.
This means that we’re seeing a much broader ecosystem of ready-made APIs for authentication, comments, ecommerce, search, image resizing, and so on. We can use third-party tools to outsource complexity and rest easy in the knowledge that those providers have highly specialized teams focusing exclusively on their problem space.
The more we can prebuild and deploy as prebaked markup, with no need for dynamic code to run on our servers during a request cycle, the better off we will be. Executing zero code will always be faster than executing some code. Zero code will always be more secure than even the smallest amount of code. Assets that can be served without any moving parts will always be easier to scale than even the most highly optimized dynamic program.
Fewer moving parts at runtime means fewer things that could fail at a critical moment. And the more distance we can put between our users and the complexity inherent in our systems, the greater our confidence that they will experience what we intended. Facing that complexity at build time, in an environment that will not affect the users, allows us to expose and resolve any problems that might arise safely and without a negative impact.
The only reason to run server-side code at request time ought to be that we absolutely cannot avoid it. The more we can liberate ourselves from this, the better the experience will be for our users, operations teams, and our own ability to understand the projects we’re working on.
There are a variety of costs associated with designing, developing, and operating websites and applications, and they are undoubtedly influenced by the stack we choose. Although financial costs are often most obvious, we should also consider costs to the developer experience, creativity, and innovation.
Any web development project of significant scale will likely include an exercise to estimate the anticipated or target traffic levels and then a plan for the hosting infrastructure required to service this traffic. This capacity planning exercise is an important step for determining how much the site will cost to operate. It is difficult to estimate how much traffic a site will receive ahead of time, so it is common practice to plan for sufficient capacity to satisfy traffic levels beyond even the highest estimates.
With traditional architectures, in which page requests might require activity at every level of the stack, that capacity will need to extend through each tier, often resulting in multiple, highly specified servers for databases, application servers, caching servers, load balancers, message queues, and more. Each of these pieces of infrastructure has an associated financial cost. Previously, that might have included the cost of the physical machines, but today these machines are often virtualized. Whether physical or virtual, the financial costs of these pieces of infrastructure can mount up, with software licenses, machine costs, labor, and so on.
In addition, most web development projects will have more than one environment, meaning that much of this infrastructure will need to be duplicated to provide suitable staging, testing, and development environments in addition to the production environment.
JAMstack sites benefit from a far simpler technical architecture. The burden of scaling a JAMstack site to satisfy large peaks in traffic typically falls on the Content Delivery Network (CDN) that is serving the site assets. Even if we were not to employ the services of a CDN, our hosting environment would still be dramatically simplified when compared to the aforementioned scenario. When the process of accessing content and data and populating page templates is decoupled from the requests for those pages, the demands on these parts of the infrastructure are not influenced by the number of visitors to the site. Large parts of the traditional infrastructure either do not need to be scaled or might not need to exist at all.
The JAMstack dramatically reduces the financial costs of building and maintaining websites and applications.
The size and complexity of a project’s architecture is directly proportional to the number of people and the range of skills required to operate it. A simplified architecture with fewer servers requires fewer people and far less specialization.
Complex DevOps tasks are largely removed from projects with simpler environments. What were once time-consuming, expensive, and critical procedures—like provisioning new environments and configuring them to faithfully replicate one another—are replaced with maintenance of only the local development environments (required with a traditional approach, anyway) and the deployment pipeline to a productized static hosting service or CDN.
This shift places far more power and control in the hands of developers, who are in possession of an increasingly widespread set of web development skills. This reduces the cost of staffing a web development project (through a reduced demand for some of the more exotic or historically expensive skills) and increases the productivity of the developers employed. By having working knowledge of a larger portion of the stack and fewer discipline boundaries to cross, each developer’s mental model of the project can be more complete and, as a result, each individual can be more confident in their actions and be more productive.
This also lowers the barriers to innovation and iteration.
When making changes to a site, we need to be confident of the effect our changes might have on the rest of the system.
The JAMstack takes advantage of APIs and an architecture of well-defined services with established interface layers, moving us toward a system that embraces the mantra of “small pieces, loosely joined.” This model of loose coupling between different parts of a site’s technical architecture lowers the barriers to change over time. This can be liberating when it comes to making technical design decisions because we are less likely to be locked into one particular third-party vendor or service, given that we have well-defined boundaries and responsibilities across the site.
This also gives development teams freedom to refactor and develop the particular parts of the site that they control, safe in the knowledge that as long as they honor the structure of the interfaces between different parts of the site, they will not compromise the wider project.
Whereas monolithic architectures stifle the ability to iterate, with tight coupling and proprietary infrastructures, the JAMstack can break down these boundaries and allow innovation to flourish.
Of course, it is still possible to design a JAMstack site in a way that creates a tightly coupled system or that has many interdependencies that make change difficult. But, we can avoid this because the simplification of the required infrastructure leads to greater clarity and understanding of the constituent parts of the site.
We just discussed how reducing the moving parts of the system at runtime leads to greater confidence in serving the site. This also has a significant impact on reducing the cost of innovation. When any complexity can be exposed at build time rather than at runtime, we are able to see the results of our modifications in far greater safety, allowing for greater experimentation and confidence in what will ultimately be deployed.
From simplification of the system to richer mental models and greater empowerment in development teams, with the JAMstack, the cost of innovation can be significantly reduced.
The ability of hosting infrastructure to meet the demands of a site’s audience is critical to its success. We talked about the cost associated with planning and provisioning hosting infrastructure on traditional stacks and then the much simpler demands when using the JAMstack. Now, let’s look more closely at how the JAMstack can scale, and why it is so well suited to delivering sites under heavy traffic loads.
Even with the most rigorous capacity planning, there are times when our sites will (hopefully) experience even more attention than we had planned for, despite all of our contingency planning and ambition.
The use of the word “hopefully” in the previous sentence is a little contentious. Yes, we want our sites to be successful and reach the widest audience they can. Yes, we want to report record visitor levels from our analytics and have a wild success on our hands. But infrastructure teams regularly fear that their sites could become too popular; that instead of a healthy, smooth level of demand, they will receive huge spikes in traffic at unexpected times. The fear of being at the top of Hacker News features heavily in capacity planning sessions, though Hacker News is regularly swapped for whichever site or social media property is the latest source of excessive traffic from content going viral.
The advent of virtualized servers has been a huge step forward in addressing this challenge, with techniques available for monitoring and scaling virtual server farms to handle spikes in traffic. But again, this adds complexity to technical design and maintenance of a site.
One technique employed by many sites designed and engineered to cope with high traffic levels (both sustained and transient) is to add a caching layer and perhaps also a CDN to their stack.
This is prudent. And it’s also where the JAMstack excels.
Sites built on traditional stacks need to manage their dynamically created content into their various caching layers and CDNs. This is a complex and specialized set of operations, and the complexity and resulting cost have led to a perception that CDNs are the domain of large, enterprise sites with big teams and big budgets. The tooling these sites put in place effectively takes what is dynamic and propagates it to caching layers and CDNs as sets of static assets so that they can be served prebaked, without a round trip to the application server.
In other words, dynamic sites add an extra layer of complexity just to allow them to be served with static hosting infrastructure to satisfy demand at scale.
Meanwhile, with the JAMstack we are already there. Our build can output exactly the kinds of assets needed to go directly to the CDN with no need to introduce additional layers of complexity. This is not the domain of large enterprise sites, but within reach of anyone using a static site generator and deploying their sites to one of a multitude of CDN services available.
In addition to the volume of traffic a site might receive, we must also consider the geographical location of our site’s visitors. If visitors are on the opposite side of the world from our servers, they are likely to experience diminished performance due to latency introduced by the network.
Again, a CDN can help us meet this challenge.
A good CDN will have nodes distributed globally, so that your site is always served to a visitor from the closest server. If your chosen architecture is well suited to getting your site into the CDN, your ability to serve an audience anywhere in the world will be vastly improved.
When designing web architectures, we aim to minimize single points of failure (SPoFs). When sites are served directly from a CDN, our own servers no longer act as a SPoF that could prevent a visitor from reaching the site. Moreover, if any individual node within a global CDN fails (in itself a pretty major and unlikely event), the traffic would be satisfied by another node elsewhere in the network.
This is another example of the benefit of putting distance between build environments and serving environments, which can continue to function and serve traffic even if our own build infrastructure were to have an issue.
Resiliency, redundancy, and capacity are core advantages of delivering sites with the JAMstack.
Performance matters. As the web reaches more of the planet, the developers building it must consider varying network reliability and connectivity speeds. People expect to be able to achieve their online goals in an increasing variety of contexts and locations. And the types of devices that are being used to access the web have never been more diverse in terms of processing power and reliability.
Performance can be interpreted to mean many things. In recent years, the value of serving sites quickly and achieving a rapid time to first meaningful paint and time to interactive has been demonstrated to be critical to user experience (UX), user retention, and conversion. Simply put, time is money. And the faster websites can be served, the more value they can unlock.
There have been many case studies published on this. You can find some staggering results from web performance optimization listed on https://wpostats.com/, some showing that even marginal performance gains yield impressive improvements, like these:
“COOK increased conversion rate by 7% after cutting average page load time by 0.85 seconds. Bounce rate also fell by 7% and pages per session increased by 10%.”
—NCC Group
“Rebuilding Pinterest pages for performance resulted in a 40% decrease in wait time, a 15% increase in SEO traffic, and a 15% increase in conversion rate to signup.”
“BBC has seen that they lose an additional 10% of users for every additional second it takes for their site to load.”
—BBC
While others report truly dramatic results:
“Furniture retailer Zitmaxx Wonen reduced their typical load time to 3 seconds and saw conversion jump 50.2%. Overall revenue from the mobile site also increased by 98.7%.”
For a long time, performance optimizations for websites focused primarily on the server side. So-called backend engineering was thought to be where serious engineering happened, whereas frontend code was seen by many as a less-sophisticated field. Less attention was paid to performance optimizations in the frontend, client-side code.
This situation has changed significantly. The realization that efficiencies in the architecture and transmission of frontend code could make a staggering difference to the performance of a site has created an important discipline in which measurable improvements can be made. Much proverbial low-hanging fruit was identified, and today there is an established part of the web development industry focused on frontend optimization.
In the frontend, we see incredible attention being given to things like the following:
Minimizing the number of HTTP requests required to deliver an interactive site
Page architectures designed to avoid render blocking or dependencies on external sources
Use of image and visualization techniques that avoid heavy videos and images
Efforts to avoid layout thrashing or slow paint operations
With these, and countless other areas, the work of optimizing for performance in the browser has gained tooling, expertise, and appreciation.
But the performance of the web-serving and application-serving infrastructure continues to be critical, and it’s here where we see efforts to optimize through initiatives like the following:
Adding and managing caching layers between commonly requested resources
Fine-tuning database queries and designing data structures to minimize bottlenecks
Adding compute power and load balancing to increase capacity and protect performance under load
Creating faster underlying network infrastructure to increase throughput within hosting platforms
Each area is important. Good web performance requires a concerted effort at all levels of the stack, and a site will only be as performant as its least performant link in the chain.
Critically though, the JAMstack removes tiers from that stack. It shortens that chain. Now operations on which teams used to spend large amounts of time and money to optimize in an effort to speed them up and make them more reliable don’t exist at runtime at all.
Those tiers and operations that do remain can now receive more scrutiny than before because fewer areas exist to compete for attention and benefit from our best optimization efforts.
Even so, no code can run faster than zero code. This kind of simplification, and the separation of the building of sites from the serving of sites, yields impressive results.
Consider a typical request life cycle on a relatively simple, but dynamic, site architecture:
A browser makes a request for a page.
The request is handled by a web server that inspects the URL requested and routes the request to the correct piece of internal logic to determine how it should be satisfied.
The web server passes the request to an application server that holds logic on which templates should be combined with data from various sources.
The application server requests data from a database (and perhaps external systems) and renders this into a response.
The response is passed back from the application server to the web server and then on to the browser where it can be displayed to the user.
Figure 3-1 shows this process.
In systems like the one just described, with a dynamic backend, it has become common to introduce additional layers to help improve performance. We might see a caching layer introduced between the web server and the application server, or between the application server and the databases. Many of these layers introduce operations that behave as if that part of the system were static, but they need to be managed and updated by the system over the course of operation.
Now let’s consider the same scenario for the JAMstack, also shown in Figure 3-1, in which the compilation of page views is not carried out on demand, but ahead of time in a single build process:
A browser makes a request for a page.
A CDN matches that request to a ready-made response and returns that to the browser where it can be displayed to the user.
Right away we can see that far less happens in order to satisfy each request. There are fewer points of failure, less logical distance to travel, and fewer systems interacting for each request. The result is improved performance in the hosting infrastructure by default. Assets are delivered to the frontend as quickly as possible because they are ready to be served, and they are already as close to the user as possible thanks to the use of a CDN.
This significantly shorter request/response stack is possible because the pages are not being assembled per request; instead, the CDN has been primed with all of the page views that we know our site will need.
Again, we are benefitting from decoupling the generation and population of pages from the time that they are needed. They are ready to be served at the very moment they are requested.
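As a toy illustration of that decoupling, a build script renders every page once, ahead of time. Everything here is an assumption for the sake of the sketch: the data is inlined rather than fetched from a CMS or database, the template is a plain function, and the output is kept in memory rather than written to files for a CDN:

```javascript
// Inlined content standing in for a database or CMS query that
// would run during the build, never during a user request.
const posts = [
  { slug: "hello", title: "Hello", body: "First post." },
  { slug: "update", title: "Update", body: "Second post." },
];

// Each page is rendered exactly once, at build time, into a
// static HTML string ready to be served as-is.
function renderPost(post) {
  return `<!doctype html><html><head><title>${post.title}</title></head>` +
         `<body><article><h1>${post.title}</h1><p>${post.body}</p></article></body></html>`;
}

// The build output maps URL paths to prebaked pages. A CDN primed
// with this map can answer every request without touching a server.
const pages = new Map(posts.map((p) => [`/posts/${p.slug}/`, renderPost(p)]));

console.log(`built ${pages.size} pages`);
```

If rendering any page throws, the build fails before anything is deployed, which is exactly the forewarning described next.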
In addition, we can enjoy being forewarned of any issues that might arise during the generation of these pages. Should such an issue occur in a site where the pages were being generated and served on demand, we would need to return an error to the user.
Not so in a JAMstack site, for which the generation of pages happens ahead of time, during a deployment. There, should a page-generation operation fail, the result would be a failed deployment, which would never be conveyed to the user. Instead, the failure can be reported and remedied in a timely fashion, without any users ever being affected or aware.
The reduced complexity of the JAMstack offers another advantage: a greatly improved security profile for our sites.
When considering security, the term surface area is often used to describe the amount of code and infrastructure, and the scale of the logical architecture, at play in a system. Fewer pieces of infrastructure mean fewer things to attack and fewer things on which to spend effort patching and protecting. Reducing surface area is a good strategy for improving security and minimizing avenues of attack.
As we have already learned, the JAMstack benefits from a smaller, simplified stack when compared to traditional architectures with their various servers and databases, each needing the ability to interact with one another and exchange data. By removing those layers, we also remove the points at which they interact or exchange data.
Opportunities for compromising the system are reduced. Security is improved.
Beyond improving security by reducing the surface area, JAMstack sites enjoy another benefit. The pieces of infrastructure that remain involved in serving our sites do not include logical code to be executed on our servers at request time. As much as is possible, we avoid having servers executing code or effecting change on our system.
In read-only systems, not only are scale and performance improved, but so is our security profile.
We can return to the discussion of WordPress sites as a useful comparison. A WordPress site combines a database, layers of logic written in PHP, templates for presentation, and a user interface for configuring, populating, and managing the site. This user interface is accessed over the web via HTTP, requiring that a WordPress site be capable of accepting HTTP POST requests and of consuming and parsing the data submitted to it in those requests.
This opens a popular, and often successful, attack vector against WordPress sites. Considerable effort has been invested in attempting to secure this route of attack, and adhering strictly to established good practices can help. But this can’t ensure total security.
Speculative, hostile traffic that probes for poorly secured WordPress administration interfaces is rife on the internet. It is automated, prolific, and continues to grow in sophistication as new vulnerabilities are discovered.
For those looking to secure a WordPress site, it is a difficult battle to permanently and confidently win.
The JAMstack is different. Instead of beginning with an architecture that permits write operations and server-side code execution, and then trying to secure it by keeping the doors to those operations tightly guarded, a JAMstack site has no such moving parts or doors to guard. Its hosting infrastructure is read-only and not susceptible to the same types of attack.
Except, we’re cheating here. We’re talking about JAMstack sites as if they were static—both in hosting infrastructure and in user experience. Yet we’re also trying to convey that JAMstack sites can be just as dynamic in experience as many other architectures, so what gives?
We have been simplifying a little, skipping over the aspects of JAMstack sites that might interact with other systems. The JAMstack certainly includes all manner of rich interactive services that we might use in our sites. We talk about a growing ecosystem of services at our disposal, so how do we reconcile that with security?
Let’s look.
So far, we’ve focused on the infrastructure that we, as site owners, would need to secure and operate ourselves. But if we are to create dynamic experiences, there will be times when our sites will need to interact with more complex systems. There are some useful approaches available to us to minimize any negative impact on security. Let’s examine some of them.
Recalling that the “A” in JAMstack stands for APIs, it shouldn’t be a surprise that we’d describe advantages in using APIs to interact with external services. By using APIs to allow our sites to interact with a discrete set of services, we can gain access to a wide variety of additional capabilities far beyond what we might want to provide ourselves. The market for capabilities as a service is broad and thriving.
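In practice, consuming such a service often amounts to little more than constructing a request against the provider’s API. In this sketch the service host, path, and query parameters are all invented for illustration; a real provider documents its own interface:

```javascript
// Hypothetical third-party search API. The host "search.example.com",
// the "/v1/search" path, and the query parameters are illustrative
// assumptions, not any real provider's interface.
function searchRequest(query, { host = "search.example.com", page = 1 } = {}) {
  const url = new URL("/v1/search", `https://${host}`);
  url.searchParams.set("q", query);
  url.searchParams.set("page", String(page));
  return url.toString();
}

// In a browser, this URL would be passed to fetch(); the provider
// operates and secures the search infrastructure on our behalf.
console.log(searchRequest("jamstack"));
```

The site never runs search infrastructure of its own; the boundary between it and the provider is the URL and the documented response format.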
By carefully selecting vendors that specialize in delivering a specific capability, we can take advantage of their domain expertise and outsource to them the need for specialist knowledge of their inner workings. When such companies offer these services as their core capabilities, they shoulder the responsibilities of maintaining their own infrastructure on which their business depends.
This model also makes for better logical separation of individual services and underlying capabilities, which can be advantageous when it comes to maintaining a suite of capabilities across your site. The result is a clear separation of concerns, and of security responsibilities.
Not all capabilities can be outsourced and consumed via APIs. There are some things that you need to keep in-house either because of business value or because external services aren’t able to satisfy some more bespoke requirements.
The JAMstack is well placed to minimize the security implications here, too. By decoupling the generation of our sites (through prerendering) from the serving of them after they have been generated and deployed, we put distance between public access to our sites and the mechanisms that build them. Earlier, we urged caution about WordPress sites backed by a database, but it is highly likely that some parts of our sites will need to access content held in a database. If access to this database is totally abstracted away from the hosting and serving of our sites, and there is no public access to this resource (because it is reached only through a totally decoupled build process), our security exposure is significantly reduced.
JAMstack sites embody these principles:
A minimal surface area with largely read-only hosting infrastructure
Decoupled services exposed to the build environment and not the public
An ecosystem of independently operated and secured external services
All of these principles improve security while reducing the complexity we need to manage and maintain.
Throughout this chapter we’ve talked about the advantages that come from reduced complexity. We’ve talked about how removing common complexity can reduce project costs, improve the ability to scale and serve high volumes of traffic, and improve security.
All of these are good news for a project. They are good for the bottom line. They improve the likelihood of a project being approved to proceed and its chances of prolonged success. But if this comes at the cost of a sound development experience, we have some weighing up to do.
Really? Isn’t that being overdramatic?
Developer experience is an important success factor. The ability for a developer to effectively deliver on the promise of the design, the strategy, and the vision can often be the final piece of the puzzle. It’s where ideas are realized.
A poor development experience can be devastating for a project. It can impede development progress and create maintenance implications that damage the long-term health of a project. It can make it more difficult to recruit and to retain the people you need to deliver a project. It can generate frustrations at slow progress or poor reliability. It can choke off any ability to innovate and make a project great.
A good developer experience can help to create high productivity, a happy team, and innovation at all levels of a project. It is something to strive for.
But of course, we can’t sacrifice user experience for developer experience. Jeremy Keith has deftly articulated this on a number of occasions, notably here:
Given the choice between making something my problem, and making something the user’s problem, I’ll choose to make it my problem every time.
What we need is an architecture and an approach to development that somehow satisfies both of these criteria. The JAMstack can deliver this for us.
The simplification that we have heralded so many times in this chapter does not come at the cost of clarity. It does not introduce obfuscation or opaque magic. Instead, it employs tools, conventions, and approaches that are both popular and increasingly available among web developers. It embraces development workflows designed to enhance a developer’s ability to be effective and to build things in familiar but powerful environments. It creates strong logical boundaries between systems and services, creating clear areas of focus and ownership. We need not choose between an effective developer experience and an effective user experience. With the JAMstack we can have both.
How? Let’s move on to look at some best practices.