Changelog
-
Astro 6 is out today, and it just works on Netlify on day one. To upgrade, run:
```sh
npx @astrojs/upgrade
```

This will update Astro, the Netlify adapter, and all other official integrations together.
What’s new
Some highlights include:
- Vite 7 and a redesigned dev server — Faster builds and a better dev server built from the ground up.
- Content Layer API — Legacy content collections are fully removed. All collections must now use the Content Layer API.
- Node 22 — Node 18 and 20 are no longer supported.
Check the full upgrade guide for all the details.
Watch out for `import.meta.env`
One change worth calling out:
`import.meta.env` values are now always inlined at build time in Astro 6. This means if you were relying on `import.meta.env` to read environment variables at runtime in your server-side code, those values will be baked into your build output instead.

To read environment variables at runtime, use `process.env` instead:

```diff
- const apiKey = import.meta.env.API_KEY;
+ const apiKey = process.env.API_KEY;
```

This is especially important for secrets. If a secret is inlined into your server bundle, it's no longer secret. The good news: Netlify's smart secret scanning will automatically detect exposed secrets in your build output and fail the build before it goes live, so you'll know right away if something slipped through.
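To see why this matters, here's a small illustrative sketch of the runtime behavior of `process.env` (the `API_KEY` variable name is ours, not part of Astro's API, and the values are set locally just for the demo):

```javascript
// Stand-in for an environment variable configured on your host.
process.env.API_KEY = "initial-secret";

// With import.meta.env in Astro 6, the literal value present at build time
// would be frozen into the bundle. process.env is read on every call instead.
function getApiKey() {
  return process.env.API_KEY;
}

// Rotating the variable is picked up at runtime, with no rebuild needed.
process.env.API_KEY = "rotated-secret";
console.log(getApiKey()); // "rotated-secret"
```

Because the lookup happens at call time, rotated or per-environment secrets keep working without redeploying.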
Deploy an Astro 6 site on Netlify
If you want to get started with a new site, start with the Astro on Netlify doc, or just click this button:
-
Team Owners can now set a credit limit on AI inference usage to keep Agent Runners and AI Gateway costs within budget.
When your team’s usage reaches the credit cap you define, active agent runs stop, new agent runs are blocked, and further AI Gateway usage is paused, preserving your remaining credit balance.
This is especially useful for teams actively using AI features who want predictable monthly costs without manually watching the meter. Set it once, and Netlify enforces it automatically across your entire team.
Learn more in our docs on limiting AI features.
Agent run credits tracking
You can also see the credit cost of each agent run task on your agent runs page, displayed alongside how long the agent took to complete it.
Learn more about AI inference usage and how credits work.
-
OpenAI’s GPT-5.4 and GPT-5.4 Pro models are now available through Netlify’s AI Gateway and Agent Runners with zero configuration required.
Use the OpenAI SDK directly in your Netlify Functions without managing API keys or authentication. The AI Gateway handles everything automatically. Here’s an example using the GPT-5.4 model:
```js
import OpenAI from 'openai';

export default async () => {
  const openai = new OpenAI();
  const response = await openai.responses.create({
    model: 'gpt-5.4',
    input: 'Give a concise explanation of how AI works.',
  });
  return Response.json(response);
};
```

GPT-5.4 and GPT-5.4 Pro are available for all Function types and Agent Runners. You get automatic access to Netlify’s caching, rate limiting, and authentication infrastructure.
Learn more in the AI Gateway documentation and Agent Runners documentation.
-
Starting today, all completed agent runs show a screenshot of your Deploy Preview. This makes it easier to quickly see the result of an agent run and keep track of agent run sessions without opening the full preview.

Test it out today:
- Go to your Netlify project dashboard.
- On the left, select Agent runs, then choose an existing agent run or start a new one by entering a prompt and selecting Run task.
- At the bottom of your agent run sessions, you’ll find a screenshot of your Deploy Preview. The screenshot is taken from the main page of your project.
Note: If you’ve set up private deploys or password protection, the screenshot will show a sign-in page instead. Learn more about Password protection.
Learn more about getting started with Agent Runners.
-
Google’s Gemini 3.1 Flash Lite Preview is now available through AI Gateway. You can call this model from Netlify Functions without configuring API keys; the AI Gateway provides the connection to Google for you.
Example usage in a Function:
```js
import { GoogleGenAI } from '@google/genai';

export default async () => {
  const ai = new GoogleGenAI({});
  const response = await ai.models.generateContent({
    model: 'gemini-3.1-flash-lite-preview',
    contents: 'How can AI improve my coding?',
  });
  return Response.json(response);
};
```

This model works across any function type and is compatible with other Netlify primitives such as caching and rate limiting, giving you control over request behavior across your site.
Learn more in the AI Gateway documentation.
-
OpenAI’s GPT-5.3 Instant model is now available through Netlify’s AI Gateway with zero configuration required.
Use the OpenAI SDK directly in your Netlify Functions without managing API keys or authentication. The AI Gateway handles everything automatically. Here’s an example using the GPT-5.3 Instant model:
```js
import OpenAI from 'openai';

export default async () => {
  const openai = new OpenAI();
  const response = await openai.responses.create({
    model: 'gpt-5.3-chat-latest',
    input: 'How does AI work?',
  });
  return Response.json(response);
};
```

Note: the model API name is `gpt-5.3-chat-latest`.

GPT-5.3 Instant is available for all Function types. You get automatic access to Netlify’s caching, rate limiting, and authentication infrastructure.
Learn more in the AI Gateway documentation.
-
Linear users can now launch Netlify Agent Runners directly from any Linear issue, allowing you to seamlessly share context with your AI agent of choice. If you have your Linear issue synced with related Slack messages, this context will also be included in your agent run prompt.
Before starting your agent run, you can review and edit your prompt. Then choose which AI agent to use: Claude Code, Google Gemini, or OpenAI Codex. Netlify Agent Runners doesn’t lock you into a single AI agent, so you can pick the one that fits the task best.
To start an agent run from Linear:
- Go to a Linear issue where you want to trigger an agent run.
- In the top right corner, select Configure coding tools….

- Toggle Netlify Agent Runners on.
- Go back to the issue and in the top right corner, select Open in Netlify Agent Runners.

- Review the prompt and choose your AI agent.
- To start the agent run, select Run task.
Once you’ve enabled this integration from your personal Linear preference settings, any Linear issue you open in your workspace will give you the option to open with Netlify Agent Runners.
Now your entire team can save time and seamlessly share context between Linear and Netlify Agent Runners while keeping this work clearly tracked across Linear and Netlify. Learn more about Agent Runners.
-
Netlify now automatically blocks bot scans targeting PHP paths across all plans — no configuration required.
Previously, these bots generated noise in Observability logs and metrics. They showed up without a `User-Agent` header. Netlify now blocks them at the edge.

Since rolling out edge-level blocking on December 28, 2025, Netlify has blocked 2.9 billion of these requests.
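For illustration only, a filter of this kind can be sketched as follows. Netlify’s actual edge implementation is internal and not published, so treat this as a conceptual model rather than the real rule set:

```javascript
// Hypothetical sketch: block requests to PHP paths that arrive with no
// User-Agent header, the signature described above.
function shouldBlock(request) {
  const url = new URL(request.url);
  const isPhpPath = url.pathname.endsWith(".php");
  const hasUserAgent = request.headers.has("user-agent");
  return isPhpPath && !hasUserAgent;
}

// A typical bot probe: PHP path, no User-Agent header.
const scan = new Request("https://example.com/wp-login.php");
console.log(shouldBlock(scan)); // true
```

Legitimate browser traffic always carries a `User-Agent` header, which is why this signature is safe to block outright.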
-
The Netlify Cache API now has full support for `stale-while-revalidate` (SWR). This lifts a previous limitation of the Cache API, prompted by a customer request.

When using `fetchWithCache` with the `swr` option, background revalidation is handled automatically. If a response is stale but still within the SWR window, it’s served immediately while a fresh response is fetched and cached in the background.

```ts
import { fetchWithCache, DAY, HOUR } from "@netlify/cache";
import type { Config, Context } from "@netlify/functions";

export default async (req: Request, context: Context) => {
  const response = await fetchWithCache("https://example.com/expensive-api", {
    ttl: 2 * DAY,
    swr: HOUR,
    tags: ["product"],
  });
  return response;
};

export const config: Config = {
  path: "/api/products",
};
```

For users who interact directly with `cache.match` and `cache.put`, a new `needsRevalidation` method lets you check whether a cached response is stale and trigger background revalidation manually:

```ts
import { needsRevalidation, cacheHeaders, MINUTE, HOUR } from "@netlify/cache";
import type { Config, Context } from "@netlify/functions";

const cache = await caches.open("my-cache");

export default async (req: Request, context: Context) => {
  const request = new Request("https://example.com/expensive-api");
  const cached = await cache.match(request);

  if (cached) {
    if (needsRevalidation(cached)) {
      context.waitUntil(
        fetch(request).then((fresh) => {
          const response = new Response(fresh.body, {
            headers: {
              ...Object.fromEntries(fresh.headers),
              ...cacheHeaders({ ttl: MINUTE, swr: HOUR }),
            },
          });
          return cache.put(request, response);
        })
      );
    }
    return cached;
  }

  const fresh = await fetch(request);
  const response = new Response(fresh.body, {
    headers: {
      ...Object.fromEntries(fresh.headers),
      ...cacheHeaders({ ttl: MINUTE, swr: HOUR }),
    },
  });
  context.waitUntil(cache.put(request, response.clone()));
  return response;
};

export const config: Config = {
  path: "/api/data",
};
```

Learn more in the Cache API documentation and the caching overview.
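To make the `ttl`/`swr` window concrete, here is a small illustrative sketch of the three states a cached response moves through as it ages. The real freshness logic lives inside `@netlify/cache`; the state names and function here are ours:

```javascript
// Illustrative model of a ttl/swr freshness window (not the library's API).
const SECOND = 1;
const ttl = 10 * SECOND; // served as-is while this fresh
const swr = 30 * SECOND; // after ttl, served stale while revalidating

function cacheState(ageSeconds) {
  if (ageSeconds <= ttl) return "fresh"; // served directly from cache
  if (ageSeconds <= ttl + swr) return "stale-while-revalidate"; // served now, refetched in background
  return "expired"; // must be fetched synchronously
}

console.log(cacheState(5));  // "fresh"
console.log(cacheState(20)); // "stale-while-revalidate"
console.log(cacheState(60)); // "expired"
```

The key property of SWR is the middle state: the visitor never waits on the slow upstream fetch, because the stale copy is returned immediately and the refresh happens off the request path.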