Posts tagged "Ai-gateway"
-
OpenAI’s GPT-5.1-Codex-Max model is now available through Netlify’s AI Gateway and Agent Runners with zero configuration required.
Use the OpenAI SDK directly in your Netlify Functions without managing API keys or authentication. The AI Gateway handles everything automatically. Here’s an example using the GPT-5.1-Codex-Max model:
import OpenAI from 'openai';

export default async () => {
  const openai = new OpenAI();

  const response = await openai.responses.create({
    model: 'gpt-5.1-codex-max',
    input: 'What improvements are in GPT-5.1-Codex-Max?'
  });

  return new Response(JSON.stringify(response), {
    headers: { 'Content-Type': 'application/json' }
  });
};

GPT-5.1-Codex-Max is available across Background Functions, Scheduled Functions, and Edge Functions. You get automatic access to Netlify’s caching, rate limiting, and authentication infrastructure.
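As a minimal sketch, the same client also works in a Scheduled Function: export Netlify’s standard config object with a schedule field and the AI Gateway still injects credentials automatically. The filename and prompt below are illustrative assumptions, not part of this release:

// netlify/functions/daily-summary.mts (illustrative filename)
import OpenAI from 'openai';

export default async () => {
  const openai = new OpenAI();

  // The prompt is a placeholder; the gateway handles authentication as usual.
  const response = await openai.responses.create({
    model: 'gpt-5.1-codex-max',
    input: 'Summarize the latest build logs.'
  });

  console.log(response.output_text);
};

// Standard Netlify scheduled function configuration.
export const config = {
  schedule: '@daily'
};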
Learn more in the AI Gateway documentation.
You can also leverage GPT-5.1-Codex-Max with Agent Runners to build powerful AI-powered workflows, including expanded tool use and support for long-running agent tasks. Learn more in the Agent Runners documentation.
-
You can now use AI Gateway in local development with just npm run dev when using the Netlify Vite Plugin. Previously, AI Gateway’s auto-configured environment variables only worked when running netlify dev, which added friction for developers using Vite-powered frameworks like Astro.

With this update, AI Gateway environment variables are automatically populated when running your Vite development server directly. This means you can run standard framework commands without extra steps:
# Works with any Vite-based framework
npm run dev

This is part of our ongoing effort to streamline the developer experience for Vite frameworks. Modern frameworks like Astro let you specify Netlify as your deployment target and handle everything automatically, and now AI Gateway works the same way.
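For reference, a minimal Vite configuration with the plugin might look like the sketch below. It assumes the @netlify/vite-plugin package name; check the documentation linked at the end of this post for the exact setup in your framework:

// vite.config.ts — minimal sketch assuming the @netlify/vite-plugin package
import { defineConfig } from 'vite';
import netlify from '@netlify/vite-plugin';

export default defineConfig({
  // With the plugin enabled, `npm run dev` also picks up the
  // AI Gateway's auto-configured environment variables.
  plugins: [netlify()],
});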
This change also improves compatibility with AI coding agents and other automated workflows that expect standard development commands to work without additional configuration.
Learn more about the Netlify Vite Plugin and AI Gateway in the documentation.
-
Anthropic’s Claude Opus 4.5 model is now available through Netlify’s AI Gateway with zero configuration required.
Use the Anthropic SDK directly in your Netlify Functions without managing API keys or authentication. The AI Gateway handles everything automatically. Here’s an example using the Claude Opus 4.5 model:
import Anthropic from "@anthropic-ai/sdk";

export default async () => {
  const anthropic = new Anthropic();

  const response = await anthropic.messages.create({
    model: "claude-opus-4-5-20251101",
    max_tokens: 4096,
    messages: [
      {
        role: "user",
        content: "Give me pros and cons of using claude-opus-4-5 over other models."
      },
    ],
  });

  return new Response(JSON.stringify(response), {
    headers: { "Content-Type": "application/json" }
  });
};

Claude Opus 4.5 is available across Background Functions, Scheduled Functions, and Edge Functions. You get automatic access to Netlify’s caching, rate limiting, and authentication infrastructure.
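Background Functions are a natural fit for longer Claude Opus 4.5 requests. The sketch below assumes an illustrative filename and prompt; the -background suffix is Netlify’s existing convention for marking a Background Function:

// netlify/functions/long-report-background.mts (the -background suffix makes this a Background Function)
import Anthropic from "@anthropic-ai/sdk";

export default async () => {
  const anthropic = new Anthropic();

  // Longer-running generation can exceed normal function limits, so it runs in the background.
  const response = await anthropic.messages.create({
    model: "claude-opus-4-5-20251101",
    max_tokens: 8192,
    messages: [{ role: "user", content: "Draft a detailed migration report." }],
  });

  console.log(response.content);
};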
Learn more in the AI Gateway documentation.
You can also access the newest Claude Code capabilities via Agent Runners, including expanded tool use and support for long-running agent workflows. Learn more in the Agent Runners documentation.
-
Google’s Gemini 3 Pro Preview model is now available through Netlify’s AI Gateway and Agent Runners with zero configuration required.
Use the Google GenAI SDK directly in your Netlify Functions without managing API keys or authentication. The AI Gateway handles everything automatically. Here’s an example using the Gemini 3 Pro Preview model:
import { GoogleGenAI } from "@google/genai";
import type { Context } from "@netlify/functions";

export default async (request: Request, context: Context) => {
  const ai = new GoogleGenAI({});

  const response = await ai.models.generateContent({
    model: "gemini-3-pro-preview",
    contents: "Explain why gemini 3 is better than other models",
  });

  return new Response(JSON.stringify({ answer: response.text }), {
    headers: { "Content-Type": "application/json" }
  });
};

Gemini 3 is available across Background Functions, Scheduled Functions, and Agent Runners. You get automatic access to Netlify’s caching, rate limiting, and authentication infrastructure.
Learn more in the AI Gateway documentation and Agent Runners documentation.
-
OpenAI’s latest GPT-5.1 model (gpt-5.1) is now available through Netlify’s AI Gateway. This model brings enhanced performance and efficiency with no additional setup required.

Use the OpenAI SDK directly in your Netlify Functions without managing API keys. The AI Gateway handles authentication, rate limiting, and caching automatically. Here’s an example using the GPT-5.1 model:

import { OpenAI } from "openai";

export default async () => {
  const openai = new OpenAI();

  const response = await openai.chat.completions.create({
    model: "gpt-5.1",
    messages: [
      {
        role: "user",
        content: "Compare GPT-5.1's improvements over GPT-5 Pro"
      }
    ]
  });

  return new Response(JSON.stringify(response), {
    headers: { "Content-Type": "application/json" }
  });
};

The GPT-5.1 model works seamlessly across Edge Functions, Background Functions, and Scheduled Functions. You also get access to Netlify’s advanced caching primitives and built-in rate limiting.
Learn more in the AI Gateway documentation.