Posts tagged "Ai-gateway"
-
Google’s Gemini 3.1 Pro Preview model is now available through Netlify’s AI Gateway with zero configuration required.
Use the Google GenAI SDK directly in your Netlify Functions without managing API keys or authentication. The AI Gateway handles everything automatically. Here’s an example using the Gemini 3.1 Pro Preview model:
import { GoogleGenAI } from '@google/genai';

export default async () => {
  const ai = new GoogleGenAI({});

  const response = await ai.models.generateContent({
    model: 'gemini-3.1-pro-preview',
    contents: 'How can AI improve my workflow?'
  });

  return Response.json(response);
};

Gemini 3.1 Pro Preview is available for all Function types. You get automatic access to Netlify’s caching, rate limiting, and authentication infrastructure.
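If you want to stream tokens back to the client rather than wait for the full completion, the same SDK also exposes a streaming call. Here’s a minimal sketch, assuming the standard generateContentStream API from @google/genai and a streamed Response body in your Function:

import { GoogleGenAI } from '@google/genai';

export default async () => {
  const ai = new GoogleGenAI({});

  // Request a streamed generation instead of a single completed response
  const stream = await ai.models.generateContentStream({
    model: 'gemini-3.1-pro-preview',
    contents: 'How can AI improve my workflow?'
  });

  // Forward each text chunk to the client as it arrives
  const body = new ReadableStream({
    async start(controller) {
      const encoder = new TextEncoder();
      for await (const chunk of stream) {
        controller.enqueue(encoder.encode(chunk.text ?? ''));
      }
      controller.close();
    }
  });

  return new Response(body, {
    headers: { 'Content-Type': 'text/plain; charset=utf-8' }
  });
};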
Learn more in the AI Gateway documentation.
-
Anthropic’s Claude Sonnet 4.6 model is now available through Netlify’s AI Gateway and Agent Runners with zero configuration required.
Use the Anthropic SDK directly in your Netlify Functions without managing API keys or authentication. The AI Gateway handles everything automatically. Here’s an example using the Claude Sonnet 4.6 model:
import Anthropic from '@anthropic-ai/sdk';

export default async () => {
  const anthropic = new Anthropic();

  const response = await anthropic.messages.create({
    model: 'claude-sonnet-4-6',
    max_tokens: 4096,
    messages: [
      {
        role: 'user',
        content: 'How can AI improve my coding?'
      }
    ]
  });

  return Response.json(response);
};

Claude Sonnet 4.6 is available for all Function types and Agent Runners. You get automatic access to Netlify’s caching, rate limiting, and authentication infrastructure.
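For longer generations you can stream the reply instead of waiting for it to finish. Here’s a minimal sketch, assuming the Anthropic SDK’s messages.stream helper and a streamed Response body in your Function:

import Anthropic from '@anthropic-ai/sdk';

export default async () => {
  const anthropic = new Anthropic();

  // Open a streaming message instead of waiting for the full completion
  const stream = anthropic.messages.stream({
    model: 'claude-sonnet-4-6',
    max_tokens: 4096,
    messages: [{ role: 'user', content: 'How can AI improve my coding?' }]
  });

  // Forward text deltas to the client as they arrive
  const body = new ReadableStream({
    async start(controller) {
      const encoder = new TextEncoder();
      stream.on('text', (text) => controller.enqueue(encoder.encode(text)));
      await stream.finalMessage();
      controller.close();
    }
  });

  return new Response(body, {
    headers: { 'Content-Type': 'text/plain; charset=utf-8' }
  });
};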
Learn more in the AI Gateway documentation and Agent Runners documentation.
-
Anthropic’s Claude Opus 4.6 model is now available through Netlify’s AI Gateway and Agent Runners with zero configuration required.
Use the Anthropic SDK directly in your Netlify Functions without managing API keys or authentication. The AI Gateway handles everything automatically. Here’s an example using the Claude Opus 4.6 model:
import Anthropic from '@anthropic-ai/sdk';

export default async () => {
  const anthropic = new Anthropic();

  const response = await anthropic.messages.create({
    model: 'claude-opus-4-6',
    max_tokens: 4096,
    messages: [
      {
        role: 'user',
        content: 'How can AI improve my coding?'
      }
    ]
  });

  return new Response(JSON.stringify(response), {
    headers: { 'Content-Type': 'application/json' }
  });
};

Claude Opus 4.6 is available for all Function types and Agent Runners. You get automatic access to Netlify’s caching, rate limiting, and authentication infrastructure.
Learn more in the AI Gateway documentation and Agent Runners documentation.
-
OpenAI’s GPT-5.2-Codex model is now available through Netlify’s AI Gateway and Agent Runners with zero configuration required.
Use the OpenAI SDK directly in your Netlify Functions without managing API keys or authentication. The AI Gateway handles everything automatically. Here’s an example using the GPT-5.2-Codex model:
import OpenAI from 'openai';

export default async () => {
  const openai = new OpenAI();

  const response = await openai.responses.create({
    model: 'gpt-5.2-codex',
    input: 'How does AI work?'
  });

  return new Response(JSON.stringify(response), {
    headers: { 'Content-Type': 'application/json' }
  });
};

GPT-5.2-Codex is available across Background Functions, Scheduled Functions, and Agent Runners. You get automatic access to Netlify’s caching, rate limiting, and authentication infrastructure.
Learn more in the AI Gateway documentation and Agent Runners documentation.
-
Google’s Gemini 3 Flash Preview is now available through AI Gateway. You can call this model from Netlify Functions without configuring API keys; the AI Gateway provides the connection to Google for you.
Example usage in a Function:
import { GoogleGenAI } from '@google/genai';
import type { Context } from '@netlify/functions';

export default async (request: Request, context: Context) => {
  const ai = new GoogleGenAI({});

  const response = await ai.models.generateContent({
    model: 'gemini-3-flash-preview',
    contents: 'How does AI work?'
  });

  return new Response(JSON.stringify({ answer: response.text }), {
    headers: { 'Content-Type': 'application/json' }
  });
};

This model works across any function type and is compatible with other Netlify primitives such as caching and rate limiting, giving you control over request behavior across your site.
See the AI Gateway documentation for details.
-
OpenAI’s GPT-image-1.5 is now available through AI Gateway. You can call this model from Netlify Functions without configuring API keys; the AI Gateway provides the connection to OpenAI for you.
Example usage in a Function:
import OpenAI from 'openai';

const ai = new OpenAI();

export default async (req, context) => {
  const response = await ai.images.generate({
    model: 'gpt-image-1.5',
    prompt: 'Generate a realistic image of a golden retriever working in an office',
    n: 1,
    size: '1024x1024',
    quality: 'low',
    output_format: 'jpeg',
    output_compression: 80
  });

  const imageBase64 = response.data[0].b64_json;
  const imageBuffer = Uint8Array.from(atob(imageBase64), (c) => c.charCodeAt(0));

  return new Response(imageBuffer, {
    status: 200,
    headers: {
      'content-type': 'image/jpeg',
      'cache-control': 'no-store'
    }
  });
};

This model works across any function type and is compatible with other Netlify primitives such as caching and rate limiting, giving you control over request behavior across your site.
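The example above disables caching with cache-control: no-store. Because image generation is comparatively slow and expensive, you may instead want to cache the result when the prompt is fixed. As an illustrative sketch (the TTL values are placeholders), you could swap the return statement for one that also sets Netlify’s CDN cache header:

// Cache the generated image at the CDN for a day, with a shorter TTL in browsers
return new Response(imageBuffer, {
  status: 200,
  headers: {
    'content-type': 'image/jpeg',
    'cache-control': 'public, max-age=300',
    'netlify-cdn-cache-control': 'public, max-age=86400'
  }
});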
See the AI Gateway documentation for details.
-
AI Gateway is now generally available (GA) for all Netlify users. Build AI-powered apps with confidence using our fully managed gateway that handles AI model keys, setup, and monitoring automatically.
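As a quick illustration, here’s a minimal sketch of a Function that calls a model through the gateway, with no API keys or environment variables configured anywhere (the model shown is just an example; any model supported by the gateway works the same way):

import OpenAI from 'openai';

export default async () => {
  // No API key is passed: the AI Gateway injects credentials automatically
  const openai = new OpenAI();

  const response = await openai.chat.completions.create({
    model: 'gpt-5.2',
    messages: [{ role: 'user', content: 'Summarize what the AI Gateway does.' }]
  });

  return Response.json(response);
};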
For a deeper dive into AI Gateway capabilities, check out our latest blog post.
For a video overview of how the AI Gateway works with a fun demo project, check out our AI Gateway gameshow demo.
Learn more in our AI Gateway documentation.
Availability
To use AI Gateway, you must have a Credit-based plan or an Enterprise plan with AI Gateway enabled.
Learn more about pricing for AI features and monitoring their usage.
To request access to the AI Gateway for an Enterprise plan, reach out to your Netlify account manager.
-
OpenAI’s GPT-5.2 and GPT-5.2-Pro are now available through AI Gateway and Agent Runners. You can call these models from Netlify Functions without configuring API keys; the AI Gateway provides the connection to OpenAI for you.
Example usage in a Function:
import { OpenAI } from "openai";export default async () => {const openai = new OpenAI();const response = await openai.chat.completions.create({model: "gpt-5.2",messages: [{ role: "user", content: "What are the key improvements in GPT-5.2?" }]});return new Response(JSON.stringify(response), {headers: { "Content-Type": "application/json" }});};These models work across any function type and are compatible with other Netlify primitives such as caching and rate limiting, giving you control over request behavior across your site.
See the AI Gateway documentation for details.
Agent Runners support the same models, enabling AI to complete long-running coding tasks. You can learn more in the Agent Runners documentation.
-
OpenAI’s GPT-5.1-Codex-Max model is now available through Netlify’s AI Gateway and Agent Runners with zero configuration required.
Use the OpenAI SDK directly in your Netlify Functions without managing API keys or authentication. The AI Gateway handles everything automatically. Here’s an example using the GPT-5.1-Codex-Max model:
import OpenAI from 'openai';

export default async () => {
  const openai = new OpenAI();

  const response = await openai.responses.create({
    model: 'gpt-5.1-codex-max',
    input: 'What improvements are in GPT-5.1-Codex-Max?'
  });

  return new Response(JSON.stringify(response), {
    headers: { 'Content-Type': 'application/json' }
  });
};

GPT-5.1-Codex-Max is available across Background Functions, Scheduled Functions, and Edge Functions. You get automatic access to Netlify’s caching, rate limiting, and authentication infrastructure.
Learn more in the AI Gateway documentation.
You can also leverage GPT-5.1-Codex-Max with Agent Runners to build AI-powered workflows, including expanded tool use and support for long-running agent tasks. Learn more in the Agent Runners documentation.