  • Grok 4.3 on AI Gateway

    Grok 4.3 is now available on Vercel AI Gateway. It has a December 2025 knowledge cutoff, a 1M token context window, and improvements in accuracy, tool calling, and instruction following.

    To use Grok 4.3, set model to xai/grok-4.3 in the AI SDK.

    import { streamText } from 'ai';

    const result = streamText({
      model: 'xai/grok-4.3',
      prompt: 'Analyze this dataset and summarize the key trends.',
    });

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in custom reporting, observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.
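
    As one illustration of the provider routing piece, the gateway accepts per-call provider options in the AI SDK. The sketch below assumes the gateway provider option's order field sets fallback priority; Grok is served by xAI, so the list is trivially short here, while models with multiple upstream providers would list several slugs.

    import { streamText } from 'ai';

    const result = streamText({
      model: 'xai/grok-4.3',
      prompt: 'Analyze this dataset and summarize the key trends.',
      providerOptions: {
        // Assumed gateway routing option: providers are tried in this order,
        // with automatic failover to the next entry on errors.
        gateway: { order: ['xai'] },
      },
    });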

    Learn more about AI Gateway, view the AI Gateway model leaderboard, or try it in our model playground.

  • Custom tags available in beta on Vercel Sandbox

    As teams scale isolated environments for AI agents, code generation, or dev workflows, keeping track of which sandbox belongs to whom, and why, becomes critical. Custom tags allow you to organize, filter, and manage Vercel Sandboxes at scale. Each sandbox supports up to five tags.

    Organize by environment, team, or customer

    Tags are flexible by design. Use them to separate staging from production, attribute usage to specific teams, or isolate sandboxes per customer in multi-tenant platforms:

    import { Sandbox } from '@vercel/sandbox';

    const sandbox = await Sandbox.create({
      name: "my-sandbox",
      tags: { env: "staging" },
    });

    Update tags as context changes

    Promote a sandbox from staging to production, reassign ownership, or mark it for cleanup without recreating it:

    await sandbox.update({
      tags: { env: "production", team: "infra" },
    });

    Easily track your sandboxes

    Filter sandboxes by any tag to quickly surface the ones that matter. This is useful for dashboards, cleanup scripts, or routing logic that needs to find all sandboxes matching a specific environment or team:

    const productionSandboxes = await Sandbox.list({
      tags: { env: "production" },
    });

    console.log(
      "Production sandboxes:",
      productionSandboxes.sandboxes.map((s) => s.name),
    ); // my-sandbox

    Use cases

    • AI agents at scale: Tag sandboxes by session, user, or agent run to track which execution environment belongs to which workflow (see the sketch after this list).

    • Multi-tenant platforms: Isolate and filter sandboxes per customer or workspace, making billing attribution and cleanup straightforward.

    • Team-level visibility: Attribute sandbox usage to specific teams for cost tracking or capacity planning.
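
    As a concrete sketch of the agent-run case: the snippet below tags each sandbox with the run that created it, then filters by that tag to find environments left over from a given run. The agentRun and owner tag names are illustrative, and the commented-out stop() call is an assumption about the beta SDK rather than a documented part of this release.

    import { Sandbox } from '@vercel/sandbox';

    // Create an execution environment tagged with the agent run that owns it.
    const sandbox = await Sandbox.create({
      name: "agent-task-sandbox",
      tags: { agentRun: "run-1234", owner: "agents-team" },
    });

    // Later, a cleanup script can find everything left over from that run.
    const leftovers = await Sandbox.list({ tags: { agentRun: "run-1234" } });
    for (const s of leftovers.sandboxes) {
      console.log("Cleaning up", s.name);
      // Assumed cleanup call; check the beta SDK for the exact method.
      // await s.stop();
    }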

    This feature is in beta and requires upgrading to the beta SDK and CLI packages. Learn more in the documentation.

    Andy Waller

  • Vercel now supports Pro plan in Stripe Projects

    You can now sign up for or upgrade to a Vercel Pro plan directly from Stripe Projects using shared payment tokens (SPTs). Agents and developers can manage plan changes programmatically from the Stripe CLI, without leaving their workflow.

    What’s new

    • Provision or upgrade to Vercel Pro directly from the Stripe CLI

    • Support for both upgrade and downgrade flows

    • Powered by shared payment tokens for secure, streamlined billing

    This builds on our Stripe Projects launch in developer preview by enabling end-to-end provisioning and billing in one place. Instead of switching between dashboards, you can now handle infrastructure setup and plan management directly from the terminal.

    Getting started

    If you’re already using Stripe Projects and have set up billing via stripe projects billing add, you can upgrade your Vercel plan from the CLI by running stripe projects add vercel/pro.

    If you are new to Stripe Projects, install the plugin and initialize your project:

    stripe plugin install projects
    stripe projects init my-app
    stripe projects add vercel/pro

    Tony Pan, Marc Brakken, Bhrigu Srivastava

  • Native Deployment Checks are now available

    You can now run lint and typecheck on every Vercel deployment, in parallel with the build. Native Deployment Checks are available to every team and join your existing Deployment Checks alongside GitHub and Marketplace integrations.

    Once a check is added from your project's Build and Deployment settings, Vercel runs the matching script from your package.json on each deployment and skips the check if no matching script exists. You can mark a check as required to hold the deployment back from production until it passes, and choose which environments each check runs on.
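
    For example, a package.json with lint and typecheck scripts like the following (the script names and commands are assumptions about what the checks match on) would have both commands run in parallel with the build:

    {
      "scripts": {
        "lint": "eslint .",
        "typecheck": "tsc --noEmit"
      }
    }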

    When a Native Deployment Check fails on a pull request, Vercel Agent investigates the failure and suggests a fix you can review and merge.


    Cody W, Jeffrey A, Shay C, Marcos G, William B

  • Hobby projects now default to 30-day deployment retention

    Starting April 29th, the maximum retention policy for Hobby plans will be capped at 30 days. Deployments outside your retention window will be automatically removed. This excludes your 10 most recent production deployments and any aliased deployments, which continue to be preserved regardless of retention settings.

    Pro and Enterprise plans are not affected.

    Learn more about Deployment Retention.

  • GPT-5.5 on AI Gateway

    GPT-5.5 is now available on Vercel AI Gateway.

    There are two variants: GPT-5.5 and GPT-5.5 Pro. Both models are tuned for long-running agentic work across coding, computer use, knowledge work, and scientific research, and are more token-efficient than the previous generation.

    GPT-5.5 is stronger at agentic coding and long-horizon work where the model needs to hold context across a large system and carry changes through the surrounding codebase. Paired with computer-use skills, it can operate real software and turn raw material into documents, spreadsheets, or slide presentations.

    GPT-5.5 Pro is built for demanding, multi-step work where response quality matters more than latency. Early testing shows gains in business, legal, education, data science, and technical research workflows that involve critiquing work over multiple passes and stress-testing arguments.

    To use GPT-5.5, set model to openai/gpt-5.5 or openai/gpt-5.5-pro in the AI SDK.

    import { streamText } from 'ai';

    const result = streamText({
      model: 'openai/gpt-5.5', // or 'openai/gpt-5.5-pro'
      prompt:
        `Migrate our user settings page from REST to the new
        GraphQL schema, update the affected components and tests,
        and open a PR with a summary of the changes.`,
    });
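
    To consume the streamed output from the snippet above, you can iterate the result's textStream (a short sketch; top-level await assumed):

    // textStream is an async iterable of text deltas from the model.
    for await (const chunk of result.textStream) {
      process.stdout.write(chunk);
    }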

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in custom reporting, observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

    Learn more about AI Gateway, view the AI Gateway model leaderboard, or try it in our model playground.

  • DeepSeek V4 on AI Gateway

    DeepSeek V4 is now available on Vercel AI Gateway.

    There are two model variants: DeepSeek V4 Pro and DeepSeek V4 Flash. Both models have a 1M token context window by default.

    DeepSeek V4 Pro focuses on agentic coding, formal mathematical reasoning, and long-horizon workflows. It handles feature development, bug fixing, and refactoring across stacks, with tool use that works across harnesses like MCP workflows and agent frameworks. It also writes clear, well-structured long-form documents.

    DeepSeek V4 Flash performs close to V4 Pro on reasoning and holds up on simpler agent tasks, with a smaller parameter size for faster responses and lower API cost. It's a good fit for high-volume workloads and latency-sensitive use cases.

    To use DeepSeek V4, set model to deepseek/deepseek-v4-pro or deepseek/deepseek-v4-flash in the AI SDK.

    import { streamText } from 'ai';

    const result = streamText({
      model: 'deepseek/deepseek-v4-pro', // or 'deepseek/deepseek-v4-flash'
      prompt:
        `Audit this repository for unsafe concurrent access patterns,
        propose a refactor that introduces proper synchronization,
        and open the changes as a PR with a migration plan.`,
    });
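
    Because V4 Pro leans on tool use, here is a minimal tool-calling sketch in the same style. The searchIssues tool and its implementation are hypothetical, and the tool(), inputSchema, and stepCountIs shapes assume AI SDK 5; check your SDK version for the exact field names.

    import { generateText, tool, stepCountIs } from 'ai';
    import { z } from 'zod';

    const { text } = await generateText({
      model: 'deepseek/deepseek-v4-pro',
      prompt: 'Find open concurrency-related issues and suggest which to fix first.',
      tools: {
        // Hypothetical tool; the model decides when to call it.
        searchIssues: tool({
          description: 'Search the issue tracker for open issues matching a query.',
          inputSchema: z.object({ query: z.string() }),
          execute: async ({ query }) => [
            // Stand-in result; replace with a real issue-tracker lookup.
            { id: 42, title: `Possible race condition matching "${query}"` },
          ],
        }),
      },
      // Allow a follow-up step so the model can answer after the tool result.
      stopWhen: stepCountIs(3),
    });

    console.log(text);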

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in custom reporting, observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

    Learn more about AI Gateway, view the AI Gateway model leaderboard, or try it in our model playground.

  • GPT Image 2 on AI Gateway

    GPT Image 2 is now available on Vercel AI Gateway.

    OpenAI's newest image model supports detailed instruction following, accurate placement and relationships between objects, and rendering of dense text across multiple aspect ratios.

    The model can render fine-grained elements including small text, iconography, UI elements, dense compositions, and subtle stylistic constraints, at up to 2K resolution. Non-English text is also supported and reads coherently.

    GPT Image 2 can produce photos, cinematic stills, pixel art, manga, and other distinct visual styles, with consistency in texture, lighting, composition, and detail. This suits workflows like game prototyping, storyboarding, marketing creative, and medium-specific asset generation.

    To use GPT Image 2, set model to openai/gpt-image-2 in the AI SDK, or try it directly in our model playground.

    import { generateImage } from 'ai';

    const result = await generateImage({
      model: 'openai/gpt-image-2',
      prompt: 'Poster of Vercel AI products, Bauhaus style.',
    });
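
    To persist the result from the snippet above, the generated image can be written straight to disk (a sketch assuming the returned image exposes a uint8Array view, as in recent AI SDK versions):

    import { writeFileSync } from 'node:fs';

    // Assumed shape: result.image carries the raw bytes of the generated image.
    writeFileSync('poster.png', result.image.uint8Array);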

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in custom reporting, observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

    Learn more about AI Gateway, view the AI Gateway model leaderboard, or try it in our model playground.