Serverless Inference for GPU & Sovereign Cloud Providers

Deliver Generative AI (GenAI) models as a service in a scalable, secure, and cost-effective way, and unlock high margins, with Rafay’s turnkey Serverless Inference offering.

Available to Rafay customers and partners as part of the Rafay Platform, Serverless Inference empowers NVIDIA Cloud Partners (NCPs) and GPU Cloud Providers (GPU Clouds) to offer high-performing Generative AI models as a service, complete with token-based and time-based tracking, via a unified, OpenAI-compatible API.

With Serverless Inference, developers can sign up with regional NCPs and GPU Clouds to consume models as a service, letting them focus on building AI-powered apps without having to manage the underlying infrastructure.
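For illustration, here is a minimal sketch of what consuming such an OpenAI-compatible endpoint could look like from a developer's perspective, using the standard OpenAI Python SDK. The base URL, API token, and model name are placeholders, not actual Rafay or provider values.

```python
# Minimal sketch: calling an OpenAI-compatible serverless inference endpoint
# with the standard OpenAI Python SDK. Base URL, API token, and model name
# are placeholders, not actual Rafay or provider values.
from openai import OpenAI

client = OpenAI(
    base_url="https://inference.example-gpu-cloud.com/v1",  # provider's OpenAI-compatible endpoint (placeholder)
    api_key="YOUR_PROVIDER_API_TOKEN",                      # bearer token issued by the provider (placeholder)
)

response = client.chat.completions.create(
    model="llama-3.2-3b-instruct",  # placeholder; use a model the provider actually lists
    messages=[{"role": "user", "content": "Summarize serverless inference in one sentence."}],
)

print(response.choices[0].message.content)
```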

Serverless Inference is available AT NO ADDITIONAL COST to Rafay customers and partners.

Key Capabilities of Serverless Inference

Rafay’s Serverless Inference offering brings on-demand consumption of GenAI models to developers, with scalability, security, token- or time-based billing, and zero infrastructure overhead.

Plug-and-Play LLM Integration

Instantly deliver popular open-source LLMs (e.g., Llama 3.2, Qwen, DeepSeek) using OpenAI-compatible APIs to your customer base—no code changes required.
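As a hedged example of what "no code changes required" can mean in practice, an existing OpenAI client could discover the models a provider hosts simply by pointing at the provider's endpoint; only the base URL and API key change. The values below are placeholders.

```python
# Hedged sketch: an existing OpenAI client needs only a new base_url and
# api_key to list the models a provider hosts. Values are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://inference.example-gpu-cloud.com/v1",  # placeholder endpoint
    api_key="YOUR_PROVIDER_API_TOKEN",                       # placeholder token
)

# Enumerate hosted models (e.g. the Llama, Qwen, or DeepSeek variants the provider exposes)
for model in client.models.list():
    print(model.id)
```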

Serverless Access

Deliver a hassle-free, serverless experience to your customers looking for the latest and greatest GenAI models.

Token-Based Pricing & Visibility

Flexible usage-based billing with complete cost transparency and historical usage insights.
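Token-based billing is grounded in the usage object that OpenAI-compatible responses already return. The sketch below, with placeholder endpoint, token, and model values, shows how a developer could inspect that per-request usage data.

```python
# Illustrative sketch of per-request token accounting: OpenAI-compatible
# responses carry a usage object, the raw data behind token-based billing.
# Endpoint, token, and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://inference.example-gpu-cloud.com/v1",
    api_key="YOUR_PROVIDER_API_TOKEN",
)

response = client.chat.completions.create(
    model="qwen2.5-7b-instruct",  # placeholder model name
    messages=[{"role": "user", "content": "Hello!"}],
)

usage = response.usage
print(f"prompt tokens:     {usage.prompt_tokens}")
print(f"completion tokens: {usage.completion_tokens}")
print(f"total tokens:      {usage.total_tokens}")
```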

Secure & Auditable API Endpoints

HTTPS-only endpoints with bearer token authentication, full IP-level audit logs, and token lifecycle controls.
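The same endpoints can also be called directly over HTTPS with a bearer token in the Authorization header, which is the pattern described above. The following sketch uses placeholder URL, token, and model values, not real Rafay ones.

```python
# Sketch of the authentication pattern described above: an HTTPS-only endpoint
# that expects a bearer token in the Authorization header. URL, token, and
# model name are placeholders.
import requests

ENDPOINT = "https://inference.example-gpu-cloud.com/v1/chat/completions"  # placeholder
API_TOKEN = "YOUR_PROVIDER_API_TOKEN"                                     # placeholder

response = requests.post(
    ENDPOINT,
    headers={
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    },
    json={
        "model": "deepseek-r1-distill-llama-8b",  # placeholder model name
        "messages": [{"role": "user", "content": "Ping"}],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```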

Why DIY when you can FLY with Rafay's Serverless Inference offering?

Pre-optimized inference templates

Intelligent auto-scaling of GPU resources

Enterprise-grade security and token authentication

Built-in observability, cost tracking, audit logs

“We are able to deliver new, innovative products and services to the global market faster and manage them cost-effectively with Rafay.”

Joe Vaughan, Chief Technology Officer, MoneyGram
White paper

Hybrid Cloud Meets Kubernetes

Learn how to Streamline Kubernetes Ops in Hybrid Clouds with AWS & Rafay

Start a conversation with Rafay

Talk with Rafay experts to assess your infrastructure, explore your use cases, and see how teams like yours operationalize AI/ML and cloud-native initiatives with self-service and governance built in.