Rafay Systems Accelerates Enterprise AI with NVIDIA BlueField-3 DPUs and RTX PRO Servers

Rafay Systems today announced new integrations with NVIDIA BlueField-3 DPUs and NVIDIA RTX PRO Servers featuring the RTX PRO 6000 Blackwell Server Edition GPUs, marking a significant step forward for enterprise-ready AI infrastructure. The combination of Rafay’s Kubernetes Operations Platform with NVIDIA’s advanced hardware and AI software ecosystem gives enterprises and NVIDIA Cloud Partners (NCPs) a fast, secure, and scalable foundation for deploying AI services at production scale. These integrations are fully aligned with the NVIDIA AI Enterprise software platform, including support for NVIDIA NIM microservices, ensuring seamless compatibility and performance optimization.

Simplifying Enterprise AI Deployment

With this latest announcement, Rafay customers can now leverage NVIDIA RTX PRO Servers as part of their AI Factories: dedicated environments designed to accelerate model training and inference while improving operational efficiency. These servers can be deployed alongside NVIDIA BlueField-3 DPUs, enabling high-performance services such as F5’s BIG-IP Next for Kubernetes, which provides advanced networking, security, load balancing, and traffic management. By integrating these components, Rafay dramatically reduces the operational complexity traditionally associated with setting up and managing AI infrastructure.

This latest RTX PRO 6000 integration expands on Rafay’s ongoing collaboration with the NVIDIA DOCA Platform Framework (DPF), which enables a streamlined approach to deploying, orchestrating, and scaling BlueField-accelerated infrastructure. The combined solution empowers enterprises to harness the power of AI without compromising governance, security, or cost efficiency.

Streamlined Operations with the DOCA Platform Framework

Rafay’s integration with DPF allows DevOps and platform teams to stand up the entire DPU hardware and software stack in a single, automated step. Using reusable templates and GitOps-driven workflows, teams can maintain consistency from staging to production, manage lifecycle operations with policy-based governance, and scale DPU-enabled workloads effortlessly across hybrid and edge environments.
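To make that single-step, GitOps-driven flow more concrete, here is a minimal sketch that applies a declarative DPU-service manifest with the Kubernetes Python client, roughly as a template checked into Git would be reconciled. The resource group, kind, and field names are illustrative assumptions, not actual Rafay or DOCA Platform Framework APIs.

```python
# Illustrative sketch only: the CRD group, kind, and spec fields below are
# hypothetical stand-ins, not the actual Rafay or NVIDIA DPF resource types.
from kubernetes import client, config

# In a GitOps workflow this manifest would live in a Git repository and be
# applied by a reconciliation controller; it is applied directly here for brevity.
dpu_service_manifest = {
    "apiVersion": "dpu.example.com/v1alpha1",   # hypothetical group/version
    "kind": "DPUService",                        # hypothetical kind
    "metadata": {"name": "bluefield-networking", "namespace": "dpf-system"},
    "spec": {
        "image": "registry.example.com/doca-service:latest",  # placeholder image
        "nodeSelector": {"dpu.example.com/present": "true"},
        "replicasPerNode": 1,
    },
}

def apply_dpu_service(manifest: dict) -> None:
    """Create the custom resource that a DPF-style operator would reconcile."""
    config.load_kube_config()  # use load_incluster_config() when running in-cluster
    api = client.CustomObjectsApi()
    api.create_namespaced_custom_object(
        group="dpu.example.com",
        version="v1alpha1",
        namespace=manifest["metadata"]["namespace"],
        plural="dpuservices",
        body=manifest,
    )

if __name__ == "__main__":
    apply_dpu_service(dpu_service_manifest)
```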

The DOCA software framework and DPF introduce prebuilt microservices that support hardware-accelerated networking, distributed routing, real-time threat detection, and service orchestration. These capabilities offload data-plane tasks from the CPU to the DPU, freeing up GPUs to handle AI-intensive workloads and improving overall performance and resource efficiency.

Host-Trusted Security and Workload Isolation

Security is a core advantage of Rafay’s integration with NVIDIA RTX PRO Servers. The RTX PRO 6000 Server Edition offers robust data-center-grade protection, including secure boot with a hardware root of trust, confidential computing support, and MIG-based GPU partitioning for multi-tenant environments.

Through Rafay’s policy-driven automation, enterprises can enforce these controls consistently across their entire GPU fleet—ensuring only trusted GPU firmware runs at boot, segmenting resources per tenant or workload, and maintaining strict compliance with enterprise security frameworks. This architecture supports both zero-trust and multi-tenant environments where secure isolation is critical.
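As background on how MIG partitioning is typically driven at the node level, the sketch below shells out to the standard nvidia-smi CLI to enable MIG mode and carve a GPU into instances. The profile ID is a placeholder, root privileges are assumed, and the policy layer that a platform like Rafay adds on top is outside the scope of this example.

```python
# Node-level sketch of MIG partitioning via the standard nvidia-smi CLI.
# The profile ID below is a placeholder; exact profiles and output vary by
# GPU model and driver version, and these commands typically require root.
import subprocess

def run(cmd: list[str]) -> str:
    """Run a command and return its stdout, raising if it fails."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

def partition_gpu(gpu_index: int = 0) -> None:
    # Enable MIG mode on the target GPU (a GPU reset may be needed to apply it).
    run(["nvidia-smi", "-i", str(gpu_index), "-mig", "1"])

    # List the GPU instance profiles this hardware exposes.
    print(run(["nvidia-smi", "mig", "-lgip"]))

    # Create a GPU instance (and, with -C, its default compute instance) from a
    # chosen profile; "9" is a placeholder taken from the listing above.
    run(["nvidia-smi", "mig", "-i", str(gpu_index), "-cgi", "9", "-C"])

if __name__ == "__main__":
    partition_gpu(0)
```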

Benefits for Enterprises and Cloud Partners

The combined solution delivers clear and measurable outcomes for enterprises and NVIDIA Cloud Partners:

  • Security & Isolation by Design (BlueField-3 + DOCA): BlueField-3 DPUs accelerate networking, storage, and security workloads, enabling secure multi-tenancy and zero-trust isolation. These capabilities are validated on RTX PRO Servers, making the platform ideal for large-scale AI deployments across hybrid and multi-cloud environments.
  • Universal GPU Platform (RTX PRO Servers): Each RTX PRO Server can be configured with up to eight RTX PRO 6000 Server Edition GPUs, each offering 96GB of GDDR7 memory. With Multi-Instance GPU (MIG) support—up to four 24GB partitions per GPU—enterprises can run diverse workloads, from generative and agentic AI to visual computing and scientific simulations, on a single system with predictable performance and isolation.
  • Business-Ready Consumption Model: Rafay’s platform simplifies operational management with self-service provisioning, enterprise access controls (SSO/RBAC), resource quotas, and chargeback capabilities. Enterprises can deploy NVIDIA AI Enterprise and NIM microservices with full lifecycle management, accelerating the path from pilot testing to full-scale production AI (a per-tenant quota sketch follows this list).
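To picture the resource-quota piece in generic Kubernetes terms, the sketch below creates a per-tenant GPU quota with the Kubernetes Python client. The namespace name and limit are assumptions, and this is standard Kubernetes machinery rather than Rafay’s own API.

```python
# Generic Kubernetes sketch of a per-tenant GPU quota; the namespace and limit
# are illustrative assumptions, not Rafay platform APIs.
from kubernetes import client, config

def apply_gpu_quota(namespace: str = "tenant-a", gpu_limit: str = "4") -> None:
    """Create a ResourceQuota that caps GPU requests in a tenant namespace."""
    config.load_kube_config()
    api = client.CoreV1Api()

    quota = client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name="gpu-quota", namespace=namespace),
        spec=client.V1ResourceQuotaSpec(
            # 'requests.nvidia.com/gpu' is the extended resource exposed by the
            # NVIDIA device plugin; MIG strategies may expose finer-grained names.
            hard={"requests.nvidia.com/gpu": gpu_limit}
        ),
    )
    api.create_namespaced_resource_quota(namespace=namespace, body=quota)

if __name__ == "__main__":
    apply_gpu_quota("tenant-a", "4")
```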

Accelerating the Path to AI at Scale

By integrating NVIDIA’s latest data center and GPU technologies, Rafay Systems is giving enterprises the tools to rapidly deploy secure, scalable, and high-performance AI workloads without requiring deep infrastructure expertise. The partnership unites Rafay’s operational automation with NVIDIA’s hardware and software ecosystem, offering a clear path to AI Factory readiness—where businesses can innovate faster, operate more securely, and bring AI products to market with greater speed and control.

Source link: https://www.businesswire.com/
