
Streamlining AI Infrastructure with Kubernetes and Network Automation
Why do telecom operators and enterprises struggle to scale their AI infrastructure efficiently? Mirantis, a leader in Kubernetes-native infrastructure for AI, and Netris, the premier provider of network automation and multi-tenancy for AI infrastructure, have announced a groundbreaking integration. This collaboration automates Kubernetes cluster delivery and data center networking, addressing two major operational bottlenecks: the lack of a standardized path to cluster deployment and the manual, fragmented network provisioning processes that slow infrastructure rollout.
The Complexity of AI Networking
The integration addresses a critical challenge in deploying AI infrastructure: the complexity of networking. According to Shaun O’Meara, Chief Technology Officer at Mirantis, “In deploying infrastructure for AI, the complexity of the networking is one of the primary challenges. Being able to integrate Netris as a building block to manage the network stack enables dynamic network orchestration supporting full-stack multi-tenancy.” This approach, combined with k0rdent AI, ensures a seamless GPU cloud experience.
The Network Bottleneck
Just as a traffic jam can paralyze a city, a manually provisioned and fragmented network can stall AI cloud operations. Netris eliminates this bottleneck by abstracting and automating Ethernet, InfiniBand, NVLink, and BlueField DPU fabrics. Alex Saroyan, CEO and co-founder of Netris, explains, “Every AI cloud operator hits the same ceiling – a network that is manually provisioned, fragmented, and doesn’t keep pace with compute. Working with Mirantis, that capability is now built into every Kubernetes cluster. Operators get the full stack without the manual work that has historically blocked scale.”
Automating Kubernetes and Network Delivery
Mirantis orchestrates the Kubernetes lifecycle, while Netris delivers network automation, abstraction, and multi-tenancy at the hardware layer. This integration turns GPU clusters into a repeatable, multi-tenant AI cloud product with networking and isolation enforced in hardware and delivered automatically at scale. Key capabilities include:
- Automated, orchestrated delivery of all Kubernetes cluster infrastructure components, including data center networking across NVIDIA Spectrum-X Ethernet, NVIDIA Quantum-X InfiniBand, and NVIDIA NVLink fabrics.
- Automation of data center networking for “east-west” traffic, such as NVIDIA Quantum-X InfiniBand and RoCE, as well as “north-south” traffic (data ingress and egress for the data center), to deliver predictable AI performance.
- Network automation, abstraction, and multi-tenancy with DPU-enabled tenant networking for greater tenant density and higher GPU utilization, lowering operating costs per cluster.
- Hardware-enforced multi-tenancy with isolation, fault tolerance, and data safety at the switch and DPU level, optimized for regulated and sovereign workloads.
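The hardware-enforced isolation described above is configured by the network layer at the switch and DPU level, below Kubernetes. At the cluster level, the same per-tenant isolation intent is commonly expressed as a default-deny NetworkPolicy scoped to a tenant namespace. The sketch below is illustrative only and uses plain Python with a made-up tenant name; it is not a Netris or Mirantis API, just the standard Kubernetes-level analog of tenant isolation:

```python
def tenant_isolation_policy(tenant_ns: str) -> dict:
    """Build a default-deny NetworkPolicy manifest for a tenant namespace.

    Illustrative sketch: hardware-level isolation (switch/DPU) is handled
    by the network automation layer; this expresses the equivalent intent
    in standard Kubernetes terms.
    """
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "default-deny-all", "namespace": tenant_ns},
        "spec": {
            # An empty podSelector matches every pod in the namespace.
            "podSelector": {},
            # Declaring both policy types with no ingress/egress rules
            # denies all traffic in and out by default.
            "policyTypes": ["Ingress", "Egress"],
        },
    }


if __name__ == "__main__":
    # "tenant-a" is a hypothetical tenant name for illustration.
    policy = tenant_isolation_policy("tenant-a")
    print(policy["metadata"]["namespace"])
```

In a multi-tenant GPU cloud, a policy like this would be stamped out per tenant namespace automatically at provisioning time, rather than hand-applied by operators.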
Future Outlook
The integration marks a significant step forward in the evolution of AI infrastructure. As AI workloads become more complex and data-intensive, the need for automated, scalable, and secure solutions will only grow. Mirantis and Netris are poised to lead this transformation, enabling operators to turn bare metal into revenue in days. The next milestone is to expand this integration to more network fabrics and AI platforms, ensuring that the ecosystem remains robust and adaptable.
Conclusion
This collaboration between Mirantis and Netris represents a pivotal moment for telecom operators and enterprises building AI infrastructure. By automating Kubernetes and network delivery, they are addressing critical operational bottlenecks and enabling seamless, scalable AI cloud operations. How is your organization preparing for this shift? Join the conversation in the comments below.
About Mirantis
Mirantis delivers the fastest path to profitable, scalable GPU cloud infrastructure for neoclouds and enterprise AI factories, with full-stack AI infrastructure technology that removes complexity and streamlines operations across the AI lifecycle, from Metal-to-Model. Through k0rdent AI and strategic partnerships with NVIDIA, Mirantis enables organizations to transform GPU cloud economics with production-grade multi-tenancy, intelligent workload orchestration, and automated operations that maximize utilization and profitability. With more than 20 years delivering mission-critical open source cloud technologies, Mirantis provides the end-to-end automation, enterprise security and governance, and deep expertise in Kubernetes and GPU orchestration that organizations need to reduce time to market and efficiently scale cloud native, virtualized, and GPU-powered applications across any environment – on-premises, public cloud, hybrid, or edge.
Mirantis serves many of the world’s leading enterprises and service providers, including Adobe, Ericsson, Inmarsat, MetLife, PayPal, and Societe Generale. Learn more at www.mirantis.com.
About Netris
Netris is the leading provider of network automation and multi-tenancy for AI infrastructure. The Netris NAAM (Network Automation, Abstraction, and Multi-Tenancy) platform is the most widely deployed of its kind — trusted by high-growth neoclouds, sovereign AI cloud providers, AI factories, and leading AI platform providers. Netris provides native integrations across the complete AI infrastructure networking stack — Ethernet, InfiniBand, DPUs, and virtual and edge networking. Netris enables operators to get a GPU cloud business operational in weeks instead of years, provision tenants immediately with hard network isolation configured automatically, maximize GPU utilization by dynamically reallocating capacity across tenants, ensure network stability, and future-proof AI infrastructure. Learn more at netris.io.
Source link: https://www.businesswire.com/



