Liqid Introduces Advanced Composable Infrastructure to Accelerate Enterprise AI in On-Premises and Edge Data Centers

In a groundbreaking move to address the growing demands of artificial intelligence (AI) and other compute-intensive workloads, Liqid, the global leader in software-defined composable infrastructure, has announced a suite of innovative solutions designed to scale enterprise AI capabilities for on-premises data centers and edge environments. These advancements deliver high-performance, agile, and efficient scaling options for GPUs, memory, and storage, enabling organizations to optimize their infrastructure while minimizing costs associated with underutilized resources, power consumption, and cooling demands.

Optimizing AI Metrics: Tokens per Watt and Tokens per Dollar

As AI becomes a cornerstone of business strategy, enterprises require infrastructure solutions that can keep pace with evolving demands. Liqid’s software-defined composable infrastructure platforms are designed to provide granular scale-up and seamless scale-out capabilities, optimizing two critical AI metrics: tokens per watt and tokens per dollar. By eliminating static inefficiencies and transitioning to precise, on-demand resource allocation, Liqid’s solutions boost throughput while cutting power consumption by as much as half. This approach maximizes return on investment (ROI) for AI infrastructure, ensuring that every watt and dollar spent delivers optimal performance.
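As a back-of-the-envelope illustration, the two metrics are simple ratios of inference throughput to power draw and of lifetime token output to infrastructure spend. The sketch below uses hypothetical placeholder numbers, not Liqid benchmarks:

```python
def tokens_per_watt(tokens_per_second: float, power_watts: float) -> float:
    """Inference throughput normalized by power draw."""
    return tokens_per_second / power_watts

def tokens_per_dollar(total_tokens: float, total_cost_usd: float) -> float:
    """Lifetime tokens produced per dollar of infrastructure spend."""
    return total_tokens / total_cost_usd

# Illustrative comparison: a statically provisioned cluster that keeps idle
# GPUs powered vs. a right-sized allocation drawing only what the workload
# needs. All figures are hypothetical placeholders.
static = tokens_per_watt(tokens_per_second=5_000, power_watts=10_000)
right_sized = tokens_per_watt(tokens_per_second=5_000, power_watts=5_000)
print(f"static: {static:.2f} tok/W, right-sized: {right_sized:.2f} tok/W")
```

The same throughput at half the power doubles tokens per watt, which is the efficiency gain composability targets.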

To support enterprises in maximizing their AI initiatives and managing compute-hungry applications such as virtual desktop infrastructure (VDI), high-performance computing (HPC), and rendering, Liqid is introducing several key innovations:

Liqid Matrix® 3.6: Unified Software Interface for Real-Time Resource Management

At the heart of Liqid’s portfolio is Liqid Matrix® 3.6, a powerful software platform that provides a unified interface for managing composable GPU, memory, and storage resources in real time. This intuitive solution empowers IT teams to adapt quickly to dynamic and diverse workloads, achieving 100% balanced utilization across data center and edge environments.

Liqid Matrix seamlessly integrates with orchestration platforms like Kubernetes, VMware, and OpenShift, job schedulers such as Slurm, and automation tools like Ansible. This integration enables resource pooling and the creation of right-sized AI factories across the entire infrastructure, simplifying operations and driving efficiency.

PCIe Gen5 Composable GPU Platform: Next-Gen Scale-Up Performance

Liqid’s new EX-5410P, a 10-slot PCIe Gen5 composable GPU chassis, supports modern high-power GPUs, including NVIDIA H200, RTX Pro 6000, and Intel Gaudi 3, as well as accelerators, FPGAs, NVMe drives, and more. Part of Liqid’s Gen5 PCIe fabric, the EX-5410P leverages ultra-low-latency, high-bandwidth interconnects to deliver unparalleled performance, agility, and efficiency.

Liqid offers two composable GPU solutions tailored to specific needs:

  • UltraStack: Delivers peak performance by dedicating up to 30 GPUs to a single server.
  • SmartStack: Provides flexible resource sharing by pooling up to 30 GPUs across as many as 20 server nodes.

These solutions enable higher density and greater performance per rack unit while reducing power and cooling costs. Organizations can also mix and match accelerators to tailor performance to specific workloads, ensuring maximum flexibility.
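Conceptually, SmartStack-style pooling behaves like a shared free list of devices that server nodes claim and release on demand. The toy model below is a hypothetical illustration of that concept only; the real Liqid Matrix fabric composes physical devices over PCIe rather than in application software:

```python
class GpuPool:
    """Toy model of a disaggregated GPU pool shared across server nodes.

    Hypothetical sketch of the pooling concept; not the Liqid Matrix API.
    """

    def __init__(self, total_gpus: int):
        self.free = list(range(total_gpus))   # unallocated device IDs
        self.assigned = {}                    # node name -> device IDs

    def compose(self, node: str, count: int) -> list[int]:
        """Attach `count` GPUs from the shared pool to a server node."""
        if count > len(self.free):
            raise RuntimeError(f"only {len(self.free)} GPUs free")
        gpus, self.free = self.free[:count], self.free[count:]
        self.assigned.setdefault(node, []).extend(gpus)
        return gpus

    def release(self, node: str) -> None:
        """Return a node's GPUs to the pool for reuse elsewhere."""
        self.free.extend(self.assigned.pop(node, []))

# A 30-GPU pool reallocated across nodes as workloads change.
pool = GpuPool(total_gpus=30)
pool.compose("node-a", 8)    # training job starts on node-a
pool.compose("node-b", 4)    # inference service on node-b
pool.release("node-a")       # job finishes; its GPUs return to the pool
pool.compose("node-c", 20)   # a large job claims the freed capacity
```

The point of the sketch is the lifecycle: capacity freed by one node is immediately available to any other, which is what lifts utilization above what static per-server GPU counts allow.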

Breakthrough Composable Memory Solution: Powering Memory-Hungry Applications

Liqid’s new EX-5410C, built on the CXL 2.0 standard, represents a breakthrough in composable memory technology. Designed to support memory-hungry applications such as large language models (LLMs) and in-memory databases, the EX-5410C disaggregates and pools DRAM, allowing memory to be allocated dynamically based on workload demands.

Powered by Liqid Matrix software, this solution ensures better utilization, reduces memory overprovisioning, and accelerates performance for memory-bound AI workloads. Liqid’s composable memory solution is the industry’s first fully disaggregated, software-defined offering, supporting up to 100TB of memory. As with its GPU offerings, Liqid provides two configurations:

  • UltraStack: Dedicates up to 100TB of memory to a single server for uncompromised performance.
  • SmartStack: Dynamically pools and shares up to 100TB of memory across as many as 32 server nodes.

Ultra-Performance NVMe Storage: Unmatched Bandwidth and Capacity

Liqid’s LQD-5500 NVMe storage device sets a new standard for speed, scalability, and reliability. Offering 128TB of capacity, 50GB/s bandwidth, and over 6 million IOPS, the LQD-5500 combines ultra-low latency with high performance in a standard NVMe form factor. Ideal for AI, HPC, and real-time analytics, this solution delivers enterprise-grade performance to meet the most demanding workloads.

A New Approach to AI Infrastructure

“With generative AI moving on-premises for inference, reasoning, and agentic use cases, it’s pushing data center and edge infrastructure to its limits,” said Edgar Masri, CEO of Liqid. “Enterprises need a new approach to meet these demands and be future-ready in terms of supporting new GPUs, new LLMs, and workload uncertainty, without exceeding power budgets. With today’s announcement, Liqid advances its leadership in delivering the performance, agility, and efficiency needed to maximize every watt and dollar as enterprises scale up and scale out to meet unprecedented demand.”

Revolutionizing Resource Allocation

Liqid’s solutions create disaggregated pools of GPUs, memory, and storage, enabling high-performance, agile, and efficient on-demand resource allocation. Compared to traditional GPU-enabled servers, Liqid’s composable infrastructure outperforms in scale-up scenarios while delivering unmatched agility and flexibility for scale-out demands. Its open, standards-based foundation reduces complexity, space requirements, and power overhead, addressing the challenges typically associated with scaling multiple high-end servers.

Driving the Future of Enterprise AI

Liqid’s latest innovations position the company at the forefront of the AI infrastructure revolution. By delivering unmatched performance, agility, and efficiency, Liqid empowers enterprises to tackle the most demanding AI workloads while optimizing costs and sustainability. As organizations continue to navigate the complexities of AI-driven transformation, Liqid’s composable infrastructure solutions provide the tools needed to stay competitive in an increasingly data-centric world.

About Liqid

Liqid is the leader in software-defined composable infrastructure, delivering flexible, high-performance, and efficient on-premises data center and edge solutions for AI inferencing, VDI, and HPC, serving financial services, higher education, healthcare, telecommunications service providers, media & entertainment, and government organizations.

Liqid enables customers to manage, configure, reconfigure, and scale essential compute, accelerators (GPU, DPU, TPU, FPGA), memory, storage, and networking into physical bare metal server systems in seconds. Liqid customers can optimize their IT infrastructure and achieve up to 100% GPU and memory utilization for maximum tokens per watt and dollar.
