Micron and NVIDIA Partner to Accelerate AI Innovation Across Data Centers and Edge Computing
At GTC 2025, Micron Technology, Inc. (Nasdaq: MU) reinforced its leadership in the rapidly evolving artificial intelligence (AI) landscape by unveiling groundbreaking memory solutions designed to power AI workloads from the data center to the edge. As AI continues to drive a paradigm shift in computing, high-performance memory solutions have become essential to unlocking the full potential of GPUs and processors. Micron’s latest innovations, developed in collaboration with NVIDIA, highlight the company’s pivotal role in advancing AI infrastructure.
Micron Leads the Way with HBM3E and SOCAMM Solutions
Micron has solidified its position as the world’s first and only memory provider shipping both HBM3E and SOCAMM™ (small outline compression attached memory module) products for AI servers in data centers. These cutting-edge solutions are integral to NVIDIA’s next-generation platforms, including the NVIDIA GB300 Grace Blackwell Ultra Superchip and the NVIDIA HGX™ B300 NVL16 and GB300 NVL72 systems.
The Micron HBM3E 12H 36GB is designed into the NVIDIA HGX B300 NVL16 and GB300 NVL72 platforms, while the HBM3E 8H 24GB supports the NVIDIA HGX B200 and GB200 NVL72 systems. These deployments underscore Micron’s critical contributions to accelerating AI training and inference workloads on NVIDIA’s Blackwell architecture.
“AI is driving a paradigm shift in computing, and memory is at the heart of this evolution,” said Raj Narasimhan, Senior Vice President and General Manager of Micron’s Compute and Networking Business Unit. “Micron’s contributions to the NVIDIA Grace Blackwell platform yield significant performance and power-saving benefits for AI training and inference applications. HBM and LP memory solutions help unlock improved computational capabilities for GPUs.”
Introducing SOCAMM: A New Standard for AI Memory
Micron’s SOCAMM™ solution, now in volume production, represents a breakthrough in AI memory performance and efficiency. This modular LPDDR5X memory solution delivers accelerated data processing, superior performance, unmatched power efficiency, and enhanced serviceability, making it ideal for the growing demands of AI workloads.
Key features of Micron’s SOCAMM include:
- Highest Bandwidth: SOCAMMs provide over 2.5 times the bandwidth of RDIMMs at the same capacity, enabling faster access to large training datasets and more complex models while boosting throughput for inference workloads.
- Smallest Form Factor: At just 14x90mm, a SOCAMM occupies one-third the footprint of a standard RDIMM, allowing for compact and efficient server designs.
- Lowest Power Consumption: Leveraging LPDDR5X memory, SOCAMMs consume one-third the power of DDR5 RDIMMs, optimizing power efficiency in AI architectures.
- Highest Capacity: SOCAMMs use four placements of 16-die stacks of LPDDR5X memory to deliver a 128GB memory module, the highest-capacity LPDDR5X solution available. This is critical for training larger AI models and serving more concurrent users in inference workloads (see the arithmetic sketch after this list).
- Optimized Scalability and Serviceability: The modular design and innovative stacking technology improve serviceability and support liquid-cooled server designs. Enhanced error correction and data center-focused test flows ensure reliability and performance.
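These figures can be sanity-checked with a quick back-of-envelope calculation. The Python sketch below uses only the numbers quoted in the list above; the DDR5 RDIMM outline used for the footprint comparison (roughly 30x133mm) is an assumption for illustration, not a Micron figure.

```python
# Illustrative arithmetic only: module capacity, stack configuration, and the
# 14x90mm outline come from the list above; the RDIMM outline is assumed.

PLACEMENTS = 4            # LPDDR5X placements per SOCAMM (per the list above)
DIES_PER_STACK = 16       # 16-die stacks (per the list above)
MODULE_CAPACITY_GB = 128  # quoted module capacity

# Implied density of each LPDDR5X die in the stack
die_gb = MODULE_CAPACITY_GB / (PLACEMENTS * DIES_PER_STACK)
print(f"Implied die density: {die_gb:.0f} GB ({die_gb * 8:.0f} Gb) per die")

# Footprint comparison: SOCAMM vs. an assumed ~30x133mm DDR5 RDIMM outline
socamm_mm2 = 14 * 90
rdimm_mm2 = 30 * 133      # assumption for comparison, not a Micron figure
print(f"SOCAMM/RDIMM area ratio: {socamm_mm2 / rdimm_mm2:.2f}")  # ~= 1/3
```

The implied 16Gb-per-die density and the roughly one-third area ratio line up with the capacity and form-factor claims above.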
Industry-Leading HBM Solutions
Micron continues to lead the industry with its HBM3E offerings. The Micron HBM3E 12H 36GB provides 50% higher memory capacity than competing 8H 24GB solutions within the same cube form factor, while consuming up to 20% less power. These advancements make Micron’s HBM solutions indispensable for AI workloads that demand both performance and efficiency.
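The capacity figure follows directly from the stack heights. Here is a minimal sketch, assuming the 24Gb DRAM die commonly associated with HBM3E (the announcement does not state per-die density):

```python
# Sanity check of the 50% capacity claim. The 24 Gb (3 GB) per-die density
# is an assumption consistent with publicly discussed HBM3E configurations.

GB_PER_HBM3E_DIE = 3                  # 24 Gb DRAM die (assumed)
cube_8h = 8 * GB_PER_HBM3E_DIE        # 8-high stack  -> 24 GB
cube_12h = 12 * GB_PER_HBM3E_DIE      # 12-high stack -> 36 GB

gain = (cube_12h - cube_8h) / cube_8h
print(f"12H: {cube_12h} GB, 8H: {cube_8h} GB, capacity gain: {gain:.0%}")
```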
Looking ahead, Micron is preparing to launch HBM4, which is expected to deliver over 50% better performance than HBM3E. This innovation will further solidify Micron’s position as a leader in AI memory solutions.
Comprehensive Memory and Storage Portfolio for AI
Micron’s portfolio extends beyond memory to include a range of storage solutions optimized for AI workloads. At GTC 2025, Micron showcased its complete lineup of AI-focused memory and storage products, designed to meet the needs of data centers and edge computing applications. Highlights include:
- High-Performance SSDs: The Micron 9550 NVMe and 7450 NVMe SSDs are included on the GB200 NVL72 recommended vendor list. Additionally, Micron’s PCIe Gen6 SSD demonstrated over 27GB/s of bandwidth in interoperability testing, paving the way for the next generation of flash storage.
- Exascale Storage Solutions: The Micron 6550 ION NVMe SSD, with a capacity of 61.44TB per drive, delivers 44 petabytes of storage per rack, 14GB/s of bandwidth, and 2 million IOPS per drive within a 20-watt power envelope, making it ideal for bleeding-edge AI cluster storage (see the rack-level arithmetic after this list).
- Edge and Automotive Applications: Micron is collaborating with ecosystem partners to deliver innovative solutions for automotive, industrial, and consumer applications. For example, Micron’s LPDDR5X memory is integrated into the NVIDIA DRIVE AGX Orin platform, providing increased performance and bandwidth while reducing power consumption. Micron’s 1β (1-beta) DRAM node ensures enhanced quality, reliability, and longevity, meeting rigorous automotive standards, including operating temperatures ranging from -40°C to 125°C.
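As a rough illustration of the rack-level claim, the sketch below derives the implied drive count and aggregate figures from the numbers quoted for the 6550 ION above. The drive count, total power, and total bandwidth are back-of-envelope derivations using decimal units (1 PB = 1000 TB), not published specifications.

```python
# Back-of-envelope rack math from the quoted 6550 ION figures. The per-rack
# drive count and the aggregate numbers are derived here, not published specs.

DRIVE_TB = 61.44      # capacity per drive (quoted)
RACK_PB = 44          # storage per rack (quoted)
DRIVE_W = 20          # power envelope per drive (quoted)
DRIVE_GBPS = 14       # bandwidth per drive (quoted)

drives = RACK_PB * 1000 / DRIVE_TB    # decimal units: 1 PB = 1000 TB
print(f"Implied drives per rack: ~{drives:.0f}")
print(f"Aggregate SSD power:     ~{drives * DRIVE_W / 1000:.1f} kW")
print(f"Aggregate bandwidth:     ~{drives * DRIVE_GBPS / 1000:.1f} TB/s")
```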
Driving AI Innovation Through Collaboration
Micron’s success in AI memory and storage is rooted in its deep collaborations with ecosystem partners like NVIDIA. By aligning closely with industry leaders, Micron ensures seamless interoperability and delivers optimized solutions tailored to the unique demands of AI workloads. Whether powering AI training in data centers or enabling on-device AI at the edge, Micron’s innovations are driving the future of intelligent computing.