Edgecore Networks Launches Next-Generation 102.4T Data Center Switches for AI/ML Clusters

Edgecore’s New AIS Series Delivers Unrivaled Performance and Energy Efficiency for Hyperscale Environments

As AI models continue to grow exponentially, the underlying network infrastructure must evolve to handle unprecedented data throughput and low latency. Edgecore Networks, a leading provider of open networking technology and solutions, is addressing this challenge with the launch of its next-generation 102.4T data center switches: the AIS1600-64O and AIS800-128O. Powered by Broadcom’s latest 3nm Tomahawk® 6 chips, these platforms are designed to meet the massive bandwidth demands of hyperscale AI/ML clusters and advanced cloud fabrics.

“By utilizing our latest Tomahawk 6 silicon, Edgecore is delivering a high-performance, power-efficient platform to the open networking ecosystem that empowers customers to scale their infrastructure with confidence,” said Hasan Siraj, Vice President of Product Management, Core Switching Group, at Broadcom. This launch marks a pivotal moment in the shift toward 1.6T networking and larger 800G-radix networking, reinforcing Edgecore’s commitment to open networking technology and solutions.

Key Insights at a Glance

  • Unrivaled Performance: The AIS series significantly reduces Job Completion Time (JCT) through hardware-based link failover and intelligent load balancing.
  • Energy Efficiency: Built with advanced 3nm process technology, the AIS series offers a significant reduction in power consumption per gigabit.
  • Open Ecosystem: These switches support a broad ecosystem of open-source and commercial software, ensuring flexibility and innovation.
  • High-Density Connectivity: The AIS800-128O model provides the highest port density available for massive fabric connectivity, reducing network latency.

Why Bandwidth and Latency Are Critical for AI/ML Clusters

In the rapidly evolving landscape of artificial intelligence and machine learning, the demand for high-bandwidth, low-latency networks is more critical than ever. Hyperscale AI/ML clusters require massive data throughput to process and analyze vast amounts of information in real time. Traditional network architectures struggle to keep up, leading to bottlenecks and increased Job Completion Time (JCT). This not only hampers the efficiency of AI models but also increases operational costs and reduces overall performance. For the AI-driven data center of 2026 and beyond, the need for advanced, scalable network solutions is urgent.

Built to Pace the Relentless Demands of AI/ML Infrastructure

Just as a marathon runner must pace themselves to maintain speed over long distances, Edgecore Networks is positioning itself to sustain the relentless demands of AI/ML environments. The new AIS series switches are engineered to provide the necessary bandwidth and low latency, ensuring that data centers can operate efficiently without compromising on performance. By leveraging advanced 3nm process technology, these switches offer significant energy savings, which is crucial for meeting strict power and cooling requirements. This forward-looking approach ensures that data centers can scale to 1.6T performance while maintaining sustainability and operational efficiency.

Edgecore’s AIS Series Redefines Data Center Switching

Edgecore Networks is addressing the bandwidth and latency challenges of hyperscale AI/ML clusters with its new AIS1600-64O and AIS800-128O switches. The AIS1600-64O is a 3RU powerhouse featuring 64 x 1.6T OSFP1600 ports, designed for cutting-edge environments transitioning to 1.6T connectivity. It supports a wide range of breakout configurations, enabling seamless adoption of upcoming 800G NIC generations. The AIS800-128O, a 4RU design, features 128 x 800G OSFP800 ports, providing the highest port density available for massive fabric connectivity and reducing network latency.
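As a quick sanity check on the figures above, both models arrive at the same 102.4T aggregate capacity from different port counts and speeds. The short sketch below works through the arithmetic; the 2 x 800G breakout shown at the end is a hypothetical illustration, not a confirmed configuration from this announcement.

```python
def total_capacity_tbps(ports: int, port_speed_gbps: int) -> float:
    """Aggregate switching capacity in Tb/s given port count and per-port speed."""
    return ports * port_speed_gbps / 1000

# AIS1600-64O: 64 x 1.6T (1600G) OSFP1600 ports
print(total_capacity_tbps(64, 1600))   # 102.4

# AIS800-128O: 128 x 800G OSFP800 ports
print(total_capacity_tbps(128, 800))   # 102.4

# Hypothetical breakout: splitting each 1.6T port into 2 x 800G links
# would expose 64 * 2 = 128 x 800G endpoints at the same total capacity.
print(64 * 2)                          # 128
```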

“By combining Broadcom’s 3nm Tomahawk 6 chips with Edgecore’s world-class system engineering, we are delivering a platform that is not only faster in port speed but significantly more intelligent in traffic engineering and energy-efficient,” said PoWen Tsai, Head of AI/Cloud PLM & Solution Engineering Business Division at Edgecore Networks. This platform is designed to meet the demands of the AI-driven data center of 2026 and onwards, ensuring that customers can scale their infrastructure with confidence.

Future Outlook

The launch of Edgecore’s AIS series is a significant step forward in the evolution of data center networking. As AI models continue to grow in complexity and size, the need for advanced, scalable network solutions will only increase. Edgecore’s commitment to open networking technology and solutions ensures that customers have the flexibility and innovation needed to stay ahead of the curve. The company’s focus on energy efficiency and high-performance will be crucial as data centers strive to meet the demands of the future.

Conclusion

The launch of Edgecore Networks’ AIS series switches marks a pivotal moment in the transition to 1.6T networking and larger 800G-radix networking. For data center operators and AI/ML clusters, this means unparalleled performance, energy efficiency, and flexibility. How is your organization preparing for this shift? Join the conversation in the comments below.

About Edgecore Networks

Edgecore Networks Corporation is a wholly owned subsidiary of Accton Technology Corporation, the leading network ODM. Edgecore Networks delivers wired and wireless networking products and solutions through channel partners and system integrators worldwide for AI/ML, Cloud Data Center, Service Provider, Enterprise, and SMB customers. Edgecore Networks is the leader in open networking, providing a full line of open Wi-Fi access points, packet transponders, cell site gateways, aggregation routers, and 1G, 10G, 25G, 40G, 100G, 400G, 800G, and 1.6T data center switches, and offers the widest choice of commercial and open-source NOS and SDN software.

For more information, visit www.edge-core.com.

Source link: https://www.businesswire.com/
