VeriSilicon’s Scalable GPGPU-AI Computing IPs Drive AI Innovation in Automotive and Edge Server Solutions

VeriSilicon (688521.SH) has unveiled the latest advancements in its high-performance, scalable GPGPU-AI computing IPs, empowering next-generation automotive electronics and edge server applications. These cutting-edge IPs combine programmable parallel computing with a dedicated Artificial Intelligence (AI) accelerator to deliver exceptional computing density for demanding AI workloads. From Large Language Model (LLM) inference and multimodal perception to real-time decision-making, VeriSilicon’s solutions are designed to excel in thermally and power-constrained environments, making them ideal for automotive and edge computing use cases.

At the heart of these innovations lies a robust General Purpose Graphics Processing Unit (GPGPU) architecture integrated with a specialized AI accelerator. This combination delivers unparalleled computing capabilities while maintaining flexibility for diverse AI applications. The programmable AI accelerator and sparsity-aware computing engine accelerate transformer-based and matrix-intensive models through advanced scheduling techniques, ensuring optimal performance for complex AI tasks.
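
As a toy illustration of the general idea behind a sparsity-aware engine (this is not VeriSilicon's implementation; the block size, layout, and pruning pattern are assumptions), the sketch below skips all-zero weight blocks in a matrix multiply, which is where the savings on pruned, matrix-intensive models come from.

```python
# Illustrative only: a toy block-sparse matmul showing the general idea behind
# sparsity-aware engines (skip all-zero blocks instead of multiplying them).
# Block size and pruning pattern are assumptions, not product details.
import numpy as np

def block_sparse_matmul(a, b, block=64):
    """Multiply a @ b, skipping blocks of `a` that are entirely zero."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2
    out = np.zeros((m, n), dtype=a.dtype)
    for i in range(0, m, block):
        for j in range(0, k, block):
            a_blk = a[i:i + block, j:j + block]
            if not a_blk.any():          # sparsity check: skip zero blocks
                continue
            out[i:i + block, :] += a_blk @ b[j:j + block, :]
    return out

# Example: a weight matrix with half its blocks zeroed, as pruning might produce
rng = np.random.default_rng(0)
a = rng.standard_normal((256, 256)).astype(np.float32)
a[:, 128:] = 0.0                          # prune half the columns to zero
b = rng.standard_normal((256, 256)).astype(np.float32)
assert np.allclose(block_sparse_matmul(a, b), a @ b, atol=1e-3)
```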

Unmatched Scalability and Memory Integration

One of the standout features of VeriSilicon’s GPGPU-AI computing IPs is their ability to scale seamlessly across multiple chips and cards. This multi-chip, multi-card scale-out capability delivers system-level scalability for large-scale AI deployments. Additionally, the IPs leverage 3D-stacked memory integration and high-bandwidth interfaces such as LPDDR5X, HBM, PCIe Gen5/Gen6, and CXL, ensuring efficient data transfer and processing even for the most compute-intensive workloads.
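
To illustrate the kind of workload partitioning that multi-chip and multi-card scale-out enables, here is a minimal column-parallel sharding sketch; the device count, sharding scheme, and gather step are assumptions for illustration and do not describe the IP's actual interconnect or runtime.

```python
# Illustrative only: column-parallel sharding of a linear layer across N "devices",
# the kind of partitioning that multi-chip/multi-card scale-out enables.
# Device count and sharding scheme are assumptions for the sketch.
import numpy as np

def sharded_linear(x, weight, num_devices=4):
    """Split the weight matrix column-wise across devices, compute locally,
    then gather the partial outputs (the step that stresses chip-to-chip links)."""
    shards = np.array_split(weight, num_devices, axis=1)    # one shard per device
    partial_outputs = [x @ w_shard for w_shard in shards]   # local compute per device
    return np.concatenate(partial_outputs, axis=1)           # gather over the fabric

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 512)).astype(np.float32)
w = rng.standard_normal((512, 2048)).astype(np.float32)
assert np.allclose(sharded_linear(x, w), x @ w, atol=1e-3)
```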

The IPs also support a broad spectrum of mixed-precision computing formats, including INT4/8, FP4/8, BF16, FP16/32/64, and TF32, providing developers with the flexibility to optimize performance and energy efficiency based on specific application requirements. This versatility ensures that VeriSilicon’s solutions can handle everything from lightweight edge computing to heavy-duty AI training and inference tasks.
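
As a hedged example of what selecting one of these precisions looks like at the framework level, the snippet below runs a placeholder PyTorch model under BF16 autocast; the model, device type, and sizes are arbitrary and not tied to VeriSilicon's toolchain.

```python
# Illustrative only: choosing a lower-precision format (here BF16 via PyTorch autocast)
# to trade accuracy for throughput. The model and device are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024)).eval()
x = torch.randn(16, 1024)

with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = model(x)            # matmuls run in BF16, reducing memory traffic
print(y.dtype)              # torch.bfloat16
```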

Native Support for Leading AI Frameworks

To streamline development and deployment, VeriSilicon’s GPGPU-AI computing IPs offer native support for popular AI frameworks such as PyTorch, TensorFlow, ONNX, and TVM. These frameworks enable seamless integration into existing workflows, supporting both training and inference operations. Furthermore, the IPs are compatible with General Purpose Computing Language (GPCL), which aligns with mainstream GPGPU programming languages and widely used compilers. This compatibility ensures that VeriSilicon’s solutions meet the rigorous demands of today’s leading LLMs, including models like DeepSeek.
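
As an illustrative hand-off into such a flow, the sketch below exports a placeholder PyTorch model to ONNX, one of the supported interchange formats; the model, file name, and tensor names are hypothetical, and any VeriSilicon-specific compilation step is not shown.

```python
# Illustrative only: exporting a PyTorch model to ONNX, a common hand-off format
# when targeting an accelerator backend. File name and model are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 10)).eval()
dummy_input = torch.randn(1, 768)

torch.onnx.export(
    model,
    dummy_input,
    "classifier.onnx",            # placeholder file name
    input_names=["features"],
    output_names=["logits"],
    dynamic_axes={"features": {0: "batch"}, "logits": {0: "batch"}},
)
```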

“The demand for AI computing on edge servers—for both inference and incremental training—is growing exponentially,” said Weijin Dai, Chief Strategy Officer, Executive Vice President, and General Manager of the IP Division at VeriSilicon. “This surge requires not only high efficiency but also strong programmability. Our GPGPU-AI computing processors are architected to tightly integrate GPGPU computing with AI accelerators at fine-grained levels. The advantages of this architecture have already been validated in multiple high-performance AI computing systems.”

Optimized for Mixture-of-Experts Models

Recent breakthroughs in AI, particularly in Mixture-of-Experts (MoE) models, have highlighted the need for maximized computing efficiency to handle increasingly demanding workloads. VeriSilicon’s latest GPGPU-AI computing IPs have been specifically enhanced to support MoE models, optimizing inter-core communication and leveraging the abundant bandwidth provided by 3D-stacked memory technologies. Through close collaboration with leading AI computing customers, VeriSilicon has extended its architecture to fully exploit these advanced memory solutions, ensuring superior performance and energy efficiency.
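
To make the communication pattern concrete, here is a minimal top-k MoE routing sketch (expert count, top-k, and dimensions are assumed values): each token is dispatched to a few experts and the results are combined, and it is this dispatch/combine step that stresses inter-core links and memory bandwidth.

```python
# Illustrative only: top-k expert routing in a Mixture-of-Experts layer, showing why
# tokens fan out to different experts (and hence across cores/chips).
# Expert count, top_k, and dimensions are assumptions for the sketch.
import torch
import torch.nn as nn
import torch.nn.functional as F

num_experts, top_k, d_model = 8, 2, 256
router = nn.Linear(d_model, num_experts)
experts = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(num_experts))

tokens = torch.randn(32, d_model)                      # a batch of token embeddings
gate_logits = router(tokens)                           # route each token
weights, expert_ids = gate_logits.topk(top_k, dim=-1)  # pick top-k experts per token
weights = F.softmax(weights, dim=-1)

output = torch.zeros_like(tokens)
for e in range(num_experts):
    token_idx, slot = (expert_ids == e).nonzero(as_tuple=True)  # tokens routed to expert e
    if token_idx.numel() == 0:
        continue
    # In a real system this dispatch/combine step is the inter-core communication
    output[token_idx] += weights[token_idx, slot].unsqueeze(-1) * experts[e](tokens[token_idx])
```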

Driving Real-World Adoption Across Industries

VeriSilicon’s commitment to innovation extends beyond hardware and software design. The company actively collaborates with ecosystem partners to drive the mass adoption of its advanced AI computing capabilities. By aligning with industry leaders, VeriSilicon ensures that its solutions remain at the forefront of technological advancements, addressing the evolving needs of automotive and edge server applications.

For the automotive sector, VeriSilicon’s GPGPU-AI computing IPs enable real-time perception, decision-making, and predictive maintenance, all critical for autonomous driving and advanced driver-assistance systems (ADAS). In edge server environments, these IPs facilitate efficient AI inference and incremental training, supporting applications such as smart cities, industrial automation, and IoT devices.

A Future-Proof Solution for AI Challenges

As AI continues to reshape industries, the demand for high-performance, scalable computing solutions will only intensify. VeriSilicon’s GPGPU-AI computing IPs address this challenge head-on by delivering a future-proof architecture that combines programmability, scalability, and energy efficiency. Whether it’s accelerating LLMs, enhancing multimodal perception, or enabling real-time decision-making, these IPs provide the foundation for next-generation AI applications.

By integrating multi-chip scaling, 3D-stacked memory, and advanced scheduling techniques, VeriSilicon ensures that its solutions remain adaptable to the ever-changing landscape of AI workloads. The company’s focus on ecosystem collaboration further solidifies its position as a leader in AI computing, paving the way for widespread adoption of its technologies.

About VeriSilicon

VeriSilicon is committed to providing customers with platform-based, all-around, one-stop custom silicon services and semiconductor IP licensing services leveraging its in-house semiconductor IP. For more information, please visit: www.verisilicon.com
