
Supermicro Introduces Rack-Scale, Plug-and-Play Liquid-Cooled AI SuperClusters to Empower AI

Supermicro, Inc. (NASDAQ: SMCI), a total IT solution provider for AI, cloud, storage, and 5G/edge, has introduced ready-to-deploy liquid-cooled AI data centers. Designed for cloud-native workloads, these SuperCluster data centers accelerate the adoption of generative AI by enterprises worldwide and are optimized for the NVIDIA AI Enterprise software platform for generative AI development and deployment. With Supermicro's 4U liquid cooling technology, NVIDIA's recently introduced Blackwell GPUs deliver 20 PetaFLOPS of AI performance per GPU, providing 4x the AI training performance and 30x the inference performance of the previous generation, with additional cost savings. In line with its first-to-market strategy, Supermicro recently launched a comprehensive product line based on the NVIDIA Blackwell architecture, supporting the new NVIDIA HGX™ B100, B200, and GB200 Grace Blackwell Superchips.

"Supermicro continues to lead the industry in building and deploying AI solutions with rack-level liquid cooling," said Charles Liang, President and CEO of Supermicro. "Our liquid-cooled data center designs come at nearly no added cost, and their consistent reduction in power usage provides additional value to customers. Our solutions are optimized with NVIDIA AI Enterprise software to meet customer needs across a wide range of industries, and our global manufacturing capacity delivers world-scale efficiency. As a result, we can shorten delivery times and supply ready-to-use liquid-cooled or air-cooled compute clusters with NVIDIA HGX H100 and H200, as well as the upcoming B100, B200, and GB200 solutions, faster. From liquid cold plates to CDUs to cooling towers, our comprehensive rack-level liquid cooling solutions can reduce ongoing data center power consumption by up to 40%."

Please visit www.supermicro.com/ai for more information.

At COMPUTEX 2024, Supermicro showcased upcoming systems optimized for NVIDIA Blackwell GPUs, including a 10U air-cooled system based on the NVIDIA HGX B200 and a 4U liquid-cooled system. In addition, Supermicro will offer 8U air-cooled NVIDIA HGX B100 systems, NVIDIA GB200 NVL72 cabinets with 72 GPUs interconnected via NVIDIA NVLink switches, and new NVIDIA MGX™ systems with NVIDIA H200 NVL PCIe GPUs and the newly announced NVIDIA GB200 NVL2 architecture.

"Generative AI is driving a reset of the entire computing stack, with new data centers being compute-accelerated and optimized for AI with GPUs," said Jensen Huang, founder and CEO of NVIDIA. "Supermicro designs state-of-the-art NVIDIA accelerated computing and networking solutions that enable the world's multi-trillion-dollar data centers to be optimized for the AI era."

With the rapid development of large language models (LLMs) and the continuous introduction of open-source models such as Meta's Llama-3 and Mistral's Mixtral 8x22B, it is easier than ever for enterprises to access and use today's most advanced AI models. Simplifying AI infrastructure and providing access in the most cost-effective way is critical to supporting today's rapid AI transformation. Supermicro's cloud-native AI SuperCluster combines the convenience and portability of instant cloud access with NVIDIA AI Enterprise, seamlessly moving AI projects of any size from evaluation to production. This provides the flexibility to run and securely manage data anywhere, whether hosted on-premises or in large data centers.

As enterprises rapidly experiment with generative AI, Supermicro works closely with NVIDIA to ensure a seamless and flexible transition from experimental pilots and evaluation of AI applications to production deployment and large-scale data center AI. This is made possible by rack-level and cluster-level optimization with the NVIDIA AI Enterprise software platform, which smooths the path from initial exploration to scalable AI implementation.

Managed services involve trade-offs in infrastructure choice, data sharing, and generative AI strategy control. NVIDIA NIM microservices, part of NVIDIA AI Enterprise, provide the benefits of both managed generative AI and open-source deployment, without the drawbacks of either. Its versatile inference runtime accelerates the deployment of all types of generative AI, from open-source models to NVIDIA foundation models, through microservices. In addition, NVIDIA NeMo™ enables custom model development through data curation, advanced customization, and retrieval-augmented generation (RAG) for enterprise-grade solutions. Integrated with Supermicro's SuperCluster and NVIDIA AI Enterprise, NVIDIA NIM provides the fastest path to scalable, accelerated generative AI deployment.
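Because NIM microservices expose an OpenAI-compatible HTTP API, an application deployed against a SuperCluster can talk to a local model the same way it would to a hosted endpoint. The sketch below is illustrative only: the local URL and the model name `meta/llama3-8b-instruct` are assumptions, not details from this announcement.

```python
import json

# Minimal sketch, not an official example. Assumptions: a NIM container is
# already running locally and serving its OpenAI-compatible API on the
# default port; the model name below is illustrative.
NIM_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(prompt: str,
                       model: str = "meta/llama3-8b-instruct",
                       max_tokens: int = 128) -> dict:
    """Assemble the JSON body for an OpenAI-style chat-completion call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

body = build_chat_request("Summarize the benefits of liquid cooling in one sentence.")
print(json.dumps(body, indent=2))
# The request itself can be sent with any HTTP client, e.g.:
#   requests.post(NIM_URL, json=body, timeout=60).json()
```

Since the interface follows the OpenAI chat-completion schema, existing client code can usually be pointed at the NIM endpoint by changing only the base URL and model name.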

Supermicro's current generative AI SuperCluster offerings include:

  • Liquid-cooled Supermicro NVIDIA HGX H100/H200 SuperCluster with 256 H100/H200 GPUs as a scalable compute unit in 5 racks (including 1 dedicated networking rack)
  • Air-cooled Supermicro NVIDIA HGX H100/H200 SuperCluster with 256 HGX H100/H200 GPUs as a scalable compute unit in 9 racks (including 1 dedicated networking rack)
  • Supermicro NVIDIA MGX GH200 SuperCluster with 256 GH200 Grace™ Hopper Superchips as a scalable compute unit in 9 racks (including 1 dedicated networking rack)

Supermicro's SuperCluster supports NVIDIA AI Enterprise, including NVIDIA NIM microservices and the NVIDIA NeMo platform, enabling end-to-end generative AI customization. It is optimized for NVIDIA Quantum-2 InfiniBand and the new NVIDIA Spectrum-X Ethernet platform with 400 Gb/s of networking per GPU, and can scale to large compute clusters with tens of thousands of GPUs.
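The per-GPU networking figure makes the fabric scale easy to sanity-check. The short sketch below, assuming the stated 400 Gb/s per GPU, computes the aggregate injection bandwidth of a 256-GPU scalable compute unit:

```python
# Back-of-the-envelope sketch of the fabric bandwidth implied above
# (assumption: a flat 400 Gb/s of network bandwidth per GPU, as stated).
def aggregate_bandwidth_tbps(num_gpus: int, gbps_per_gpu: float = 400) -> float:
    """Total injection bandwidth of the cluster fabric in Tb/s."""
    return num_gpus * gbps_per_gpu / 1000  # 1 Tb/s = 1000 Gb/s

# One 256-GPU scalable compute unit: 256 * 400 Gb/s = 102.4 Tb/s
print(aggregate_bandwidth_tbps(256))  # -> 102.4
```

At the "tens of thousands of GPUs" scale mentioned above, the same arithmetic puts the aggregate fabric into the multi-petabit-per-second range, which is why the dedicated networking rack and switch fabric are part of each scalable unit.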

Supermicro's upcoming SuperCluster offerings include:

  • Supermicro NVIDIA HGX B200 SuperCluster, liquid-cooled
  • Supermicro NVIDIA HGX B100/B200 SuperCluster, air-cooled
  • Supermicro NVIDIA GB200 NVL72 or NVL36 SuperCluster, liquid-cooled

Supermicro's SuperCluster solutions are optimized for LLM training, deep learning, and large-batch, high-volume inference. Supermicro's L11 and L12 validation testing and on-site deployment services provide a seamless experience for customers, who receive plug-and-play scalable units that can be easily deployed in the data center for faster time to value.


Plug-and-play liquid-cooled AI SuperCluster
