OSS-AMPERE8

  • The OSS-AMPERE8 is the newest and most powerful PCIe 4.0 AI training and inference platform, supporting NVIDIA’s HGX A100 baseboard with 600 GB/s of bidirectional NVLink 3.0 peer-to-peer GPU connectivity to tackle the largest AI models. Supporting eight of the latest NVIDIA A100 GPUs, the OSS-AMPERE8 delivers 2.5 petaFLOPS of TensorFloat-32 (TF32) performance for AI training and 10 petaOPS of INT4 performance for inference on sparse neural networks, serving the most demanding HPC applications. The OSS-AMPERE8 is fully NVIDIA GPU Cloud (NGC) certified and ships with deep learning frameworks, popular pre-trained models, and GPU management and monitoring software preinstalled, giving data scientists the fastest time to actionable intelligence. The GPU-accelerated server also includes dual future-generation Intel® Xeon® Scalable processors and DDR4 memory scalable to 6TB. Ten PCIe Gen4 x16 slots are available for additional expansion and for scale-out, creating a balanced 1:1 GPU-to-network ratio when using InfiniBand or high-speed Ethernet networking. The appliance includes six 2.5” removable NVMe drive bays.

  • Features:

    • 4U Chassis
    • Dual Future Generation Intel® Xeon® Scalable processors
    • Up to 6TB DDR4 LRDIMM System Memory
    • Six 2.5” NVMe SSD drive bays
    • Eight NVIDIA Ampere A100 SXM4 GPUs with 600 GB/s NVLink 3.0
    • Ten x16 PCIe 4.0 slots
    • Four 2200W Titanium Power Supplies
    • GPU Management and Monitoring pre-installed (see the monitoring sketch after this list)
    • Software bundle pre-installed
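
As a quick illustration of the preinstalled GPU management and monitoring layer, the sketch below uses NVIDIA's NVML interface via the pynvml Python bindings to enumerate the installed GPUs, report utilization and memory use, and count the active NVLink links on each device. This is a minimal, hypothetical sketch rather than OSS's bundled tooling; it assumes the nvidia-ml-py (pynvml) package is available on the system.

    # Minimal monitoring sketch -- assumes the nvidia-ml-py (pynvml) bindings
    # are installed; this is not OSS's bundled management software.
    import pynvml

    pynvml.nvmlInit()
    try:
        count = pynvml.nvmlDeviceGetCount()          # expect 8 A100 SXM4 GPUs
        for i in range(count):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            name = pynvml.nvmlDeviceGetName(handle)
            if isinstance(name, bytes):              # older bindings return bytes
                name = name.decode()
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
            util = pynvml.nvmlDeviceGetUtilizationRates(handle)

            # Count NVLink links reporting an enabled state; link indices the
            # hardware does not expose raise NVMLError and are skipped.
            active_links = 0
            for link in range(pynvml.NVML_NVLINK_MAX_LINKS):
                try:
                    state = pynvml.nvmlDeviceGetNvLinkState(handle, link)
                    if state == pynvml.NVML_FEATURE_ENABLED:
                        active_links += 1
                except pynvml.NVMLError:
                    continue

            print(f"GPU {i}: {name}, util {util.gpu}%, "
                  f"mem {mem.used / 2**30:.1f}/{mem.total / 2**30:.1f} GiB, "
                  f"active NVLink links: {active_links}")
    finally:
        pynvml.nvmlShutdown()

On an HGX A100 baseboard, each A100 should report twelve active NVLink 3.0 links, which together provide the 600 GB/s of bidirectional peer-to-peer bandwidth quoted above.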
