GPUltima

A Petaflop-in-a-Rack networked GPU cluster, the GPUltima has 10 times more cores, 90% less power, and 95% less space* than other petaflop compute solutions. OSS can provide subsets of the GPUltima depending on customer needs.

*Versus traditional 1-petaflop clusters; based on HPC 500 listing/data.

The GPUltima is a high-density, fully integrated computer cluster that is purpose-built for high performance computing (HPC) applications such as financial trading, deep learning and machine learning, oil and gas exploration, virtual desktop infrastructure (VDI), defense and security, and academic research. Where conventional computer clusters use CPUs as the primary data processor, the GPUltima employs large numbers of GPU cards, providing 10 times the performance by adding thousands more cores. In addition, the GPUltima consumes about 90% less power than conventional systems and occupies 95% less rack space. The petaflop edition of the GPUltima fits in a single 42U rack and draws only 56kW of power.
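To illustrate how work shifts from the host CPU to the thousands of cores on a GPU card, the minimal CUDA sketch below (an illustration only, not OSS software) launches one lightweight GPU thread per array element instead of looping on the CPU.

#include <stdio.h>
#include <cuda_runtime.h>

// Illustration only: each of the N elements is handled by its own GPU thread,
// the basic pattern that lets a GPU apply thousands of cores to one problem.
__global__ void vector_add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;                     // 1M elements
    size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);              // unified memory keeps the sketch short
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);

    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks  = (n + threads - 1) / threads; // enough blocks to cover all elements
    vector_add<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f (expected 3.0)\n", c[0]);

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}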

The GPUltima is completely 'application-ready': customers simply load their application software onto the servers and the system is ready to begin processing. The cluster management and monitoring software, together with the service and support packages that accompany the GPUltima, make it a user-friendly system that lets customers start working without having to configure the cluster themselves.

The GPUltima is composed of 'compute nodes'. Each compute node contains sixteen GPU cards and one or two dual-socket servers, each with two 'Broadwell' processors, producing 139.8 TFlops of performance using NVIDIA Tesla K80s. The sixteen GPU cards communicate with each other over EDR 100Gb/s InfiniBand links via a 1U InfiniBand switch, and the server(s) are cabled to the GPU enclosure over 128Gb/s PCIe. The servers communicate externally over Ethernet. Additional nodes can be added as needed; when more than one server is used, an Ethernet switch is added for external communication.
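As a hedged sketch of how an application sees such a node, the short CUDA host program below (illustrative only, not OSS software) enumerates the GPUs visible to one server and prints their name, streaming multiprocessor count, memory, and PCI location. Note that a Tesla K80 card contains two GPU devices, so sixteen cards typically appear as thirty-two CUDA devices.

#include <stdio.h>
#include <cuda_runtime.h>

// Illustration only: list the CUDA devices one server in a GPUltima node can see.
// Each Tesla K80 card exposes two GPU devices, so counts here are devices, not cards.
int main(void)
{
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("Visible CUDA devices: %d\n", count);

    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("  device %2d: %s, %d SMs, %.1f GiB, PCI %02x:%02x\n",
               i, prop.name, prop.multiProcessorCount,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
               prop.pciBusID, prop.pciDeviceID);
    }
    return 0;
}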

The clustering software allows complete manageability of the nodes as well as the individual GPU cards. The GPU monitoring and management software provides 'single-pane-of-glass' management of the hardware, the operating system, the HPC software, and users. With the Cluster Manager, system administrators can quickly get clusters up and running and keep them running reliably throughout their lifecycle, all with the ease of a full-featured, enterprise-grade cluster manager.
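Monitoring of this kind ultimately rests on per-GPU telemetry such as that exposed by NVIDIA's NVML library. As a hedged illustration (not the GPUltima's own management software), the sketch below polls each device's utilization, temperature, and power draw, roughly the data a 'single-pane-of-glass' dashboard aggregates across nodes.

#include <stdio.h>
#include <nvml.h>   // link with -lnvidia-ml

// Illustration only: print the per-GPU health metrics a cluster monitor would aggregate.
int main(void)
{
    if (nvmlInit() != NVML_SUCCESS) {
        fprintf(stderr, "NVML initialization failed\n");
        return 1;
    }

    unsigned int count = 0;
    nvmlDeviceGetCount(&count);

    for (unsigned int i = 0; i < count; ++i) {
        nvmlDevice_t dev;
        char name[NVML_DEVICE_NAME_BUFFER_SIZE];
        unsigned int temp = 0, power_mw = 0;
        nvmlUtilization_t util = {0, 0};

        nvmlDeviceGetHandleByIndex(i, &dev);
        nvmlDeviceGetName(dev, name, sizeof(name));
        nvmlDeviceGetTemperature(dev, NVML_TEMPERATURE_GPU, &temp);
        nvmlDeviceGetPowerUsage(dev, &power_mw);           // reported in milliwatts
        nvmlDeviceGetUtilizationRates(dev, &util);         // percent busy

        printf("GPU %2u %-24s  util %3u%%  mem %3u%%  %3u C  %6.1f W\n",
               i, name, util.gpu, util.memory, temp, power_mw / 1000.0);
    }

    nvmlShutdown();
    return 0;
}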

GPUltima Diagram

GPUltima One Node

6U Rack System with 139.8 TFlops of Networked GPU compute power.

 
 

When one node is deployed, the 16 GPU cards communicate through a combination of EDR 100Gb/s InfiniBand and 128Gb/s PCIe.
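Within a node, GPU-to-GPU traffic over the PCIe fabric can use direct peer-to-peer transfers where the topology allows it. The hedged sketch below (illustrative only, not OSS software) checks which device pairs on one server report peer access, a quick way to see how the PCIe side of the fabric is wired; inter-node traffic travels over EDR InfiniBand instead.

#include <stdio.h>
#include <cuda_runtime.h>

// Illustration only: report which GPU pairs on this server can use direct
// PCIe peer-to-peer access.
int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int a = 0; a < count; ++a) {
        for (int b = 0; b < count; ++b) {
            if (a == b) continue;
            int can = 0;
            cudaDeviceCanAccessPeer(&can, a, b);   // 1 if device a can map device b's memory
            printf("GPU %d -> GPU %d : peer access %s\n", a, b, can ? "yes" : "no");
        }
    }
    return 0;
}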

 Download the Datasheet
 Download the Brochure

 

PRODUCT HIGHLIGHTS

  • 6U Rackspace

  • 16 GPU cards

  • EDR 100Gb Infiniband Switch

  • GPU Cluster Management and Monitoring

  • 24/7 Remote Management Support

 

GPUltima Two Node

12U Rack System with 279.6 TFlops of Networked GPU compute power.

 
 

When two nodes are deployed, the 32 GPU cards communicate through a combination of EDR 100Gb/s InfiniBand and 128Gb/s PCIe. All servers are connected to an Ethernet switch for outside communication.

 Download the Datasheet
 Download the Brochure

 

PRODUCT HIGHLIGHTS

  • 12U Rackspace

  • 32 GPU cards

  • EDR 100Gb Infiniband Switch

  • 52-Port Ethernet 1Gb Managed Switch

  • GPU Cluster Management and Monitoring

  • 24/7 Remote Management Support

 

GPUltima Three Node

17U Rack System with 419.4 TFlops of Networked GPU compute power.

 
 

When three nodes are deployed, the 48 GPU cards communicate through a combination of EDR 100Gb/s InfiniBand and 128Gb/s PCIe. All servers are connected to an Ethernet switch for outside communication.

 Download the Datasheet
 Download the Brochure

 

PRODUCT HIGHLIGHTS

  • 17U Rackspace

  • 48 GPU cards

  • EDR 100Gb Infiniband Switch

  • 52-Port Ethernet 1Gb Managed Switch

  • GPU Cluster Management and Monitoring

  • 24/7 Remote Management Support

 

GPUltima Four Node

22U Rack System with 559.2 TFlops of Networked GPU compute power.

 
 

When four nodes are deployed, the 64 GPU cards communicate through a combination of EDR 100Gb/s InfiniBand and 128Gb/s PCIe. All servers are connected to an Ethernet switch for outside communication.

 Download the Datasheet
 Download the Brochure

 

PRODUCT HIGHLIGHTS

  • 22U Rackspace

  • 64 GPU cards

  • EDR 100Gb Infiniband Switch

  • 52-Port Ethernet 1Gb Managed Switch

  • GPU Cluster Management and Monitoring

  • 24/7 Remote Management Support