
Rack Scale Composable Infrastructure for Mixed Workload Data Centers

April 03, 2018


Katie Rivera of One Stop Systems explores how rack-scale composable infrastructure can be used in mixed-workload data centers.

To read the article, click here.

Last week, OSS unveiled the newest version of its rack-scale GPU accelerator products, the GPUltima-CI (Composable Infrastructure), at the NVIDIA GPU Technology Conference (GTC 2018). GPUltima-CI allows mixed-use data centers to greatly increase GPU, networking, and storage resource utilization compared to similar hyperconverged server solutions.

The GPUltima-CI power-optimized rack configuration features up to 32 dual Intel Xeon Scalable Architecture compute nodes, 64 network adapters, 48 NVIDIA Volta GPUs, and 32 NVMe drives on a 128Gb PCIe switched fabric, allowing a large number of composable server configurations per rack. Using one or many racks, the OSS solution contains the resources needed to compose a wide variety of combinations of GPU, NIC, and storage resources required in today's mixed-workload data center.
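The idea behind composing servers from shared rack pools can be sketched as a simple resource allocator. This is a minimal illustration in Python, not OSS's actual management software: the `ComposableRack` class and its method names are hypothetical, though the pool sizes match the rack described above.

```python
# Toy model of composable-infrastructure resource pooling.
# Pool sizes match the GPUltima-CI rack described above; the
# ComposableRack class and its methods are illustrative only.

class ComposableRack:
    def __init__(self, nodes=32, gpus=48, nics=64, nvme=32):
        self.free = {"node": nodes, "gpu": gpus, "nic": nics, "nvme": nvme}
        self.servers = []

    def compose(self, name, node=1, gpu=0, nic=0, nvme=0):
        """Carve a logical server out of the rack's free pools."""
        want = {"node": node, "gpu": gpu, "nic": nic, "nvme": nvme}
        if any(self.free[k] < v for k, v in want.items()):
            return False  # not enough free resources in this rack
        for k, v in want.items():
            self.free[k] -= v
        self.servers.append((name, want))
        return True

    def decompose(self, name):
        """Return a logical server's resources to the pools for reuse."""
        for i, (n, want) in enumerate(self.servers):
            if n == name:
                for k, v in want.items():
                    self.free[k] += v
                del self.servers[i]
                return True
        return False

rack = ComposableRack()
rack.compose("training", node=2, gpu=8, nic=4, nvme=4)
rack.compose("storage", node=1, nic=2, nvme=8)
print(rack.free["gpu"])  # GPUs still free for other workloads
```

When a workload finishes, `decompose` returns its devices to the shared pools, which is the utilization advantage over hyperconverged nodes whose fixed GPU and storage ratios sit idle between jobs.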




