Katie Rivera of One Stop Systems explores how rack-scale composable infrastructure can be utilized for mixed workload data centers.
Last week, OSS unveiled the newest version of its rack-scale GPU accelerator products, the GPUltima-CI (Composable Infrastructure), at the NVIDIA GPU Technology Conference (GTC 2018). GPUltima-CI allows mixed-use data centers to greatly increase GPU, networking, and storage resource utilization compared with similar hyperconverged server solutions.
The GPUltima-CI power-optimized rack configuration features up to 32 dual Intel Xeon Scalable Architecture compute nodes, 64 network adapters, 48 NVIDIA Volta GPUs, and 32 NVMe drives on a 128Gb PCIe switched fabric, allowing for a large number of composable server configurations per rack. Using one or many racks, the OSS solution contains the resources needed to compose a wide variety of combinations of GPU, NIC, and storage required in today's mixed workload data center.
OSS this month released its flagship artificial intelligence (AI) transportable compute server, the Rigel Edge Supercomputer, the company announced.
With a dense form factor, the Rigel is designed for deployment in "tight spaces" at the edge, such as the equipment bay of an autonomous vehicle, within mobile command centers, under helicopter seats, or in an aircraft equipment bay.
Bridging the gap between sensors and high-performance compute power is a growing challenge, especially in systems where quick, complex decisions are vital.
The concept of "the edge" in embedded systems has taken on new urgency in the federal space. "The edge" can be defined as "where it's happening" and in government systems, that's the field.