Preservation of VPX Investment and AI Transportable GPU Platforms

May 24, 2022

By Jim Reardon, One Stop Systems

The OpenVPX standards occupy a special place in deployed military compute platforms. Designed as a successor to VME64, OpenVPX (ANSI/VITA 65) and related standards such as SOSA offer the opportunity to package COTS systems that meet the challenging environmental and electrical requirements of military vehicles across all services. AI Transportable systems incorporate GPUs and switched fabrics that were not envisioned by the existing standard, threatening to force significant new investment in product development before next-generation systems such as C4ISR can be deployed.

AI Transportable systems, such as One Stop Systems' Rigel, achieve extraordinary compute performance through multiple GPUs and new-generation switched fabrics such as PCIe Gen4 and NVIDIA® NVLink™. Today, these elements do not map well onto the legacy standard. Among the significant challenges are power supply, conduction cooling, and an OpenVPX backplane definition that constrains PCIe lane capacity. To preserve the significant industry and government investment in OpenVPX products, a hybrid solution is needed: one that delivers the performance benefits of GPUs, such as NVIDIA® A100 Tensor Core GPUs, in next-generation deployments.

With the introduction of Rigel, we can now offer product extensions that support existing OpenVPX and SOSA-compliant sensor I/O while providing direct access to the performance of Rigel GPUs where it is useful for new and more demanding applications. Building on OSS experience with PCIe Gen4 (and now Gen5) expansion, such a combination is architecturally feasible: it reduces sensor latency (with the potential to bypass host memory buffers entirely) while preserving two decades of industry and government investment in existing SOSA interface solutions.

At the heart of a Rigel system is a PCIe Gen4 switched fabric with the capacity to support the NVIDIA HGX™ module, which itself consists of four NVIDIA® A100 GPUs. Each GPU features an external 16-lane PCIe connection as well as a private NVIDIA NVLink connection to the other GPUs. Rigel manages this complexity with a versatile PCIe switched fabric and related management software that allows dynamic or fixed lane routing between GPUs, hosts, memory, and I/O according to application demands. The PCIe requirements of SOSA-compliant accessories in VPX format are met by extending the PCIe Gen4 Host Bus Adapter expansion technology developed by OSS.
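To make the lane-routing idea concrete, the sketch below shows one way a fixed fabric partition could be described in software. It is illustrative only: the structure names, port numbers, and endpoint labels are assumptions for this example, not the actual Rigel management interface. It simply shows downstream switch ports being assigned to the HGX GPUs, the host root port, and VPX payload (sensor I/O) slots.

```cpp
// Illustrative sketch of a fixed lane-routing table for a hypothetical
// PCIe Gen4 switched fabric.  Names and port numbers are assumptions,
// not the actual Rigel management software.
#include <cstdio>

enum class Endpoint { HostRootPort, HgxGpu0, HgxGpu1, HgxGpu2, HgxGpu3, VpxPayloadSlot };

struct PortAssignment {
    int      switchPort;  // downstream port on the hypothetical fabric switch
    Endpoint device;      // endpoint attached to that port
    int      lanes;       // PCIe Gen4 lanes allocated to the port
};

// One possible fixed partition: each A100 gets a x16 link, the host root
// port gets x16, and two x8 links feed VPX payload (sensor I/O) slots.
static const PortAssignment kPartition[] = {
    { 0, Endpoint::HgxGpu0,        16 },
    { 1, Endpoint::HgxGpu1,        16 },
    { 2, Endpoint::HgxGpu2,        16 },
    { 3, Endpoint::HgxGpu3,        16 },
    { 4, Endpoint::HostRootPort,   16 },
    { 5, Endpoint::VpxPayloadSlot,  8 },
    { 6, Endpoint::VpxPayloadSlot,  8 },
};

int main() {
    int totalLanes = 0;
    for (const auto& p : kPartition) totalLanes += p.lanes;
    std::printf("Fixed partition allocates %d PCIe Gen4 lanes across %zu ports\n",
                totalLanes, sizeof(kPartition) / sizeof(kPartition[0]));
    return 0;
}
```

In a dynamic configuration, the management software would re-issue a table like this at runtime, re-routing lanes between GPUs, host, and sensor slots as the application mix changes.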

Figure 1 – Conceptual Hybrid Rigel

Through OpenVPX extensions to Rigel, new levels of low-latency sensor acquisition become possible. Sensor data can be written directly to GPU memory without transiting host memory, unlocking new levels of sensor bandwidth while reducing latency. In this architecture, much of the host processor's role shifts to the GPUs, which offer greater computing power, flexible data formats, and, of course, parallelism.
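As a rough illustration of transfers that bypass host memory, the sketch below uses the standard CUDA runtime peer-to-peer API to copy a buffer directly between two GPUs over the PCIe/NVLink fabric, with no host staging buffer. It is a stand-in for the sensor-to-GPU path described above (a VPX sensor card would rely on a DMA mechanism in its own driver rather than these calls); the device indices and buffer size are arbitrary.

```cuda
// Minimal sketch: device-to-device copy that bypasses host memory,
// using the CUDA runtime peer-to-peer API.  Device indices and buffer
// size are arbitrary; error handling is reduced to a single macro.
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

#define CHECK(call)                                                   \
    do {                                                              \
        cudaError_t err = (call);                                     \
        if (err != cudaSuccess) {                                     \
            std::fprintf(stderr, "%s failed: %s\n", #call,            \
                         cudaGetErrorString(err));                    \
            std::exit(1);                                             \
        }                                                             \
    } while (0)

int main() {
    const int src = 0, dst = 1;      // two GPUs on the switched fabric
    const size_t bytes = 64 << 20;   // 64 MiB test buffer

    int ab = 0, ba = 0;
    CHECK(cudaDeviceCanAccessPeer(&ab, src, dst));
    CHECK(cudaDeviceCanAccessPeer(&ba, dst, src));
    if (!ab || !ba) {
        std::printf("Peer access not available between GPU %d and GPU %d\n", src, dst);
        return 0;
    }

    void *srcBuf = nullptr, *dstBuf = nullptr;
    CHECK(cudaSetDevice(src));
    CHECK(cudaDeviceEnablePeerAccess(dst, 0));
    CHECK(cudaMalloc(&srcBuf, bytes));

    CHECK(cudaSetDevice(dst));
    CHECK(cudaDeviceEnablePeerAccess(src, 0));
    CHECK(cudaMalloc(&dstBuf, bytes));

    // The copy moves over PCIe or NVLink directly; no host buffer is involved.
    CHECK(cudaMemcpyPeer(dstBuf, dst, srcBuf, src, bytes));
    CHECK(cudaDeviceSynchronize());

    std::printf("Copied %zu bytes GPU %d -> GPU %d without touching host memory\n",
                bytes, src, dst);

    CHECK(cudaFree(dstBuf));
    CHECK(cudaSetDevice(src));
    CHECK(cudaFree(srcBuf));
    return 0;
}
```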

In AI Transportables, GPUs and switched fabrics will play a key role in lifting legacy application performance limits. By combining Rigel with OSS host bus adapter and extender technology, the life of legacy SOSA interfaces can be extended. With the managed switch fabric at the heart of these applications, even the GPU elements can be replaced with newer generations while other elements of the system are preserved for reuse.

If you would like to learn more about hybrid Rigel, please be in touch!
