By Jim Reardon, One Stop Systems
The OpenVPX standards occupy a special place in deployed military compute platforms. Designed as a successor to VME64, OpenVPX (ANSI/VITA 65) and related standards such as SOSA have offered the opportunity to package COTS systems that meet the challenging environmental and electrical requirements of military vehicles across all services. AI Transportable systems incorporate GPUs and switched fabrics that were not envisioned by the existing standard, and they threaten to force significant new investment in product development before next-generation C4ISR systems can be deployed.
AI Transportable systems such as One Stop Systems' Rigel achieve extraordinary compute performance through the use of multiple GPUs and new-generation switched fabrics such as PCIe Gen 4 and NVIDIA® NVLink. Today, these elements do not map well onto the legacy standard. Among the significant challenges are power supply, conduction cooling, and the OpenVPX backplane definition, which constrains PCIe lane capacity. To preserve the significant industry and government investment in OpenVPX products, a hybrid solution is needed, one that delivers the performance benefits of GPUs such as the NVIDIA® A100 Tensor Core GPU in next-generation deployments.
With the introduction of Rigel, we can now offer product extensions that support existing OpenVPX and SOSA-compliant sensor I/O while providing direct access to the performance of Rigel's GPUs where it is useful for new and more demanding applications. Building on OSS experience with PCIe Gen 4 (and now Gen 5) expansion, such a combination is now architecturally feasible: it reduces sensor latency (with the potential to bypass host memory buffers entirely) while preserving two decades of industry and government investment in existing SOSA interface solutions.
At the heart of a Rigel system is a PCIe Gen 4 switched fabric with the capacity to support the NVIDIA HGX™ module, which itself consists of four NVIDIA® A100 GPUs. Each GPU features an external 16-lane PCIe connection as well as a private NVIDIA NVLink connection to the other GPUs. Rigel manages this complexity with a versatile PCIe switched fabric and related management software that allow dynamic or fixed lane routing between GPUs, hosts, memory, and I/O according to application demands. The PCIe requirements of SOSA-compliant accessories in VPX format are met by extending the PCIe Gen 4 host bus adapter expansion technology developed by OSS.
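To make this concrete, the sketch below uses the CUDA runtime API to enumerate GPU pairs and report whether each pair can exchange data directly, over NVLink or through the PCIe switch, without staging through host memory. This is a minimal, hypothetical illustration of fabric topology discovery, not Rigel's management software; device counts, numbering, and link rankings vary by platform.

```cpp
// peer_topology.cu -- sketch: discover which GPU pairs can exchange data
// directly over the PCIe switched fabric or NVLink, without host staging.
// Illustrative only; assumes a multi-GPU node such as a four-A100 HGX module.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int src = 0; src < count; ++src) {
        for (int dst = 0; dst < count; ++dst) {
            if (src == dst) continue;

            // Can 'src' read/write 'dst' memory directly across the fabric?
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, src, dst);

            // Relative ranking of the link between the two devices
            // (interpretation is platform-dependent; NVLink and PCIe differ).
            int perfRank = 0;
            cudaDeviceGetP2PAttribute(&perfRank,
                                      cudaDevP2PAttrPerformanceRank, src, dst);

            printf("GPU %d -> GPU %d : peer access %s, link rank %d\n",
                   src, dst, canAccess ? "yes" : "no", perfRank);
        }
    }
    return 0;
}
```

On an HGX A100 module, all four GPUs would typically report peer access to one another over NVLink, while endpoints attached through the PCIe switch appear as ordinary PCIe devices; error handling is omitted for brevity.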
Through OpenVPX extensions to Rigel, new levels of low-latency sensor acquisition become possible. Sensor data can be delivered directly into GPU memory without transiting host memory, unlocking greater sensor bandwidth as latency is reduced. In this architecture, much of the host processor's traditional role shifts to the GPUs, which offer greater computing power, flexible data formats, and, of course, parallelism.
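As a rough illustration of why bypassing host memory matters, the sketch below contrasts a transfer staged through a host bounce buffer (two fabric crossings) with a direct device-to-device copy across the fabric (one). It is hypothetical: a second GPU stands in for the sensor endpoint, whereas a deployed system would more likely use a vendor DMA engine or GPUDirect RDMA to write sensor data straight into GPU memory; the buffer size and device IDs are illustrative.

```cpp
// direct_path.cu -- sketch: host-staged transfer vs. direct peer transfer.
// Hypothetical example; error checking omitted for brevity.
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 64ull << 20;   // 64 MiB sample payload
    const int srcDev = 0, dstDev = 1;   // srcDev stands in for a sensor endpoint

    void *srcBuf = nullptr, *dstBuf = nullptr;
    cudaSetDevice(srcDev);
    cudaMalloc(&srcBuf, bytes);
    cudaSetDevice(dstDev);
    cudaMalloc(&dstBuf, bytes);

    // Path 1: staged through a pinned host bounce buffer (two fabric crossings,
    // plus host memory bandwidth consumed along the way).
    void *hostStage = nullptr;
    cudaMallocHost(&hostStage, bytes);
    cudaMemcpy(hostStage, srcBuf, bytes, cudaMemcpyDeviceToHost);
    cudaMemcpy(dstBuf, hostStage, bytes, cudaMemcpyHostToDevice);

    // Path 2: direct peer-to-peer copy across the switched fabric;
    // host memory is never touched.
    cudaDeviceEnablePeerAccess(srcDev, 0);   // once per device pair
    cudaMemcpyPeer(dstBuf, dstDev, srcBuf, srcDev, bytes);

    cudaFreeHost(hostStage);
    cudaFree(dstBuf);
    cudaSetDevice(srcDev);
    cudaFree(srcBuf);
    return 0;
}
```

In practice, eliminating the bounce buffer roughly halves the number of fabric crossings per sample and removes host memory bandwidth from the critical path.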
In AI Transportables, GPUs and switched fabrics will play a key role in lifting the performance limits of legacy applications. By combining Rigel with OSS host bus adapter and extender technology, the service life of legacy SOSA interfaces can be extended. And with the managed switch fabric at the heart of these applications, even the GPU elements can be replaced with newer generations while other elements of the system are preserved for reuse.
If you would like to learn more about hybrid Rigel, please get in touch!