
When VPX Can’t Keep up with the Threat

November 14, 2023


By Tom Fries, Government/Defense Sales Manager

Generative AI, inferencing, and natural language processing (NLP) applications have all seen explosive growth (NVIDIA's market capitalization grew roughly 3,200%) and rapid proliferation over the last two years across commercial market segments such as automotive, aerospace, manufacturing, transportation, banking, and entertainment. This commercial consumption of AI has directly driven the need for high-performance computing (HPC) solutions, and specifically for HPC solutions that operate in some of the most extreme environments on earth. By comparison, DOD compute systems for air and ground platforms, while heavily ruggedized for austere environments, lack the compute power to exploit these advances in AI, or worse, lack the capability to counter AI threats mounted by adversaries using commercially available software and hardware.

Currently, the DOD, with each branch pursuing its own AI initiatives, is focusing on requirements for a common hardware interface solution (e.g., VPX, CMFF, OMS) that provides a mission-module approach for integrating or updating C5ISR capabilities on air and ground platforms. The common interface framework, while technically feasible for integrating multiple single-purpose "black boxes" into a single chassis with a common interface, cannot deliver the nanosecond-scale latencies and petaflop-class compute throughput needed to run AI applications. Couple that shortfall in processing speed and power with current operational threats' ability to extend the reach of the kill chain, and a dangerous technological gap emerges across most of the DOD's air, ground, surface, and subsurface platforms.


The Power of GPUs in AI Computing

The transition from CPU to GPU core computing in HPC and AI/ML applications is the fundamental driving force behind today's most cutting-edge applications. "AI Transportable" systems achieve extraordinary compute performance through the use of multiple GPUs and new-generation switched fabrics such as PCIe Gen5 and NVIDIA's NVLink. Today, these elements do not map well into the OpenVPX standard. To preserve the significant government investment in OpenVPX products, a hybrid solution is needed that captures the performance benefits of GPUs (e.g., the NVIDIA H100) and delivers the operational capabilities needed to employ and counter AI technologies.
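To see why these fabrics matter, it helps to compare nominal bandwidths. The sketch below is only a back-of-envelope estimate using published figures: PCIe Gen5 signals at 32 GT/s per lane with 128b/130b encoding, while NVIDIA quotes 900 GB/s of total NVLink bandwidth for the H100.

```python
# Back-of-envelope interconnect bandwidth estimate (nominal published figures).
GEN5_GT_PER_LANE = 32.0           # PCIe Gen5 raw signaling rate, GT/s per lane
ENCODING_EFFICIENCY = 128 / 130   # 128b/130b line-encoding overhead
LANES = 16                        # a x16 slot

# Usable bandwidth per direction, in GB/s (8 bits per byte)
pcie_gen5_x16_gbps = GEN5_GT_PER_LANE * ENCODING_EFFICIENCY * LANES / 8

print(f"PCIe Gen5 x16: ~{pcie_gen5_x16_gbps:.0f} GB/s per direction")
# NVIDIA's H100 datasheet quotes 900 GB/s total NVLink bandwidth,
# which is why GPU-to-GPU traffic stays on NVLink where possible.
```

The roughly order-of-magnitude gap between a single x16 link and the NVLink mesh is what motivates keeping peer GPU traffic off the PCIe fabric entirely.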

At the heart of an AI Transportable system is a PCIe Gen5 switched fabric with the capacity to support an NVIDIA HGX H100 4- or 8-GPU backplane. Each GPU features an external 16-lane PCIe connection as well as a meshed, private NVLink connection to the other GPUs. A complex and versatile PCIe switched fabric and related management software allow dynamic or fixed lane routing between GPUs, hosts, memory, and I/O according to application demands. To meet SOSA alignment requirements in a VPX form factor, a unique PCIe Gen5 host bus adapter expansion technology is utilized.
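The fixed-lane-routing idea can be pictured as a budget of switch lanes divided among endpoint pairs. The toy model below is purely illustrative; the class and endpoint names are invented here, and real fabric managers expose vendor-specific APIs rather than anything like this:

```python
# Toy model of fixed lane allocation in a PCIe switch fabric.
# All names (SwitchFabric, "host", "sensor", "gpu0") are invented
# for illustration; this is not a real fabric-management API.

class SwitchFabric:
    def __init__(self, total_lanes):
        self.total_lanes = total_lanes
        self.routes = {}  # (src, dst) -> lanes allocated

    def allocate(self, src, dst, lanes):
        used = sum(self.routes.values())
        if used + lanes > self.total_lanes:
            raise ValueError("not enough free lanes in the fabric")
        self.routes[(src, dst)] = lanes

fabric = SwitchFabric(total_lanes=64)
fabric.allocate("host", "gpu0", 16)    # host-to-GPU link
fabric.allocate("sensor", "gpu0", 16)  # sensor I/O routed straight to the GPU
free_lanes = fabric.total_lanes - sum(fabric.routes.values())
print(free_lanes)  # 32
```

The point of the model is that routing sensor I/O directly to a GPU consumes fabric lanes just like a host link does, so the management layer must budget both against the same pool.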

Looking Forward

Through OpenVPX extensions to AI Transportables, new levels of low-latency sensor acquisition are possible. Direct access to GPU memory without transit through host memory unlocks new levels of sensor bandwidth with reduced latency. The architecture shifts the role of the host processor to the GPUs, which offer greater computing power and the combined, enhanced performance required to support new and emerging applications.
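The benefit of bypassing host memory can be seen simply by counting data traversals. In the toy comparison below the hop counts are illustrative only (real paths depend on topology), and the function name is invented for this sketch:

```python
# Toy comparison of data movement with and without a host bounce buffer.
# Hop counts are illustrative; real traversal counts depend on topology.

def bytes_moved(payload_bytes, via_host):
    # Bounce-buffer path: device -> host memory, then host memory -> GPU.
    # Direct path (GPUDirect-style): device -> GPU in a single traversal.
    hops = 2 if via_host else 1
    return payload_bytes * hops

payload = 1 << 30  # a 1 GiB sensor frame
staged = bytes_moved(payload, via_host=True)
direct = bytes_moved(payload, via_host=False)
print(staged // direct)  # 2: the staged path moves every byte twice
```

Halving the bytes crossing the fabric per frame is where the latency and bandwidth headroom for high-rate sensor acquisition comes from.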

