By Tom Fries, Government/Defense Sales Manager
Generative, inferencing, and natural language processing (NLP) AI applications have seen explosive growth (reflected in NVIDIA's roughly 3,200% market-cap gain) and rapid proliferation over the last two years across commercial market segments such as automotive, aerospace, manufacturing, transportation, banking, and entertainment. This commercial consumption of AI has directly driven the need for High Performance Compute (HPC) solutions, and specifically for HPC solutions that operate in some of the most extreme environments on earth. By comparison, DOD compute systems for air and ground platforms, while significantly ruggedized for austere environments, lack the compute power to employ these AI advancements, or worse, lack the capability to counter AI threats fielded by adversaries using commercially available software and hardware.
Currently, the DOD, with each branch pursuing its own AI initiatives, is focused on requirements for a common hardware interface solution (e.g., VPX, CMFF, OMS) that provides a mission-module approach for integrating or updating C5ISR capabilities on air and ground platforms. The common interface framework, while technically feasible for consolidating multiple single-use "black boxes" into a single chassis with a common interface, cannot deliver the nanosecond-scale latencies, petaflop-scale compute, and matching data-transfer bandwidth needed to run AI applications. Couple that shortfall in processing speed and power with current operational threats capable of extending the reach of the kill chain, and the dangerous technological gap in most of the DOD's air, ground, surface, and subsurface platforms becomes clear.

The Power of GPUs in AI Computing
The transition from CPU to GPU core computing in HPC and AI/ML applications is the fundamental driving force behind today's most cutting-edge applications. "AI Transportable" systems achieve extraordinary compute performance through the use of multiple GPUs and new-generation switched fabrics such as PCIe Gen5 and NVIDIA's NVLink. Today, these elements do not map well onto the OpenVPX standard. To preserve the significant government investment in OpenVPX products, a hybrid solution is needed that captures the performance benefits of modern GPUs (e.g., the NVIDIA H100) and realizes the operational capabilities needed to employ, and counter, AI technologies.
At the heart of an AI Transportable system is a PCIe Gen5 switched fabric that provides the capacity to support an NVIDIA HGX H100 4- or 8-GPU backplane. Each GPU features an external 16-lane PCIe connection, as well as a meshed private NVLink connection to the other GPUs. A complex and versatile PCIe switched fabric and related management software allow dynamic or fixed lane routing between GPUs, hosts, memory, and I/O according to application demands. To achieve the SOSA-compliant requirement in a VPX format, a unique PCIe Gen5 Host Bus Adapter expansion technology is utilized.
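To put the fabric numbers in perspective, the back-of-envelope calculation below compares published per-direction PCIe Gen5 x16 bandwidth (32 GT/s per lane with 128b/130b encoding) against NVIDIA's quoted 900 GB/s aggregate NVLink bandwidth per H100. This is a simple illustrative sketch, not a measurement of any OSS system.

```python
# Back-of-envelope interconnect bandwidth comparison, using public
# PCIe Gen5 and NVIDIA H100 NVLink specification figures.

def pcie_gen5_bw_gbps(lanes: int) -> float:
    """Usable one-direction bandwidth in GB/s for a PCIe Gen5 link:
    32 GT/s per lane, 128b/130b encoding, 8 bits per byte."""
    raw_transfers_per_s = 32e9
    encoding_efficiency = 128 / 130
    return lanes * raw_transfers_per_s * encoding_efficiency / 8 / 1e9

x16_bw = pcie_gen5_bw_gbps(16)   # roughly 63 GB/s per direction
nvlink_h100 = 900.0              # GB/s aggregate per H100 (NVIDIA spec)

print(f"PCIe Gen5 x16: {x16_bw:.0f} GB/s per direction")
print(f"NVLink (H100): {nvlink_h100:.0f} GB/s aggregate")
print(f"NVLink advantage for GPU-to-GPU traffic: ~{nvlink_h100 / x16_bw:.0f}x")
```

The gap is why the architecture keeps GPU-to-GPU traffic on the private NVLink mesh and reserves the PCIe fabric for host, memory, and I/O routing.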
Looking Forward
Through OpenVPX extensions to AI Transportables, new levels of low-latency sensor acquisition become possible. Sensor data can be written directly into GPU memory without transiting host memory, unlocking greater sensor bandwidth at reduced latency. The architecture shifts the primary processing role from the host processor to the GPUs, which offer far greater computing power and the combined performance required to support new and emerging applications.
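The latency benefit of bypassing host memory can be sketched with a toy transfer model: a staged path crosses the PCIe fabric twice (sensor to host RAM, then host RAM to GPU), while a direct DMA path crosses it once. The 63 GB/s figure and 256 MiB burst size are illustrative assumptions, not measured values from any fielded system.

```python
# Toy latency model contrasting a staged sensor->host->GPU copy with
# a direct sensor->GPU DMA path, as described above. Bandwidths and
# payload size are illustrative assumptions only.

def transfer_time_ms(payload_bytes: int, hop_bandwidths_gbps: list[float]) -> float:
    """Total time for a payload crossing each fabric hop sequentially."""
    return sum(payload_bytes / (bw * 1e9) for bw in hop_bandwidths_gbps) * 1e3

payload = 256 * 1024 * 1024  # 256 MiB sensor burst (assumed)

# Staged path: sensor NIC -> host RAM, then host RAM -> GPU (two PCIe hops).
staged_ms = transfer_time_ms(payload, [63.0, 63.0])

# Direct path: sensor NIC -> GPU through the PCIe switch (one hop).
direct_ms = transfer_time_ms(payload, [63.0])

print(f"staged copy: {staged_ms:.2f} ms, direct DMA: {direct_ms:.2f} ms")
```

In this simplified model the direct path halves transfer time; on real hardware the staged path also burns host memory bandwidth and CPU cycles, so the practical gap is often larger.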
The character of modern warfare is being reshaped by data. Sensors, autonomy, electronic warfare, and AI-driven decision systems are now decisive advantages, but only if compute power can be deployed fast enough and close enough to the fight. This reality sits at the center of recent guidance from the Trump administration and Secretary of War Pete Hegseth, who has repeatedly emphasized that “speed wins; speed dominates” and that advanced compute must move “from the data center to the battlefield.”
OSS specializes in taking the latest commercial GPU, FPGA, NIC, and NVMe technologies, the same acceleration platforms driving hyperscale data centers, and delivering them in rugged, deployable systems purpose-built for U.S. military platforms. At a moment when the Department of War is prioritizing speed, adaptability, and commercial technology insertion, OSS sits at the intersection of performance, ruggedization, and rapid deployment.
Maritime dominance has long been a foundation of U.S. national security and allied stability. Control of the seas enables freedom of navigation, power projection, deterrence, and protection of global trade routes. As the maritime battlespace becomes increasingly contested, congested, and data-driven, dominance is no longer defined solely by the number of ships or missiles, but by the ability to sense, decide, and act faster than adversaries. Rugged High Performance Edge Compute (HPeC) solutions have become a decisive enabler of this advantage.
At the same time, senior Department of War leadership, including directives from the Secretary of War, has made clear that maintaining superiority requires rapid integration of advanced commercial technology into military platforms at the speed of need. Traditional acquisition timelines measured in years are no longer compatible with the pace of technological change or modern threats. Rugged HPeC solutions from One Stop Systems (OSS) directly address this challenge.
Initial design and prototype order valued at approximately $1.2 million
Integration of OSS hardware into prime contractor system further validates OSS capabilities for next-generation 360-degree vision and sensor processing solutions
ESCONDIDO, Calif., Jan. 07, 2026 (GLOBE NEWSWIRE) -- One Stop Systems, Inc. (OSS or the Company) (Nasdaq: OSS), a leader in rugged Enterprise Class compute for artificial intelligence (AI), machine learning (ML) and sensor processing at the edge, today announced it has received an approximately $1.2 million pre-production order from a new U.S. defense prime contractor for the design, development, and delivery of ruggedized integrated compute and visualization systems for U.S. Army combat vehicles.