
Scale-out I/O Solutions in a PCIe Gen5 World

February 21, 2023

By Jeff Sigua, Product Manager

As the world stands on the brink of a tech refresh, scale-out solutions will get a significant boost in both TDP and PCIe transfer speeds. CPU TDPs jump from roughly 250W to 400W in the Gen5 generation, and a x16 PCIe link roughly doubles from ~32 GB/s to ~64 GB/s in each direction. The extra watts and bandwidth will push the boundaries of I/O solutions and speeds. Scale-out I/O means expanding capacity by adding new hardware resources rather than increasing the capacity of existing hardware.
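As a rough illustration of that jump, here is a minimal back-of-the-envelope sketch (not OSS tooling, just the published per-lane transfer rates and 128b/130b line coding) of theoretical per-direction x16 bandwidth by PCIe generation:

```python
# Back-of-the-envelope PCIe bandwidth math: published transfer rates (GT/s)
# and 128b/130b line coding (Gen3 and later), before protocol overhead.
TRANSFER_RATE_GT_S = {"Gen3": 8, "Gen4": 16, "Gen5": 32}
ENCODING_EFFICIENCY = 128 / 130  # 128b/130b encoding

def link_bandwidth_gb_s(gen: str, lanes: int = 16) -> float:
    """Theoretical raw bandwidth in GB/s, per direction, for a link of `lanes` lanes."""
    return TRANSFER_RATE_GT_S[gen] * ENCODING_EFFICIENCY * lanes / 8  # bits -> bytes

for gen in TRANSFER_RATE_GT_S:
    print(f"{gen} x16: ~{link_bandwidth_gb_s(gen):.1f} GB/s per direction")
# -> Gen4 x16: ~31.5 GB/s, Gen5 x16: ~63.0 GB/s, the ~32 to ~64 GB/s jump noted above
```

Real-world throughput lands somewhat lower once packet headers and protocol overhead are accounted for, but the generational doubling holds.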

For the past two decades, One Stop Systems has specialized in cabled PCIe expansion, using passive PCB routing expertise or switch architectures to drive tech roadmaps into the future. Recently, we developed PCIe Gen5 Host Interface Boards (HIBs) that run up to 2 meters over copper cables and connect to the latest Gen5 PCIe expansion backplanes and rack mount enclosures.

As next-generation motherboards become available, the common theme among them is fewer native x16 PCIe Gen5 slots coming directly from the CPU. Those CPU lanes are instead routed to onboard network and storage options and connectors. This highlights the importance of PCIe Gen5 over cable, which allows a larger scaled-out array of I/O cards such as NVIDIA's H100 GPU and other high-speed data capture cards in development from multiple third-party companies. At the hub of the scale-out will be One Stop Systems' new Gen5 Short Depth Server (SDS). This server appliance can host up to 16 NVMe U.2 drives in removable canisters, a GPU node, and PCIe Gen5 cable expansion to our sister products, the 4U Pro and the Centauri storage expansion solution.

Scale-out Solutions

We’re seeing full scale-out I/O rack deployments running off One Stop Systems' SDS or EOS server. Either server can handle up to 128 endpoint devices, with the help of a custom motherboard BIOS optimized for PCIe enumeration. As we move into PCIe Gen5, the rate of sensor and data ingestion between the scaled-out I/O devices and the host server doubles. These transfer speeds become the critical path for AI on the edge and machine learning workloads and their packet sizes. NVIDIA highlights the AI benefits of its increased Tensor Core count, and the PCIe Gen5 GPU also gets a power bump to a 350W TDP.
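For readers curious how a large endpoint count appears on the host side, the following is a minimal sketch, assuming a Linux host with sysfs, that counts enumerated non-bridge PCI functions. It is an illustrative check only, not part of the OSS BIOS or tooling:

```python
# Minimal sketch (assumes a Linux host): count enumerated PCI functions by
# walking sysfs, one way to sanity-check that the BIOS has enumerated a
# large scale-out fabric. Class codes follow the standard PCI class registry.
from pathlib import Path

PCI_DEVICES = Path("/sys/bus/pci/devices")

def count_endpoints() -> int:
    """Count enumerated PCI functions that are not PCI-to-PCI bridges."""
    count = 0
    for dev in PCI_DEVICES.iterdir():
        class_code = (dev / "class").read_text().strip()  # e.g. "0x010802" for an NVMe drive
        if not class_code.startswith("0x0604"):           # 0x0604 = PCI-to-PCI bridge / switch port
            count += 1
    return count

if __name__ == "__main__":
    print(f"Enumerated non-bridge PCI functions: {count_endpoints()}")
```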

By doubling performance, scale-out I/O with Gen5 opens the technology boundaries for our future as a society. We should see rapid growth in autonomous cars, trucks, and machines, and in smart city infrastructure, as edge technologies such as radar, lidar, and 4K/8K camera capture systems gain enough processing speed to deliver near real-time results with machine learning.
