By Jaan Mannik, Director of Commercial Sales
The evolution of IT infrastructure spans several decades and is marked by significant advancements in computing technology, networking, storage, and management practices. Data centers have historically relied on Converged or Hyper-Converged Infrastructure when deploying their hardware, approaches that proved limited in flexibility, efficiency, scalability, and support for today's Artificial Intelligence / Machine Learning (AI/ML) workloads. Composable Disaggregated Infrastructure addresses these limitations by increasing resource utilization and represents a paradigm shift in how IT infrastructure is designed. In this post, we'll take an in-depth look at how this new technology is being deployed today, and why it's revolutionizing the way organizations build and operate their IT environments.
In a Converged Infrastructure (CI), all the hardware resources, including compute, storage, and networking, are pre-configured and integrated into a single system. Multiple servers are racked together and connected over a common network backbone such as Ethernet, Fibre Channel, or InfiniBand. Converged Infrastructure is designed to simplify deployment by minimizing compatibility issues between systems, reducing complexity, and improving performance. Companies looking to scale simply deploy more pre-configured bundles of hardware and connect them to their network. While easier to manage, this approach often leads to scalability challenges, resulting in over-provisioning or underutilization of resources. Flexibility is also limited since deployments are often optimized for specific workloads, making it challenging to support changing requirements or introduce new technologies and components.
Hyper-Converged Infrastructure (HCI) addresses some of the limitations of Converged Infrastructure by taking a software-based approach that integrates compute, storage, and networking resources into a single, software-defined, virtualized environment. Virtual servers are managed by a single hypervisor, which provides simplified management, better scalability, and flexibility in modern data centers. Although a more adaptable approach to IT infrastructure, virtual machines (VMs) introduce performance overhead and latency, limit the granularity with which a single resource can be scaled, and raise data locality concerns.
Enter Composable Disaggregated Infrastructure (CDI): a modern approach to IT infrastructure that has transformed data center operations by providing a more agile, efficient, and flexible foundation that can better support the evolving needs of modern businesses. Compute, storage, and networking resources are disaggregated from their physical locations and placed into a resource pool, managed through an API and dynamically allocated to a given workload while the other resources in the pool are used elsewhere. For example, some workloads may require more GPUs for compute-heavy jobs or more storage for data recording purposes, and those decisions can be made on the fly with software-defined intelligence, which allows for optimal performance. CDI eliminates the need for workload-specific environments by offering a fluid set of resources that can be composed to meet the unique needs of any application. Resource utilization becomes optimized based on application requirements. Scalability becomes dynamic, since resources can be picked and chosen in response to fluctuating workloads. Management is simplified since you don't need to physically provision hardware. Downtime is also reduced because you can perform maintenance on unused resources without disrupting running applications.
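To make the idea concrete, here is a minimal sketch of what API-driven composition looks like from a control-plane perspective. The class and method names (`ResourcePool`, `compose`, `release`) are illustrative assumptions for this post, not any vendor's actual API:

```python
# Hypothetical sketch of a CDI control plane: devices live in a shared pool
# and logical servers are composed from it on demand. All names here are
# illustrative assumptions, not a real vendor API.

class ResourcePool:
    """Tracks disaggregated devices and which ones are currently free."""

    def __init__(self, devices):
        # devices: mapping of device type -> count available in the pool
        self.free = dict(devices)
        self.allocations = {}

    def compose(self, workload, requirements):
        """Carve a logical server for `workload` out of free pool resources."""
        # Verify availability first so a failed request changes nothing.
        for dev, count in requirements.items():
            if self.free.get(dev, 0) < count:
                raise RuntimeError(f"not enough free {dev} for {workload}")
        for dev, count in requirements.items():
            self.free[dev] -= count
        self.allocations[workload] = dict(requirements)
        return self.allocations[workload]

    def release(self, workload):
        """Return a finished workload's devices to the pool."""
        for dev, count in self.allocations.pop(workload).items():
            self.free[dev] += count


# A compute-heavy training job and a storage-heavy ingest job share one pool.
pool = ResourcePool({"gpu": 8, "nvme_ssd": 16, "nic": 4})
pool.compose("training-job", {"gpu": 6, "nvme_ssd": 4})
pool.compose("ingest-job", {"nvme_ssd": 8, "nic": 2})
print(pool.free)  # whatever remains free can serve the next workload
```

When the training job completes, `release("training-job")` returns its GPUs to the pool immediately; this is the dynamic allocation and reclamation the paragraph above describes, without any physical re-cabling.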
Given all the advantages of composable infrastructure, many technology companies are competing to find an edge. Key differentiators between these companies often include bare metal servers, GPU/FPGA expansion accelerators, flash storage arrays, and networking interfaces, all interconnected by a physical top-of-rack PCI Express (PCIe) switch fabric and managed by their own special blend of software. PCIe switch fabrics are extremely high performance, supporting 256 GB/s of sustained throughput at Gen5, and provide much lower latency than other protocols such as Ethernet or InfiniBand. CDI companies take advantage of PCIe by leveraging expansion systems to host their GPU/FPGA/SSD/NIC resources and connect them directly to the top-of-rack PCIe switch. In doing so, all PCIe devices in the remote chassis appear to the host as directly connected PCIe peripherals on the switch fabric. A great example of the latest generation of PCIe 5.0 expansion system being leveraged by some of these technology companies today is the One Stop Systems' 4U Professional compute accelerator, which combines the power of the latest PCIe Gen5 add-in cards with an optimized, feature-rich design for the most demanding HPC applications.
Overall, composable infrastructure represents a significant evolution in data center architecture, offering greater flexibility, efficiency, and scalability compared to traditional monolithic infrastructure models. By disaggregating resources into modular building blocks and providing a software-defined control plane for resource management, CDI enables organizations to adapt more quickly to changing business needs and drive innovation in their IT infrastructure.