Most of us are familiar with, or have at least heard, the term ‘Edge’ or ‘Edge Computing’ in discussions of new high-performance computing (HPC) technologies around the Internet of Things (IoT), cloud computing, autonomous vehicles, etc. But how many of us can explain the difference in performance between PCIe and 5G as it relates to edge computing? And why does it matter when discussing HPC at the edge? In this post, we’ll take an in-depth look at the increasing need for greater performance at the edge and which HPC applications benefit from PCIe vs. 5G.
PCI Express (PCIe) is a high-speed serial computer bus standard found on virtually every motherboard today. It connects peripheral components like GPUs, SSDs, network adapters, and various other I/O devices to the CPU. Even your mouse, monitor, and keyboard leverage this high-performance bus standard, with the PCIe protocol simply converted to something else such as USB, HDMI, or Ethernet. Simply put, if it’s connected to your computer, it’s interfacing with your computer’s brain (the CPU) over PCIe.
Now, to be fair, 5G wireless adapters like the one shown here technically use PCIe (converted from USB in this example) to function in your computer, in the same manner your mouse does, so comparing the two may seem a bit questionable. However, both of these technologies play a major role in the evolution of edge computing as we know it today, as well as where we are headed tomorrow.
I often hear people talking about 5G technology as if it’s going to change the world by creating a high-performance, robust, global wireless network to better enable cell phones, IoT devices, edge computers, and even cars that one day drive autonomously. These devices are connected over 5G to a centralized cloud/datacenter where most of the heavy lifting (compute and storage functions) is done. This works well for dashcams, sensors, security cameras, and infotainment systems, because they can rely on the performance of 5G. The theoretical performance of 5G is said to be up to 10 Gb/s, but actual performance is closer to 1 Gb/s. That’s right: this blazing-fast 5G technology we’ve all been waiting for is only about as fast as the old 1-Gigabit wired Ethernet connection you complain about at the office.
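To put those numbers in perspective, here’s a minimal back-of-envelope sketch using the throughput figures above. The 100 GB dataset size is an illustrative assumption, not a figure from this post; real wireless throughput also varies with carrier, signal, and congestion.

```python
# Back-of-envelope transfer times for the link speeds discussed above.
# Assumes a sustained, error-free link at the quoted rate.

def transfer_seconds(data_gigabytes: float, link_gbps: float) -> float:
    """Seconds to move data_gigabytes over a link running at link_gbps (gigabits/s)."""
    return data_gigabytes * 8 / link_gbps

DATASET_GB = 100  # hypothetical sensor/video dataset size (assumption)

for label, gbps in [("5G (theoretical)", 10.0),
                    ("5G (typical)", 1.0),
                    ("1G wired Ethernet", 1.0)]:
    print(f"{label:>17}: {transfer_seconds(DATASET_GB, gbps):6.0f} s")
```

At a typical 1 Gb/s, that hypothetical 100 GB of sensor data takes over 13 minutes to move, which is exactly why latency-critical workloads can’t simply round-trip everything to a distant cloud.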
In steps PCIe. While some companies like OSS have already begun shipping PCIe Gen 5 products, PCIe Gen 4 is widely adopted today and used in enterprise-class HPC datacenter environments supporting top-of-the-line GPU compute clusters and NVMe flash storage arrays. Since a standard 16-lane PCIe Gen 4 data path can support 256 Gb/s, it’s considered one of the highest-performing, lowest-latency, and most secure connections on the planet. HPC edge applications that require datacenter-level performance, such as autonomous driving, medical imaging, media & entertainment, and battlefield mobility, aim to bring the power of the datacenter to the edge. This can be achieved by utilizing the same enterprise-class CPUs, GPUs, and SSDs and optimizing them for use in non-traditional environments closer to the edge. Datacenter performance at the very edge offers performance without compromise where action must take place NOW.
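That 256 Gb/s figure falls out of simple lane math. A minimal sketch, using raw per-lane line rates and ignoring the small (under 2%) 128b/130b encoding overhead on Gen 3 and later:

```python
# Approximate per-direction PCIe bandwidth: raw line rate per lane (Gb/s,
# numerically equal to GT/s on Gen 3+) multiplied by the lane count.
# Ignores 128b/130b encoding and protocol overhead.

PER_LANE_GBPS = {3: 8.0, 4: 16.0, 5: 32.0}

def pcie_bandwidth_gbps(gen: int, lanes: int) -> float:
    """Raw one-direction bandwidth in Gb/s for a PCIe link of a given generation and width."""
    return PER_LANE_GBPS[gen] * lanes

print(pcie_bandwidth_gbps(4, 16))  # the x16 Gen 4 path cited above
print(pcie_bandwidth_gbps(5, 16))  # Gen 5 doubles the per-lane rate again
```

So a single x16 Gen 4 slot carries roughly 250 times the real-world throughput of a typical 5G link, which is the gap the rest of this post is about.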
Edge computing is seeing increased deployment of heterogeneous HPC systems with PCIe interconnect, due to its performance advantages in data processing for AI/ML workloads, high-speed NVMe storage, ingest, and inferencing. This is quite a paradigm shift from the more traditional approach of completing those AI tasks in the cloud. More data will be processed, stored, and analyzed at the edge to help deliver better performance, lower latency, and improved reliability, security, and privacy. In 2018, only 10% of the data generated for HPC applications was created and processed at the edge. Recent edge computing market trends indicate that by 2025, data processed at the edge is expected to grow to 75%. As this market continues to grow, so does the need for greater performance. 5G is a great advancement in wireless technology that can be leveraged by smaller systems and IoT devices; however, don’t expect it to steer your car for you anytime soon.
The beginning of the year at a public company brings a myriad of tasks that require a significant amount of time. The first is writing annual performance reviews for each employee. This includes a review and assessment of the previous year’s goals, and the establishment of goals for the new year. These are required to be written in SMART goal format: Specific, Measurable, Achievable, Relevant, and Time-based.
The most powerful artificial intelligence computing hardware is designed to thrive in a datacenter environment, where there is uncapped clean power, near-limitless cooling capacity, and no vibration. The growth of AI use cases in vehicles, including automated crop management, autonomous long-haul freight, and military ISR aircraft, necessitates the use of datacenter-oriented hardware in vehicles, particularly for initial development while more customized size, weight, and power (SWaP) optimized embedded platforms are developed.
In this video, Jaan Mannik, Director of Commercial Sales at OSS, gives a quick walkthrough of the Centauri Storage Expansion. Centauri offers rugged, high-speed storage in a compact chassis. Built as a modular storage expansion to the OSS 3U SDS, Centauri can hold up to 256 TB of NVMe storage in its 8-drive canisters. These canisters allow tool-less bulk or individual drive removal and can be hot-swapped for ease of use in fast-paced environments. The system is compatible with 2.5" NVMe drives, and its PCIe Gen 4 hardware facilitates high-speed storage throughput.