
The Future of Edge Computing: The ECU

September 12, 2023


Jaan Mannik – Director of Commercial Sales

In my last blog post, What is the Workhorse Advancing HPC at the Edge?, I highlighted how enterprise applications that demand the highest-end compute for their AI workloads at the edge are leveraging data-center-grade NVIDIA GPUs for even greater performance. Processing and storing data closer to where the action takes place means decisions can be made more quickly, delivering reduced latency, improved security, greater reliability, and much higher performance. In this blog, I'll cover the transition from big, power-hungry GPUs to smaller form-factor electronic control units, better known as ECUs, at the very edge.


GPUs have traditionally been the workhorse for training and retraining AI models, thanks to their massively parallel architecture, which is well suited to general-purpose computing. Training an AI model typically involves processing vast amounts of data through complex mathematical operations to adjust the model's parameters, essentially teaching it to make more accurate predictions or classifications over time. Enterprise-class NVIDIA GPUs like the Gen4 A100 or the new Gen5 H100 are supported by specialized libraries like CUDA and cuDNN that optimize AI/ML frameworks for GPU acceleration, ensuring that AI developers can take advantage of the full potential of their GPUs.
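The training loop described above can be sketched in miniature. The example below is illustrative only: a toy one-parameter model fit by gradient descent in plain Python, rather than a real GPU-accelerated framework, but the core idea is the same — repeatedly adjust parameters so predictions better match the data.

```python
# Toy illustration of "training": repeatedly adjust a model's parameters
# so its predictions better match the data. Here a one-parameter linear
# model y = w * x is fit by gradient descent on mean squared error.
# Real AI training runs this same loop over millions of parameters on
# GPU-accelerated frameworks built on libraries like CUDA and cuDNN.

def train(data, lr=0.01, epochs=200):
    """Fit w in y = w * x by gradient descent on mean squared error."""
    w = 0.0
    for _ in range(epochs):
        # Gradient of MSE with respect to w: mean of 2 * (w*x - y) * x
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad  # step opposite the gradient
    return w

# Data generated from y = 3x; training should recover w close to 3.
data = [(x, 3.0 * x) for x in range(1, 6)]
w = train(data)
```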

Retraining AI models is also extremely important for improving model accuracy and relevance over time. When new data becomes available, retraining is essential to adapt the model to changing circumstances. Retraining typically requires fewer computational resources than the initial training because the new datasets are smaller and the model is already largely optimized. This logic applies to real life as well. For example, think of a baseball pitcher in high school who throws a great fastball (I'm reliving my glory days for a moment). To teach them to throw a curveball, the coach shows them a new way to grip the baseball so the ball breaks across the plate. The pitcher doesn't need to be taught how to push off the mound and release the ball toward the catcher, because they've already learned how to do that. It's a new, small dataset applied to an existing model (or pitcher), which is retrained to throw fastballs AND curveballs.
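The same curveball idea can be shown with the toy one-parameter model: starting from already-learned weights, a small new dataset nudges the model rather than rebuilding it from scratch. This is an illustrative sketch only — the datasets and target values are made up.

```python
def train(data, w=0.0, lr=0.01, epochs=50):
    """Gradient-descent fit of w in y = w * x, starting from a given w."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# Initial training: a larger dataset drawn from y = 3x.
base = [(x, 3.0 * x) for x in range(1, 6)]
w_trained = train(base, epochs=200)

# Retraining: a small new dataset (the "curveball") drawn from y = 3.2x.
# Because we start from the already-trained weight instead of zero,
# far fewer epochs are needed to adapt the model.
new = [(1.0, 3.2), (2.0, 6.4)]
w_retrained = train(new, w=w_trained, epochs=50)
```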


Applying the above logic to seasoned AI models that are ready for production environments at the edge is why the electronic control unit (ECU) is so important. There is still a need for real-time processing, along with efficient power consumption, ruggedization, and careful attention to size, weight, and cooling, due to the non-traditional environments of edge computing applications. The big, power-hungry NVIDIA® A100 and H100 GPUs used initially to train models in a datacenter are no longer suitable, and can be replaced by specialized ECUs like NVIDIA Orin. NVIDIA Orin modules, such as the Jetson™ AGX Orin, are highly integrated and powerful system-on-chip (SoC) modules specifically designed to address the limitations of the non-traditional edge environments mentioned above. Each ECU combines multiple Arm CPU cores, NVIDIA's custom-designed GPU cores, dedicated hardware accelerators for AI/ML tasks, and a variety of edge-relevant I/O interfaces into a single compact, power-efficient package, making it an ideal solution for edge devices with constrained resources. Connecting multiple ECUs together can increase performance and also introduce redundancy, which is extremely important for edge applications like autonomous driving, robotics, and smart cities.
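The redundancy point can be illustrated with a minimal majority-vote sketch. Triple modular redundancy is one common pattern for safety-critical systems; the ECU outputs below are hypothetical strings for illustration, not a real NVIDIA API.

```python
from collections import Counter

def vote(outputs):
    """Return the majority result across redundant ECU outputs,
    masking a single faulty unit; raise if no majority exists."""
    value, count = Counter(outputs).most_common(1)[0]
    if count <= len(outputs) // 2:
        raise ValueError("no majority -- fault cannot be masked")
    return value

# Three ECUs run the same inference; one disagrees (a fault),
# and the majority vote masks it.
decision = vote(["brake", "brake", "coast"])  # -> "brake"
```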

 



In conclusion, the shift from GPUs to ECUs like NVIDIA Orin in edge computing reflects the evolving demands of modern applications. The move toward power-efficient, compact, and integrated solutions has been driven by the need for real-time processing, low latency, and AI capabilities at the very edge. As technology continues to advance, ECUs are likely to play an increasingly pivotal role in shaping the future of edge computing.






