Past and Future of AI Compute Technology

March 29, 2022

By Braden Cooper, Product Marketing Manager

When you take a step back and look at today’s high-performance computing (HPC) and AI technology ecosystem, it is a field dominated by GPU-accelerated compute. For the layperson, the term “Graphics Processing Unit” would not suggest the core component for processing massive datasets at unprecedented speeds, with no graphics involved. The transition from CPU-centric to GPU-centric computing in HPC and AI/ML applications, and the continued development of GPU technologies since, is the fundamental driving force behind today’s most cutting-edge applications. Modern AI spans practically every industry: piloting the most sophisticated autonomous vehicles, detecting previously undetectable fraud patterns in finance, and increasing productivity through automation in mining, oil and gas, and farming.

Moore’s Law for CPUs
In 1965, Gordon Moore articulated what became the famous Moore’s Law: the observation that the number of transistors that can fit on an integrated circuit (and therefore its theoretical compute power) doubles approximately every two years. This served not only as a prediction of the future of computing, but as a guiding principle for chip-level design across the semiconductor industry for the decades that followed. Chip manufacturers built roadmaps around the expectation that computing density would double every other year, epitomized by Intel’s “Tick-Tock” model, in which design cycles alternated between manufacturing-process improvements and microarchitecture improvements in order to keep pace. While Moore’s Law held closely for decades, well into the 21st century, exponential scaling became less and less feasible. Around 2010, semiconductor density growth fell behind Moore’s Law, forcing industry leaders to rethink the future of compute technologies.
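As a back-of-the-envelope illustration of what that cadence implies (the baseline figure below is an assumption chosen for the example, not a number from this post), a chip with N_0 transistors is projected under the doubling rule to reach the following count after t years:

```latex
% Moore's Law doubling rule: projected transistor count after t years
N(t) = N_0 \cdot 2^{t/2}
% Illustrative example: N_0 = 2300 transistors in 1971
% N(2020 - 1971) = 2300 \cdot 2^{49/2} \approx 5 \times 10^{10}
```

Starting from roughly 2,300 transistors in 1971, the rule projects on the order of 50 billion transistors by 2020, an exponential pace that manufacturing processes found increasingly difficult to sustain.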

The Power of GPUs in AI Computing
Around the same time, innovators in computing began to realize the value of GPUs as compute-acceleration devices. While designed primarily to process graphics, GPUs can also be used for general-purpose computing in certain applications. Where CPU design favors complex instruction sets and advanced features, GPUs contain significantly higher core counts that excel at high-throughput parallel processing of simple calculations. Applications that run large datasets through a consistent set of calculations can therefore be processed significantly faster on a GPU than on a CPU. NVIDIA capitalized on this capability by introducing its Tesla family of products in 2007. Tesla GPUs were optimized for parallel compute acceleration and often had no option for video output, which until that point had been the main use of GPUs. Alongside the Tesla line of compute-acceleration GPUs, NVIDIA also developed CUDA, a parallel computing platform and application programming interface (API) that lets software developers take advantage of GPU compute capabilities more easily.
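As a minimal, hypothetical sketch of that pattern (illustrative only, not code from any particular product), the CUDA kernel below pushes about a million elements through the same simple calculation, one thread per element, which is exactly the kind of uniform, data-parallel workload that maps well onto thousands of GPU cores:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Every thread applies the identical simple calculation to one element,
// illustrating high-throughput parallel processing of a large dataset.
__global__ void scale_and_add(const float *a, const float *b, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        out[i] = 2.0f * a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;                  // ~1 million elements
    const size_t bytes = n * sizeof(float);

    float *a, *b, *out;
    cudaMallocManaged(&a, bytes);           // unified memory keeps the example short
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&out, bytes);

    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    scale_and_add<<<blocks, threads>>>(a, b, out, n);   // thousands of threads run concurrently
    cudaDeviceSynchronize();

    printf("out[0] = %f\n", out[0]);        // expect 4.0

    cudaFree(a); cudaFree(b); cudaFree(out);
    return 0;
}
```

A CPU would walk through those elements a few at a time; the GPU dispatches thousands of threads executing the same instruction stream at once, which is where the throughput advantage described above comes from.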

As CPU manufacturers continued to see slower growth in chip density and delays in manufacturing-process improvements, GPU technology entered a period of rapid performance growth. At the GPU Technology Conference (GTC) in 2018, NVIDIA CEO Jensen Huang introduced his own metric for compute technology growth, which has come to be called “Huang’s Law,” continuing the spirit of Moore’s Law with a focus on GPU performance more than doubling every two years. The success of GPUs in filling this new role is apparent in a variety of ways. NVIDIA’s market capitalization grew roughly 3,200% from 2010 to 2020, placing it among the top 10 US companies by market cap. Artificial intelligence, previously a work of fiction, became a practical reality and a critical capability in nearly every industry thanks to the compute power GPUs provide. Today, GPUs are the core compute component in autonomous vehicles, in systems tackling the most complex scientific challenges, and in medical imaging analysis that detects findings beyond human visual acuity.

[Image: AI Deep Learning]

Looking Forward
While the scaling of GPU density has cooled somewhat, the adoption of GPUs for AI applications is at an all-time high. As the code bases that drive compute acceleration have matured, implementing AI has become feasible in most industries. Farming, mining, drilling, and freight are all undergoing technology overhauls, replacing outdated CPU-focused or manual systems with compact, architecturally optimized, GPU-focused AI platforms that improve the efficiency, accuracy, and consistency of their operations. Innovators’ primary focus has shifted from raw compute power to how best to architect systems, in both hardware and software, for efficient AI workflows. This raises new questions about how massive datasets are captured, stored, communicated, and processed, and how the results are ultimately turned into actionable intelligence. These questions are in turn driving new advances in FPGAs, NICs, SSDs, and the other supporting technologies critical to the AI workflow. While GPUs remain the core of the process, the role of the other components in building a balanced and efficient data path is growing.

Looking forward, the global technology community has many years of AI development ahead. As data volumes continue to scale with new advances in sensing and data recording, compute and networking platforms must grow with them. Advances on the software side of AI allow for more efficient code, but they are paired with newer, more powerful feature sets that take advantage of the higher compute capacity available. AI applications will become increasingly common across industries, introducing a new wave of efficiencies alongside new challenges to overcome in the next generation of system designs. These technologies are also being rapidly adopted in Department of Defense (DoD) applications, as seen in the 2022 National Defense Authorization Act (NDAA), which includes several sections of requirements on developing metrics, research programs, and the identification of technologies in the AI ecosystem. Military unmanned vehicles and threat-detection systems continue to grow alongside advances in the commercial sector. Ultimately, this growth is creating, and will continue to create, incentives for companies to innovate and overcome the obstacles in developing AI compute systems.

AI compute technology has been in a period of rapid growth for over a decade and shows no sign of slowing down. Now that GPU applications, in both hardware and software, have proven successful and matured across a wide variety of industries, adopting AI has never been easier. Today’s computing research and development is centered on system architectures and on optimizing the data path throughout the application, from data capture to actionable intelligence. Whether in autonomous vehicles, finance, or manufacturing automation, AI computing is a nexus technology in today’s world, and it will continue to expand its role in nearly every industry in the future.
