'AI on the Fly': Moving AI Compute and Storage to the Data Source

May 06, 2019

Tim Miller of One Stop Systems highlights a new approach - AI on the Fly - in which specialized high-performance accelerated computing resources for deep learning training move into the field, close to the data source. One Stop Systems asserts that moving AI compute to the data is another important step in realizing the full potential of AI.

To read the article, click here.

The impact of artificial intelligence is starting to be realized across a broad spectrum of industries. Typically, deep learning (DL) training is a centralized datacenter process, while inferencing occurs in the field. To build an AI system, data is collected and run through training models that data scientists build on DL frameworks - on the fastest accelerated computers in the world - and the resulting model is sent to the field, where an "AI at the Edge" system inferences from it in day-to-day decision making.
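The train-centrally / infer-at-the-edge split described above can be sketched in a few lines. This is a minimal illustration only: the linear model, the JSON export, and the function names are assumptions standing in for real DL frameworks and deployment tooling, which the article does not specify.

```python
# Illustrative sketch of the centralized-training / edge-inference workflow.
# A trivial 1-D linear fit stands in for DL training; a JSON file stands in
# for the trained model artifact shipped out to the edge system.
import json

def train_in_datacenter(samples):
    """Fit y = w*x + b by ordinary least squares (stand-in for DL training)."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    w = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - w * sx) / n
    return {"w": w, "b": b}

def export_model(model, path):
    """Ship the trained model artifact out to the field."""
    with open(path, "w") as f:
        json.dump(model, f)

def infer_at_edge(path, x):
    """Edge system loads the shipped model and answers queries locally."""
    with open(path) as f:
        model = json.load(f)
    return model["w"] * x + model["b"]

# Datacenter side: collect data, train, export.
data = [(0, 1), (1, 3), (2, 5), (3, 7)]   # samples of y = 2x + 1
export_model(train_in_datacenter(data), "model.json")

# Edge side: day-to-day inference against the exported model.
print(infer_at_edge("model.json", 10))    # → 21.0
```

"AI on the Fly" proposes moving the training step itself out of the datacenter and nearer to where `data` is collected, rather than changing this basic train-then-infer pattern.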

Also in Articles

How the Scarcity of Truckers Hurts Manufacturing (And How to Fix the Problem)

August 18, 2022

According to the American Trucking Associations (ATA), at current trends the driver shortage could surpass 160,000 by 2030. ATA estimates that, over the next decade, the industry will have to recruit nearly a million new drivers to replace those exiting the field due to retirement, burnout, low compensation, and poor benefits. These are the challenges transportation executives face in securing a robust driver pool.

However, the challenge of driver shortages does not end with the trucking industry. Rather, the scarcity of drivers directly affects the larger manufacturing sector.

To read the full article, click here.


Designing Transportable, High-Performance AI Systems for the Rugged Edge

June 29, 2022

System design requirements are well understood for high-performance artificial intelligence applications destined to reside in enterprise or cloud data centers. Data centers are specifically designed to provide a clean, cool environment with stable, standard power and no vibration or shock loads to worry about.


Scalable Inferencing for Autonomous Trucking

June 23, 2022

Most AI inferencing requirements are outside the datacenter, at the edge, where data is being sourced and inferencing queries are being generated. AI inferencing effectiveness is measured by the speed and accuracy with which answers are provided, and many applications require real-time response. To meet the objectives of many AI applications, a very large number of inferencing queries must be serviced simultaneously, often with many different inferencing models answering different types of queries in parallel.
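The pattern of many models servicing many queries in parallel can be sketched with a simple dispatcher. This is a hedged illustration, not OSS's implementation: the model names, the placeholder model functions, and the thread pool (standing in for dedicated edge accelerators) are all assumptions.

```python
# Illustrative sketch: fan incoming inference queries out to several
# different models concurrently. Each "model" is a trivial placeholder
# function, not a real DL network.
from concurrent.futures import ThreadPoolExecutor

# Each model answers a different type of query (hypothetical names).
MODELS = {
    "lane_detect": lambda frame: f"lanes({frame})",
    "obstacle":    lambda frame: f"objects({frame})",
    "sign_read":   lambda frame: f"signs({frame})",
}

def dispatch(queries, max_workers=8):
    """Service (model_name, payload) queries simultaneously; results
    are returned in submission order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(MODELS[name], payload)
                   for name, payload in queries]
        return [f.result() for f in futures]

# Several query types against the same camera frame, serviced in parallel.
queries = [("lane_detect", "frame-0"),
           ("obstacle", "frame-0"),
           ("sign_read", "frame-0")]
print(dispatch(queries))
```

In a real autonomous-trucking stack, each worker would map to an accelerator servicing one model, and the dispatcher's throughput and latency would be the "speed" half of the effectiveness measure described above.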
