
Deploying Artificial Intelligence where the ‘Rubber meets the Road’

February 25, 2022


A long-haul truck cruises by you on Interstate 10 in west Texas. You get a brief wave from the driver, who seems oddly unfocused on the road ahead. It’s hot and the road is bumpy. You didn’t realize it, but you just encountered an autonomous truck in development, testing the latest version of its Artificial Intelligence algorithms. The truck is loaded with unseen video cameras, lidar, radar, and infrared sensors. As it travels, it perceives its environment, and its on-board computers make the thousands of small decisions that keep it moving safely to its prescribed destination. The driver is a safety layer that will be removed in future iterations of the design.

The applications loaded in this truck are similar to those in autonomous vehicles in mining, construction, and agriculture, or in military applications on or under the sea. They share a common trait: they need to deploy the most sophisticated Artificial Intelligence, yet that capability must be delivered in unforgiving, harsh environments where typical computer systems cannot operate. This is the emerging challenge: how do you deploy the most capable AI platforms where the rubber meets the road?

To read the full article, click here.

Also in Articles

Designing Transportable, High-Performance AI Systems for the Rugged Edge

June 29, 2022

System design requirements are well understood for high-performance artificial intelligence applications destined to reside in enterprise or cloud data centers. Data centers are specifically designed to provide a clean, cool environment with stable, standard power, and there is no need to worry about vibration or shock loads.

Continue Reading

Scalable Inferencing for Autonomous Trucking

June 23, 2022

Most AI inferencing requirements are outside the data center, at the edge where data is being sourced and inferencing queries are being generated. AI inferencing effectiveness is measured by the speed and accuracy with which answers are provided, and, in many applications, real-time response is required. To meet the objectives of many AI applications, a very large number of inferencing queries must be serviced simultaneously. Often, many different inferencing models answering different types of queries need to be coordinated in parallel, as in the sketch below.
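
The following is a minimal sketch, not taken from the article, of what servicing different query types with different models in parallel can look like. The model names, queries, and thread-pool dispatch are illustrative assumptions, not a description of any particular vehicle stack.

```python
# Minimal sketch (illustrative only): route each query to the model that
# answers its type, and service all queries concurrently so per-query
# latency stays close to real time.
from concurrent.futures import ThreadPoolExecutor

def detect_objects(frame):          # stand-in for a camera-based detector
    return f"objects in {frame}"

def estimate_free_space(scan):      # stand-in for a lidar free-space model
    return f"free space in {scan}"

MODELS = {"camera": detect_objects, "lidar": estimate_free_space}

def serve(queries):
    # queries: list of (model_kind, payload) pairs, all serviced in parallel
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(MODELS[kind], payload) for kind, payload in queries]
        return [f.result() for f in futures]

if __name__ == "__main__":
    print(serve([("camera", "frame_001"), ("lidar", "scan_001")]))
```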

Continue Reading

Edge NVMe Storage for Autonomous Vehicles

February 17, 2022

Adoption of the latest HPC and AI technologies in autonomous vehicles is creating a surge in vehicle real-time data processing capabilities. The growth in the number, speed, and resolution of vehicle sensors, and in the related compute performance in autonomous vehicles, has in turn led to increased demand for high-capacity, high-throughput storage. Effective storage within autonomous vehicles must meet three primary criteria: throughput that matches the capture and processing rates, a rapid data transfer workflow for offloading the captured data, and a rugged environmental design that can operate under autonomous vehicle conditions.
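
As a rough illustration of the first criterion, here is a back-of-envelope sketch of the sustained write throughput an in-vehicle storage tier must absorb. The sensor counts and per-sensor data rates are assumptions chosen for the example, not figures from the article.

```python
# Back-of-envelope sketch (assumed sensor counts and data rates):
# estimate the sustained capture rate the on-vehicle storage must sustain.
cameras = 8          # assumed number of video cameras
camera_mb_s = 350    # assumed MB/s per camera
lidars = 4
lidar_mb_s = 70      # assumed MB/s per lidar unit
radars = 6
radar_mb_s = 2       # assumed MB/s per radar unit

total_mb_s = cameras * camera_mb_s + lidars * lidar_mb_s + radars * radar_mb_s
print(f"Sustained capture rate: {total_mb_s} MB/s "
      f"(~{total_mb_s * 3600 / 1_000_000:.1f} TB per hour of driving)")
```

Under these assumptions the platform must sustain roughly 3 GB/s of writes and offload on the order of 11 TB for every hour of driving, which is why both throughput and a fast offload workflow matter.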

To read the full article, click here.

Continue Reading