Most AI inference workloads live outside the datacenter, at the edge, where data is generated and inference queries originate. Inference effectiveness is measured by the speed and accuracy of the answers returned, and many applications demand real-time response. Meeting those demands means servicing a very large number of inference queries simultaneously, often with many different models, each answering a different type of query, coordinating in parallel.
Autonomous trucking is a prime example. Achieving SAE Level 4 autonomy (no driver required) in trucks demands powerful AI inference hardware that can run many different inference engines simultaneously and keep them coordinated. A long-haul truck leaving Boston with cargo destined for the West Coast will spend roughly two days operating autonomously across the country, saving its owner both cost and time. Driving day and night, it will encounter a range of variable weather and traffic conditions, as well as unexpected events such as animal crossings, accidents, construction detours, or debris in the road.
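To make the coordination requirement concrete, the sketch below shows one way several inference engines might service the same sensor frame in parallel. The engine names and their trivial rule-based logic are purely hypothetical stand-ins; a real Level 4 system would run trained models on dedicated accelerators.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for specialized inference engines. Each takes a
# sensor frame (here, a plain string) and returns its own assessment.
def detect_obstacles(frame):
    return {"engine": "obstacle", "objects": ["deer"] if "deer" in frame else []}

def classify_weather(frame):
    return {"engine": "weather", "condition": "rain" if "rain" in frame else "clear"}

def plan_maneuver(frame):
    return {"engine": "planning", "action": "slow" if "deer" in frame else "cruise"}

ENGINES = [detect_obstacles, classify_weather, plan_maneuver]

def run_inference_cycle(frame):
    """Dispatch one frame to every engine concurrently and collect the
    results so a supervisor can make a coordinated driving decision."""
    with ThreadPoolExecutor(max_workers=len(ENGINES)) as pool:
        futures = [pool.submit(engine, frame) for engine in ENGINES]
        return [f.result() for f in futures]

if __name__ == "__main__":
    results = run_inference_cycle("camera frame: deer near roadside, rain")
    print(results)
```

The key point the example illustrates is structural: every engine sees the same frame at the same time, and the cycle is only as fast as its slowest engine, which is why real-time edge deployments size inference hardware for the whole ensemble rather than for any single model.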
According to the American Trucking Associations (ATA), at current trends the driver shortage could surpass 160,000 by 2030. The ATA estimates that over the next decade the industry will need to recruit nearly a million new drivers to replace those leaving the field because of retirement, burnout, and inadequate compensation and benefits. These are the challenges transportation executives face in securing a robust driver pool.
The driver shortage, however, is not confined to trucking: the scarcity of drivers directly affects the broader manufacturing sector as well.