On-Demand Webinar

Project Trillium: Optimizing ML Performance for any Application

Adding machine learning (ML) capabilities to any new product brings benefits such as time savings, downtime prevention and increased productivity. However, choosing the right solution for the task isn’t always easy: ML processing requirements vary significantly by workload, and there is no one-size-fits-all solution.

The options range from CPUs, which offer moderate performance with general-purpose programmability, to GPUs for faster performance on graphics-intensive applications, MCUs for cost- and power-constrained embedded IoT systems, and the Arm ML processor for the highest performance and efficiency on intensive ML workloads. The choice can be bewildering.
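To make the trade-off concrete: in the Arm NN software stack that sits alongside these processors in Project Trillium, the hardware choice is expressed as an ordered list of backend preferences, and the optimizer places each part of a network on the first backend that supports it. The sketch below is illustrative only and assumes the Arm NN C++ API with the common CpuRef, CpuAcc and GpuAcc backend names; an NPU backend plugin, where available, would simply be added to the same preference list.

    // Minimal sketch, assuming the Arm NN C++ API: build a trivial
    // pass-through network and let the optimizer fall back from GPU
    // acceleration to the Neon-optimized CPU path and finally to the
    // portable reference CPU backend.
    #include <armnn/ArmNN.hpp>

    #include <utility>
    #include <vector>

    int main()
    {
        using namespace armnn;

        // Describe a tiny network: one input connected straight to one output.
        INetworkPtr network = INetwork::Create();
        IConnectableLayer* input  = network->AddInputLayer(0);
        IConnectableLayer* output = network->AddOutputLayer(0);
        input->GetOutputSlot(0).Connect(output->GetInputSlot(0));
        input->GetOutputSlot(0).SetTensorInfo(
            TensorInfo(TensorShape({1, 4}), DataType::Float32));

        // Express the hardware choice as an ordered backend preference list.
        IRuntime::CreationOptions options;
        IRuntimePtr runtime = IRuntime::Create(options);
        std::vector<BackendId> preferences = { "GpuAcc", "CpuAcc", "CpuRef" };

        // The optimizer assigns each layer to the first backend that supports it.
        IOptimizedNetworkPtr optimized =
            Optimize(*network, preferences, runtime->GetDeviceSpec());

        NetworkId networkId;
        runtime->LoadNetwork(networkId, std::move(optimized));
        return 0;
    }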

Join experts from Arm’s machine learning group to gain insights that will help you navigate the path intelligently.

During this webinar you will learn:

  • How advances in processing power and AI algorithms have pushed ML applications, including training and inference, to edge devices
  • How to choose the best ML software and hardware combination to address each use case
  • The features and benefits of Arm’s new Machine Learning (ML) and Object Detection (OD) processors, their applicability to different markets, and the options for incorporating them into differentiated SoC designs