Webinar OnDemand

Project Trillium: Optimizing ML Performance for any Application

As artificial intelligence (AI) expands across more devices, adding the machine learning (ML) capabilities required to support it also brings benefits such as time savings and increased productivity. However, choosing the right solution isn’t always easy: ML processing requirements vary significantly depending on the task, which might include object recognition, face verification, speech recognition, and more. Because workloads vary, there is no one-size-fits-all solution.

For developers, the choice can be bewildering. CPUs offer moderate performance with general-purpose programmability, for example in speech recognition. GPUs provide faster performance for graphics-intensive applications, such as computer vision and image recognition. MCUs work best in cost- and power-constrained embedded IoT systems for distributed intelligence. And the Arm ML processor offers the highest performance and efficiency for intensive ML processing.
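The mapping above can be pictured as a simple decision rule. This is purely an illustrative sketch, not an Arm API: the function name, parameters, and thresholds are all hypothetical, and a real selection would weigh many more factors.

```python
def choose_compute_target(workload: str,
                          power_constrained: bool = False,
                          graphics_intensive: bool = False) -> str:
    """Rough, hypothetical hardware recommendation for an ML workload."""
    if power_constrained:
        return "MCU"   # cost- and power-constrained embedded IoT
    if graphics_intensive:
        return "GPU"   # e.g. computer vision, image recognition
    if workload == "intensive-ml":
        return "ML processor"  # highest performance and efficiency
    return "CPU"       # general-purpose, e.g. speech recognition
```

For example, `choose_compute_target("speech-recognition")` returns `"CPU"`, while a power-constrained IoT node maps to `"MCU"` regardless of workload.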

Whether you’re building complex neural networks or simply looking at AI trends, join experts from Arm’s ML group for key insights into how to navigate the path intelligently.

In this webinar, you’ll learn:

  • How advances in compute processing power and AI algorithms have pushed applications, training, and inference to edge devices.
  • How to choose the best ML software and hardware combination to address each use case.
  • The features and benefits of Arm’s new Machine Learning (ML) and Object Detection (OD) processors, their applicability to different markets, and the options for incorporating them into differentiating SoC designs.