Machine learning (ML) has already proven its value in handling the demanding computational tasks that drive a wide variety of artificial intelligence applications.

From voice assistants to facial recognition, and from smartphones to industrial machinery sensors, ML is becoming increasingly powerful and pervasive. As ML moves out of the cloud and the data center and into a growing variety of edge devices and applications, designers of ML-enabled devices face critical decisions about the right hardware and software to use.
The CPU is already the default processor for AI computing, whether it handles the full workload or partners with a co-processor, such as a GPU or an NPU, for specific tasks.

This guide explores key considerations in choosing the right processor IP mix for ML, helping designers strike an optimal balance of system performance, cost, and product design.