Discover how Arm NN simplifies deploying machine learning on power-efficient devices. It lets developers move neural network workloads between processors on an SoC, reducing the need for processor-specific optimization and improving software portability.
By translating models from existing frameworks such as TensorFlow and Caffe, Arm NN allows them to run efficiently, without modification, across a variety of Arm Cortex CPUs and Arm Mali GPUs.
Watch this webinar to learn:
- How to get up and running with Arm NN on Linux
- How to use Streamline, Arm’s profiling tool, to analyze application performance