This slide shows an example of how to run a model on an i.MX RT crossover MCU. First, the model must be converted into an embedded format such as TensorFlow Lite, CMSIS-NN, or Glow, using PC tools provided by the framework developers. Some models and tool sets also allow the designer to apply optimizations such as quantization and pruning, which reduce memory requirements and speed up inference. The converted model can then be loaded onto an i.MX RT device, where the inference engine provided by eIQ™ software runs inference on user input directly on the device.
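To make the quantization step above concrete, here is a minimal pure-Python sketch of post-training affine (scale/zero-point) int8 quantization. This is illustrative only: in practice the conversion tools (for example, the TensorFlow Lite converter used with eIQ) perform this internally, and the function names below are hypothetical, not part of any NXP or TensorFlow API.

```python
# Illustrative sketch of affine int8 quantization, the kind of optimization
# applied during model conversion to shrink weights from 32-bit floats to
# 8-bit integers. Real toolchains (e.g. the TensorFlow Lite converter)
# handle this automatically; names here are for demonstration only.

def quantize_int8(weights):
    """Map float weights onto int8 values with a scale and zero-point."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0          # one int8 step in float units
    zero_point = round(-128 - lo / scale)     # int8 value representing 0.0
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the int8 representation."""
    return [(v - zero_point) * scale for v in q]

weights = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, scale, zp = quantize_int8(weights)
restored = dequantize(q, scale, zp)
```

Each weight now occupies one byte instead of four, a 4x reduction in model storage, at the cost of a small rounding error bounded by the scale. On MCUs the integer representation also lets inference use faster integer arithmetic.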