A Methodology to Adapt Neural Networks to Constrained Devices at the Topology Level
Abstract
Artificial Intelligence is now ubiquitous, as nearly every application domain has found some use for it. The high computational complexity involved in its deployment has led to strong research activity on optimizing its integration into embedded systems. Research on efficient implementations of CNNs on resource-constrained devices (e.g., CPUs, FPGAs) largely focuses on hardware-based optimizations such as pruning, quantization, or hardware accelerators. However, most of the performance improvements that yield efficient solutions in terms of memory, complexity, and energy are located at the NN topology level, prior to any implementation step. This paper introduces a methodology called ANN2T (Artificial Neural Network to Target), which adapts a pre-trained deep neural network to a designated device under given optimization constraints. ANN2T applies a set of simplifications and/or transformations that progressively modify the deep neural network layers until the optimization target is met. Experimental results obtained on a microcontroller device show that ANN2T produces valuable trade-offs: it achieved up to 33% MACC and 37% memory footprint reductions with no accuracy loss on the ResNet-18 topology over the CIFAR-10 dataset. This fully automated methodology could be generalized to targets such as CPUs, GPUs, or FPGAs.
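To make the iterative, constraint-driven adaptation idea concrete, the following is a minimal Python sketch of such a loop. It is an illustration only, not the ANN2T implementation: the helper names (`candidate_transforms`, `estimate_macc`, `estimate_memory`, `evaluate`) and the greedy selection strategy are assumptions introduced for the example.

```python
# Illustrative sketch of a topology-level adaptation loop: candidate layer
# simplifications/transformations are applied until the estimated MACC and
# memory footprint fit the target device budget, while accuracy stays above
# a user-defined floor. All helper callables are hypothetical placeholders.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Budget:
    max_macc: int        # multiply-accumulate operations allowed on the target
    max_memory: int      # memory footprint allowed on the target (bytes)
    min_accuracy: float  # accuracy the adapted model must still reach


def adapt_topology(model,
                   budget: Budget,
                   candidate_transforms: List[Callable],
                   estimate_macc: Callable,
                   estimate_memory: Callable,
                   evaluate: Callable):
    """Greedily apply layer transformations until the model fits the device
    budget or no admissible transformation remains."""
    while (estimate_macc(model) > budget.max_macc
           or estimate_memory(model) > budget.max_memory):
        best = None
        for transform in candidate_transforms:
            candidate = transform(model)  # e.g. merge, factorize, or remove a layer
            if evaluate(candidate) < budget.min_accuracy:
                continue                  # reject transforms that cost too much accuracy
            cost = estimate_macc(candidate) + estimate_memory(candidate)
            if best is None or cost < best[0]:
                best = (cost, candidate)
        if best is None:                  # no transformation satisfies the accuracy floor
            break
        model = best[1]
    return model
```

In this reading, the device dependence enters only through the budget and the cost estimators, which is what would allow the same loop to be retargeted to CPUs, GPUs, or FPGAs.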