Scalable supervised machine learning apparatus for computationally constrained devices
Abstract
Computationally constrained devices are devices with limited resources and computational power, built for specific tasks. At the same time, recent advances in machine learning, e.g., deep learning or hierarchical and cascade compositions of machines, that make it possible to accurately predict or classify values of interest such as quality, trust, etc., require high computational power. Such complex machine learning configurations are often made possible by advances in processing units, e.g., Graphical Processing Units (GPUs). Computationally constrained devices can also benefit from such advances, and an immediate question arises: how? This paper is devoted to answering that question. Our approach proposes scalable representations of ‘trained’ models through the synthesis of logic circuits. Furthermore, we show how a cascade machine learning composition can be achieved using ‘traditional’ digital electronic devices. To validate our approach, we present a set of preliminary experimental studies showing that different circuit apparatus clearly outperform current machine learning software implementations in terms of processing speed and resource consumption.
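
To illustrate the idea of synthesizing a logic circuit from a ‘trained’ model, the following minimal Python sketch assumes the trained model is a small decision tree (the abstract does not fix a specific model family) and flattens its decision function into a Verilog-style conditional expression that could be mapped to comparators and multiplexers. The toy dataset, the helper to_expr, and the emitted expression format are purely illustrative assumptions, not the apparatus described here.

    # Illustrative sketch only: a small decision tree turned into a
    # combinational-logic expression (comparators feeding multiplexers).
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical training data: a toy binary classification task.
    X = np.array([[0.2, 0.7], [0.4, 0.3], [0.8, 0.6], [0.9, 0.1]])
    y = np.array([0, 0, 1, 1])
    clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    t = clf.tree_

    def to_expr(node: int = 0) -> str:
        """Walk the fitted tree and emit a Verilog-style conditional expression.

        Each internal node becomes a comparator driving a multiplexer; each
        leaf becomes a constant class label. In hardware, thresholds would
        typically be quantized to fixed point before synthesis.
        """
        if t.children_left[node] == -1:            # leaf node: emit class label
            return str(int(np.argmax(t.value[node])))
        f, thr = t.feature[node], t.threshold[node]
        left = to_expr(t.children_left[node])      # branch taken when x[f] <= thr
        right = to_expr(t.children_right[node])
        return f"((x{f} <= {thr:.3f}) ? {left} : {right})"

    print(to_expr())   # e.g. ((x0 <= 0.600) ? 0 : 1)

Cascading several such synthesized predictors, where the output of one stage gates or feeds the next, could then be wired with ordinary digital components, which is the kind of composition the approach targets.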