Conference paper, 2018

Learning Time/Memory-Efficient Deep Architectures with Budgeted Super Networks

Abstract

We focus on the problem of discovering neural network architectures that are efficient in terms of both prediction quality and cost. For instance, our approach can solve tasks such as: learn a neural network that predicts well in less than 100 milliseconds, or learn an efficient model that fits in 50 MB of memory. Our contribution is a novel family of models called Budgeted Super Networks (BSN). They are learned with gradient descent techniques applied to a budgeted learning objective function that integrates a maximum authorized cost, while making no assumption about the nature of this cost. We present a set of experiments on computer vision problems and analyze the ability of our technique to deal with three different costs: the computation cost, the memory consumption cost, and a distributed computation cost. In particular, we show that our model can discover neural network architectures that achieve better accuracy than the ResNet and Convolutional Neural Fabrics architectures on CIFAR-10 and CIFAR-100, at a lower cost.
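To make the budgeted objective concrete, here is a minimal PyTorch sketch of a penalized loss of the form L + λ · max(0, C − C_max). It is not the paper's estimator: the abstract states that no differentiability assumption is made on the cost, whereas this sketch uses a differentiable expected-cost surrogate over candidate edges for illustration; the names budgeted_objective, edge_probs, and edge_costs are assumptions introduced here.

    import torch

    def budgeted_objective(task_loss, edge_probs, edge_costs, max_cost, lam=1.0):
        # Differentiable surrogate for the architecture cost: the expected
        # cost of the selected edges under the current selection probabilities.
        expected_cost = (edge_probs * edge_costs).sum()
        # Hinge penalty: zero while the budget is respected, linear beyond it.
        over_budget = torch.clamp(expected_cost - max_cost, min=0.0)
        return task_loss + lam * over_budget

    # Toy usage: 5 candidate edges with hypothetical per-edge costs
    # (e.g. FLOPs, parameter bytes, or a distributed-computation cost).
    logits = torch.zeros(5, requires_grad=True)
    edge_costs = torch.tensor([10.0, 20.0, 5.0, 40.0, 25.0])
    edge_probs = torch.sigmoid(logits)          # expected cost here is 50.0
    task_loss = torch.tensor(1.0)               # stands in for the prediction loss
    loss = budgeted_objective(task_loss, edge_probs, edge_costs, max_cost=40.0)
    loss.backward()                             # budget exceeded: gradients push
                                                # edge probabilities downward

With λ large enough, the penalty drives the selection probabilities toward sub-networks whose cost fits the budget, which is the trade-off the abstract describes between prediction quality and a maximum authorized cost.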

Dates and versions

hal-03276799 , version 1 (02-07-2021)

Identifiers

Cite

Tom Veniat, Ludovic Denoyer. Learning Time/Memory-Efficient Deep Architectures with Budgeted Super Networks. 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2018), June 2018, Salt Lake City, United States. pp. 3492-3500. ⟨10.1109/CVPR.2018.00368⟩. ⟨hal-03276799⟩