Now that I can see, I can improve: Enabling data-driven finetuning of
CNNs on the edge
File(s)
2006.08554v1.pdf (714.53 KB)
Working paper
Author(s)
Rajagopal, Aditya
Bouganis, Christos-Savvas
Type
Working Paper
Abstract
In today's world, a vast amount of data is being generated by edge devices
that can be used as valuable training data to improve the performance of
machine learning algorithms in terms of the achieved accuracy or to reduce the
compute requirements of the model. However, due to user data privacy concerns
as well as storage and communication bandwidth limitations, this data cannot be
moved from the device to the data centre for further improvement of the model
and subsequent deployment. As such, there is a need for increased edge
intelligence, where deployed models can be fine-tuned on the edge, leading
to improved accuracy and/or a reduction in the model's workload as well as its
memory and power footprint. In the case of Convolutional Neural Networks (CNNs), both
the weights of the network as well as its topology can be tuned to adapt to the
data that it processes. This paper provides a first step towards enabling CNN
finetuning on an edge device based on structured pruning. It explores the
performance gains and costs of doing so and presents an extensible open-source
framework that allows the deployment of such approaches on a wide range of
network architectures and devices. The results show that, on average, data-aware
pruning with retraining yields a 10.2 percentage point (pp) accuracy improvement
across a wide range of subsets, networks and pruning levels, with a maximum
improvement of 42.0pp over pruning and retraining performed agnostically to the
data being processed by the network.
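The abstract's core technique, structured pruning, removes whole convolutional filters ranked by some saliency score. The following is a minimal illustrative sketch of one common variant (L1-norm filter ranking); it is an assumption-laden toy, not the paper's actual open-source framework, and the function name `prune_filters` and the L1 criterion are choices made here for illustration.

```python
import numpy as np

def prune_filters(weights, keep_ratio):
    """Structured-pruning sketch: rank the output filters of a conv layer
    by L1 norm and keep only the strongest fraction.

    weights    : array of shape (out_ch, in_ch, k, k)
    keep_ratio : fraction of output filters to retain, in (0, 1]

    Illustrative only -- a data-aware scheme like the paper's would score
    filters using the data actually processed on the device.
    """
    out_ch = weights.shape[0]
    n_keep = max(1, int(round(out_ch * keep_ratio)))
    # L1 norm of each output filter, used as a simple saliency score
    scores = np.abs(weights).reshape(out_ch, -1).sum(axis=1)
    # Indices of the n_keep highest-scoring filters, in original order
    keep = np.sort(np.argsort(scores)[-n_keep:])
    return weights[keep], keep

# Toy usage: prune half the filters of a 16-filter 3x3 conv layer
rng = np.random.default_rng(0)
w = rng.standard_normal((16, 3, 3, 3))
pruned, kept = prune_filters(w, keep_ratio=0.5)
print(pruned.shape)  # (8, 3, 3, 3)
```

In practice the pruned network would then be retrained (fine-tuned) on-device so the remaining filters adapt to the local data, which is the step the paper reports the 10.2pp average accuracy gain from.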
Date Issued
2020-06-15
Citation
2020
URI
http://hdl.handle.net/10044/1/80078
URL
http://arxiv.org/abs/2006.08554v1
Publisher
arXiv
Copyright Statement
© 2020 The Author(s)
Identifier
http://arxiv.org/abs/2006.08554v1
Subjects
cs.CV
cs.LG
Notes
Accepted for publication at CVPR2020 workshop - Efficient Deep Learning for Computer Vision
Publication Status
Published