Deep Neural Network Approximation for Custom Hardware: Where We've Been, Where We're Going
OA Location
Author(s)
Type
Journal Article
Abstract
Deep neural networks have proven to be particularly effective in visual and audio recognition tasks. Existing models tend to be computationally expensive and memory intensive, however, and so methods for hardware-oriented approximation have become a hot topic. Research has shown that custom hardware-based neural network accelerators can surpass their general-purpose processor equivalents in terms of both throughput and energy efficiency. Application-tailored accelerators, when co-designed with approximation-based network training methods, transform large, dense and computationally expensive networks into small, sparse and hardware-efficient alternatives, increasing the feasibility of network deployment. In this article, we provide a comprehensive evaluation of approximation methods for high-performance network inference along with in-depth discussion of their effectiveness for custom hardware implementation. We also include proposals for future research based on a thorough analysis of current trends. This article represents the first survey providing detailed comparisons of custom hardware accelerators featuring approximation for both convolutional and recurrent neural networks, through which we hope to inspire exciting new developments in the field.
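As a rough illustration of the kind of hardware-oriented approximation the survey evaluates (not drawn from the article itself), the minimal sketch below applies two common post-training approximations to a weight matrix: magnitude-based pruning and uniform fixed-point quantisation. The function names, bit widths and sparsity target are illustrative assumptions only.

```python
import numpy as np

def quantize_fixed_point(w, total_bits=8, frac_bits=6):
    """Round weights to signed fixed-point with the given word and fraction lengths."""
    scale = 2 ** frac_bits
    q_min = -(2 ** (total_bits - 1))
    q_max = 2 ** (total_bits - 1) - 1
    return np.clip(np.round(w * scale), q_min, q_max) / scale

def prune_by_magnitude(w, sparsity=0.75):
    """Zero out the smallest-magnitude weights until the target sparsity is reached."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

# Example: a dense layer's weights become sparse, low-precision values that map
# cheaply onto fixed-point multipliers and compressed storage in custom hardware.
weights = np.random.randn(256, 256).astype(np.float32)
approx = quantize_fixed_point(prune_by_magnitude(weights))
print(f"sparsity: {np.mean(approx == 0):.2f}, unique levels: {np.unique(approx).size}")
```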
Date Issued
2019-05-31
Date Acceptance
2019-01-16
Citation
ACM Computing Surveys, 2019, 52(2), pp. 40:1-40:39
ISSN
0360-0300
Publisher
Association for Computing Machinery
Start Page
40:1
End Page
40:39
Journal / Book Title
ACM Computing Surveys
Volume
52
Issue
2
Copyright Statement
© 2019 Association for Computing Machinery
Sponsor
Engineering & Physical Science Research Council (EPSRC)
Commission of the European Communities
Royal Academy of Engineering
Imagination Technologies Ltd
Identifier
https://arxiv.org/abs/1901.06955
Grant Number
11908 (EP/K034448/1)
EP/P010040/1
516075101 (EP/N031768/1)
EP/I012036/1
671653
Prof Constantinides Chair
Subjects
Science & Technology
Technology
Computer Science, Theory & Methods
Computer Science
FPGAs
ASICs
approximation methods
convolutional neural networks
recurrent neural networks
cs.CV
08 Information and Computing Sciences
Information Systems
Publication Status
Published
Article Number
40
Date Publish Online
2019-05-31