Stochastic learning under random reshuffling with constant step-sizes
Author(s)
Ying, Bicheng
Yuan, Kun
Vlaski, Stefan
Sayed, Ali H
Type
Journal Article
Abstract
In empirical risk optimization, it has been observed that stochastic gradient implementations that rely on random reshuffling of the data achieve better performance than implementations that rely on sampling the data uniformly. Recent works have pursued justifications for this behavior by examining the convergence rate of the learning process under diminishing step-sizes. This work focuses on the constant step-size case and strongly convex loss functions. In this case, convergence is guaranteed to a small neighborhood of the optimizer, albeit at a linear rate. The analysis establishes analytically that random reshuffling outperforms uniform sampling by showing explicitly that the iterates approach a smaller neighborhood of size O(μ²) around the minimizer, rather than O(μ). Furthermore, we derive an analytical expression for the steady-state mean-square-error performance of the algorithm, which helps clarify, in greater detail, the differences between sampling with and without replacement. We also explain the periodic behavior that is observed in random reshuffling implementations.
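Illustration (not from the paper): the following minimal Python sketch contrasts the two sampling schemes described in the abstract, i.e., constant step-size stochastic gradient descent with sampling with replacement versus random reshuffling, on a simple strongly convex least-squares risk. The data, dimensions, step-size value, and number of epochs are illustrative assumptions, not the paper's experimental setup.

# Minimal sketch under stated assumptions; quantities are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, d = 200, 5                                  # number of samples, feature dimension
X = rng.standard_normal((N, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(N)
w_star = np.linalg.solve(X.T @ X, X.T @ y)     # minimizer of the empirical least-squares risk

mu = 0.01                                      # constant step-size
epochs = 200

def run(sampler):
    """Constant step-size SGD; `sampler` yields the data indices for one epoch."""
    w = np.zeros(d)
    for _ in range(epochs):
        for i in sampler():
            grad = (X[i] @ w - y[i]) * X[i]    # gradient of 0.5*(x_i^T w - y_i)^2
            w = w - mu * grad
    return np.sum((w - w_star) ** 2)           # squared distance to the minimizer

# Sampling with replacement (uniform) vs. without replacement (random reshuffling).
uniform = lambda: rng.integers(0, N, size=N)
reshuffle = lambda: rng.permutation(N)

print("uniform sampling   squared deviation:", run(uniform))    # neighborhood of size O(mu), per the abstract
print("random reshuffling squared deviation:", run(reshuffle))  # neighborhood of size O(mu^2), per the abstract

Under these assumptions, the reshuffling run is expected to settle into a noticeably smaller neighborhood of the minimizer than the uniform-sampling run, which is the qualitative behavior the paper establishes analytically.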
Date Issued
2019-01-15
Date Acceptance
2018-10-08
Citation
IEEE Transactions on Signal Processing, 2019, 67 (2), pp. 474-489
ISSN
1053-587X
Publisher
Institute of Electrical and Electronics Engineers
Start Page
474
End Page
489
Journal / Book Title
IEEE Transactions on Signal Processing
Volume
67
Issue
2
Copyright Statement
© 2018 IEEE.
Identifier
https://www.webofscience.com/api/gateway?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000454244300001&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=a2bf6146997ec60c407a63945d4e92bb
Subjects
convergence analysis
Engineering
Engineering, Electrical & Electronic
mean-square performance
mean-square-error expression
Random reshuffling
Science & Technology
stochastic gradient descent
Technology
Publication Status
Published
Date Published Online
2018-10-29