Augmenting Softmax information for selective classification with out-of-distribution data
File(s)
accv_sub.pdf (833.58 KB)
Accepted version
Author(s)
Xia, Guoxuan
Bouganis, Christos-Savvas
Type
Conference Paper
Abstract
Detecting out-of-distribution (OOD) data is a task that is receiving an increasing amount of research attention in the domain of deep learning for computer vision. However, the performance of detection methods is generally evaluated on the task in isolation, rather than also considering potential downstream tasks in tandem. In this work, we examine selective classification in the presence of OOD data (SCOD). That is to say, the motivation for detecting OOD samples is to reject them so that their impact on the quality of predictions is reduced. We show that, under this task specification, existing post-hoc methods perform quite differently compared to when evaluated only on OOD detection. This is because it is no longer an issue to conflate in-distribution (ID) data with OOD data if the ID data is going to be misclassified. However, the conflation within ID data of correct and incorrect predictions becomes undesirable. We also propose a novel method for SCOD, Softmax Information Retaining Combination (SIRC), that augments softmax-based confidence scores with feature-agnostic information such that their ability to identify OOD samples is improved without sacrificing separation between correct and incorrect ID predictions. Experiments on a wide variety of ImageNet-scale datasets and convolutional neural network architectures show that SIRC is able to consistently match or outperform the baseline for SCOD, whilst existing OOD detection methods fail to do so. Code is available at https://github.com/Guoxoug/SIRC.
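For illustration, below is a minimal sketch of the kind of score combination the abstract describes: a softmax-based confidence score is augmented with a second, softmax-independent score, so that low values of the latter push the combined score down. The combination function, the parameters a and b, and the use of a feature L1 norm as the secondary score are assumptions made for this sketch, not necessarily the paper's exact formulation; see the paper and the linked repository for the authors' definition.

```python
import numpy as np

def combined_score(s1, s2, a, b, s1_max=1.0):
    """Combine a softmax-based confidence score s1 (e.g. maximum softmax
    probability, bounded above by s1_max) with a secondary score s2 that
    does not come from the softmax (e.g. the L1 norm of the last-layer
    feature vector). The sigmoid-like factor inflates the confidence
    deficit (s1_max - s1) when s2 is low, injecting OOD-sensitive
    information. Higher output means keep the prediction; lower means
    reject the input."""
    return -(s1_max - s1) * (1.0 + np.exp(-b * (s2 - a)))

# Hypothetical usage on a batch of test inputs.
rng = np.random.default_rng(0)
msp = rng.uniform(0.4, 1.0, size=8)          # max softmax probability
feat_norm = rng.uniform(20.0, 60.0, size=8)  # L1 norm of feature vector
# In practice a and b would be set from ID statistics of s2
# (e.g. its mean and standard deviation on held-out ID data).
scores = combined_score(msp, feat_norm, a=30.0, b=0.25)
rejected = scores < np.quantile(scores, 0.2)  # abstain on the lowest 20%
print(scores, rejected)
```

The useful property of this form is its one-sidedness: as s2 grows large the extra factor approaches 1 and the combined score reduces to a monotone function of s1 alone, so the softmax's separation of correct from incorrect ID predictions is largely retained, while inputs with anomalously low s2 are penalised.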
Date Issued
2023-02-25
Date Acceptance
2023-02-01
Citation
Computer Vision – ACCV 2022: 16th Asian Conference on Computer Vision, Macao, China, December 4–8, 2022, Proceedings, Part VI, 2023, vol. 13846, pp. 664–680
URI
http://hdl.handle.net/10044/1/104862
URL
https://link.springer.com/chapter/10.1007/978-3-031-26351-4_40
DOI
https://doi.org/10.1007/978-3-031-26351-4_40
ISBN
9783031263507
ISSN
0302-9743
Publisher
Springer Nature Switzerland
Start Page
664
End Page
680
Journal / Book Title
Computer Vision – ACCV 2022: 16th Asian Conference on Computer Vision, Macao, China, December 4–8, 2022, Proceedings, Part VI
Volume
13846
Copyright Statement
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG. This version of the contribution has been accepted for publication, after peer review (when applicable) but is not the Version of Record and does not reflect post-acceptance improvements, or any corrections. The Version of Record is available online at: http://dx.doi.org/10.1007/978-3-031-26351-4_40. Use of this Accepted Version is subject to the publisher’s Accepted Manuscript terms of use https://www.springernature.com/gp/open-research/policies/accepted-manuscript-terms
Identifier
https://link.springer.com/chapter/10.1007/978-3-031-26351-4_40
Source
16th Asian Conference on Computer Vision
Publication Status
Published
Start Date
2022-12-04
Finish Date
2022-12-08
Coverage Spatial
Macao, China
Date Publish Online
2023-02-26