Recycling privileged learning and distribution matching for fairness
File(s)
QuaSha17.pdf (810.94 KB)
Published version
Author(s)
Quadrianto, Novi
Sharmanska, Viktoriia
Type
Conference Paper
Abstract
Equipping machine learning models with ethical and legal constraints is a serious issue; without this, the future of machine learning is at risk. This paper takes a step forward in this direction and focuses on ensuring machine learning models deliver fair decisions. In legal scholarship, the notion of fairness itself is evolving and multi-faceted. We set an overarching goal to develop a unified machine learning framework that is able to handle any definition of fairness, combinations of definitions, and also new definitions that might be stipulated in the future. To achieve our goal, we recycle two well-established machine learning techniques, privileged learning and distribution matching, and harmonize them to satisfy multi-faceted fairness definitions. We consider protected characteristics such as race and gender as privileged information that is available at training but not at test time; this accelerates model training and delivers fairness through unawareness. Further, we cast demographic parity, equalized odds, and equality of opportunity as a classical two-sample problem of conditional distributions, which can be solved in a general form by using distance measures in Hilbert space. We show several existing models are special cases of ours. Finally, we advocate returning the Pareto frontier of multi-objective minimization of error and unfairness in predictions. This lets decision makers select an operating point and be accountable for it.
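The distribution-matching idea in the abstract, casting a fairness criterion such as demographic parity as a two-sample problem on conditional distributions, is commonly solved with a kernel distance in a reproducing kernel Hilbert space. The sketch below is a minimal illustration of that general recipe, not the paper's exact formulation: it estimates the squared maximum mean discrepancy (MMD) between classifier scores of two protected groups with a Gaussian kernel, where the kernel choice, bandwidth `sigma`, and toy data are all assumptions for the example.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel between two 1-D arrays of scores.
    d = x[:, None] - y[None, :]
    return np.exp(-(d ** 2) / (2 * sigma ** 2))

def mmd2(scores_a, scores_b, sigma=1.0):
    # Biased (V-statistic) estimate of the squared maximum mean
    # discrepancy between the score distributions of two groups;
    # it is zero when the two distributions match.
    k_aa = gaussian_kernel(scores_a, scores_a, sigma).mean()
    k_bb = gaussian_kernel(scores_b, scores_b, sigma).mean()
    k_ab = gaussian_kernel(scores_a, scores_b, sigma).mean()
    return k_aa + k_bb - 2 * k_ab

# Toy check: scores drawn from matched distributions yield a small
# MMD^2, while a shifted group (an "unfair" score gap) yields a large one.
rng = np.random.default_rng(0)
same = mmd2(rng.normal(size=500), rng.normal(size=500))
diff = mmd2(rng.normal(size=500), rng.normal(loc=2.0, size=500))
```

In a training loop, a term like `mmd2` would be added to the classification loss as an unfairness penalty, and sweeping its weight traces out the error-versus-unfairness Pareto frontier the abstract advocates returning to the decision maker.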
Date Issued
2017-12-04
Date Acceptance
2017-12-04
Citation
Advances in Neural Information Processing Systems 30 (NIPS 2017), 2017
URI
http://hdl.handle.net/10044/1/64239
Publisher
Neural Information Processing Systems Foundation, Inc.
Journal / Book Title
Advances in Neural Information Processing Systems 30 (NIPS 2017)
Copyright Statement
© 2017 The Author(s)
Identifier
http://papers.nips.cc/paper/6670-recycling-privileged-learning-and-distribution-matching-for-fairness.pdf
Source
Advances in Neural Information Processing Systems (NIPS)
Subjects
Machine Learning
Pattern Recognition, Automated
Publication Status
Published
Start Date
2017-12-04
Finish Date
2017-12-09
Coverage Spatial
Long Beach, USA
Imperial College London