The use of standard datasets, models, and algorithms often incorporates and exacerbates social biases in machine-learning and AI systems. Context-aware design and implementation of automated decision-making algorithms is therefore an important and necessary venture. Our group aims to mitigate social biases in AI, with a focus on providing feasible, debiased alternatives to currently used models.
People
Faculty -
Elisa Celis,
Nisheeth Vishnoi
Contributors -
Chris Hays, Lingxiao Huang, Vijay Keswani, Anay Mehrotra, Damian Straszak, Yi-Chern Tan, Julia Wei
Demos
Ranking
Prototype for gender-balanced rankings with applications to search engines, newsfeeds, and recommendation systems.
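As a hedged illustration of the idea behind such a ranking prototype, the sketch below greedily re-ranks scored items so that, at every prefix length k, no group occupies more than a fixed fraction of the top-k slots. The function name, the `max_frac` parameter, and the greedy strategy are simplifications for illustration, not the prototype's actual algorithm.

```python
from math import ceil

def fair_rerank(items, max_frac=0.6):
    """Greedy constrained re-ranking (illustrative sketch).

    items: list of (score, group) pairs; higher score = more relevant.
    At every prefix length k, no group may occupy more than
    ceil(max_frac * k) of the top-k slots.
    """
    pool = sorted(items, key=lambda x: -x[0])  # best score first
    ranking, counts = [], {}
    while pool:
        k = len(ranking) + 1
        cap = ceil(max_frac * k)
        for i, (score, group) in enumerate(pool):
            if counts.get(group, 0) + 1 <= cap:
                ranking.append(pool.pop(i))
                counts[group] = counts.get(group, 0) + 1
                break
        else:
            # constraint infeasible: fall back to best remaining item
            score, group = pool.pop(0)
            ranking.append((score, group))
            counts[group] = counts.get(group, 0) + 1
    return ranking
```

For example, with three high-scoring items from one group and two from another, the greedy pass interleaves the lower-scoring group into the top of the list rather than appending it at the end.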
Classification
Python notebook for a meta fair-classification algorithm that supports a range of fairness metrics; deployed in IBM's AI Fairness 360 (AIF360) toolkit.
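The meta-algorithm itself is beyond a short snippet, but one metric it can target is the statistical rate (also known as the disparate impact ratio). The helper below is a hedged, self-contained sketch of that metric; the function name and interface are illustrative, not AIF360's API.

```python
def statistical_rate(y_pred, groups):
    """Ratio of the lowest to the highest positive-prediction rate
    across groups; 1.0 means the classifier treats groups equally.
    (Illustrative helper, not part of the deployed notebook.)"""
    rates = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        rates[g] = sum(y_pred[i] for i in idx) / len(idx)
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi > 0 else 1.0
```

A fair classifier in this sense is one whose statistical rate stays above a chosen threshold (0.8 is a common rule of thumb) while accuracy is maximized.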
Online Advertising
Prototype of a gender-balanced, auction-based online advertising platform.
Debiasing Data
Python notebook for learning and evaluating unbiased maximum-entropy distributions from biased datasets.
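One classical route to such a distribution is iterative proportional fitting (IPF): among all distributions with the specified (debiased) marginals, it finds the one closest in KL divergence to the biased data, which is the corresponding maximum-entropy solution. The 2x2 version below is a hedged sketch under that simplification; table shape, names, and targets are illustrative.

```python
def ipf(counts, row_targets, col_targets, iters=100):
    """Iterative proportional fitting on a 2x2 contingency table
    (illustrative sketch). Rescales `counts` so its row and column
    marginals match the debiased targets while staying as close as
    possible, in KL divergence, to the biased data."""
    t = [row[:] for row in counts]
    for _ in range(iters):
        for i in range(2):  # match row marginals
            s = sum(t[i])
            if s > 0:
                t[i] = [x * row_targets[i] / s for x in t[i]]
        for j in range(2):  # match column marginals
            s = t[0][j] + t[1][j]
            if s > 0:
                for i in range(2):
                    t[i][j] *= col_targets[j] / s
    return t
```

For example, a table whose rows over-represent one group can be rescaled to uniform marginals while preserving the data's correlation structure as far as the constraints allow.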