Standard datasets, models, and algorithms often incorporate and exacerbate social biases in systems that rely on machine learning and artificial intelligence. Context-aware design and implementation of automated decision-making algorithms is therefore an important and necessary undertaking. Our group aims to mitigate social biases in AI, with a focus on providing feasible debiased alternatives to currently used models.

People

Faculty - Elisa Celis, Nisheeth Vishnoi
Contributors - Chris Hays, Lingxiao Huang, Vijay Keswani, Anay Mehrotra, Damian Straszak, Yi-Chern Tan, Julia Wei

Demos

Data Summarization
Prototype for gender-balanced image search.

Polarization
Prototype for politically-balanced and personalized newsfeeds. [Video]

Multiwinner Voting
Tool for electing a committee that is balanced across different attribute types. Deployed in Swiss elections. [Video]
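
The selection problem behind this demo can be illustrated with a small, self-contained sketch (not the deployed system): elect the highest-scoring committee subject to lower and upper bounds on the number of seats per attribute group. All candidate data below is hypothetical, and exhaustive search is used only because the example is tiny.

    # Sketch: constrained multiwinner selection by exhaustive search (toy scale).
    from itertools import combinations

    # Hypothetical candidates: (name, attribute group, approval score).
    candidates = [
        ("A", "g1", 9), ("B", "g1", 7), ("C", "g1", 6),
        ("D", "g2", 8), ("E", "g2", 5), ("F", "g2", 4),
    ]
    k = 3
    bounds = {"g1": (1, 2), "g2": (1, 2)}  # (min, max) seats per group

    def feasible(committee):
        # Every group's seat count must lie within its bounds.
        for group, (lo, hi) in bounds.items():
            seats = sum(1 for _, g, _ in committee if g == group)
            if not lo <= seats <= hi:
                return False
        return True

    best = max(
        (c for c in combinations(candidates, k) if feasible(c)),
        key=lambda c: sum(x[2] for x in c),
    )
    print([name for name, _, _ in best])  # -> ['A', 'B', 'D']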

Ranking
Prototype for gender-balanced rankings with applications to search engines, newsfeeds, and recommendation systems.
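
One way to realize prefix-balance constraints of this kind is a greedy re-ranker: at each position, place the most relevant remaining item whose group has not yet exhausted its cap for the current prefix. The following is a minimal sketch under an assumed cap of ceil(alpha * t) positions per group in every prefix of length t, not the prototype's exact algorithm.

    import math

    def fair_rerank(items, alpha=0.5):
        # items: list of (id, group, relevance); higher relevance is better.
        remaining = sorted(items, key=lambda x: -x[2])
        ranking, counts = [], {}
        while remaining:
            cap = math.ceil(alpha * (len(ranking) + 1))
            for i, (id_, group, rel) in enumerate(remaining):
                if counts.get(group, 0) < cap:
                    ranking.append(remaining.pop(i))
                    counts[group] = counts.get(group, 0) + 1
                    break
            else:
                # No placement satisfies the caps; fall back to the best item.
                id_, group, rel = remaining.pop(0)
                ranking.append((id_, group, rel))
                counts[group] = counts.get(group, 0) + 1
        return ranking

    # Hypothetical usage: one group dominates the raw relevance scores.
    items = [("a", "m", 0.9), ("b", "m", 0.8), ("c", "m", 0.7),
             ("d", "f", 0.6), ("e", "f", 0.5), ("f", "f", 0.4)]
    print([i[0] for i in fair_rerank(items)])  # -> ['a', 'd', 'b', 'e', 'c', 'f']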

Classification
Python notebook for a meta fair classification algorithm that works with various fairness metrics. Deployed in IBM's AIF360 toolkit.
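
The algorithm is exposed in AIF360 as MetaFairClassifier; the snippet below sketches one plausible invocation, assuming AIF360's documented interface (parameter names and dataset loaders may differ across versions, and the raw Adult data files must be downloaded separately).

    from aif360.datasets import AdultDataset
    from aif360.algorithms.inprocessing import MetaFairClassifier

    # UCI Adult dataset with 'sex' as the protected attribute.
    dataset = AdultDataset()
    train, test = dataset.split([0.7], shuffle=True)

    # tau tunes how strictly the fairness constraint is enforced (1.0 = strictest);
    # type='fdr' uses a false-discovery-rate constraint, 'sr' a statistical-rate one.
    clf = MetaFairClassifier(tau=0.8, sensitive_attr="sex", type="fdr")
    clf.fit(train)

    preds = clf.predict(test)  # a new dataset object carrying predicted labels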

Online Advertising
Prototype for a gender-balanced, auction-based online advertising platform.

Debiasing Data
Python notebook for learning and evaluating unbiased maximum-entropy distributions from biased datasets.
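
The core idea can be illustrated in a few lines: a maximum-entropy distribution whose expected features match prescribed targets c has the exponential form p(x) proportional to exp(lambda . f(x)), and lambda is found by minimizing the convex dual log Z(lambda) - lambda . c. The toy sketch below (not the notebook's code) fixes debiased marginals over a tiny binary domain.

    import numpy as np
    from itertools import product
    from scipy.optimize import minimize
    from scipy.special import logsumexp

    # Domain: all binary vectors of length d; with f(x) = x, the constraints
    # pin down the coordinate-wise marginals of the learned distribution.
    d = 3
    X = np.array(list(product([0, 1], repeat=d)), dtype=float)  # shape (2^d, d)
    targets = np.array([0.5, 0.4, 0.6])  # hypothetical debiased marginals c

    def dual(lmbda):
        # Convex dual: log-partition log Z(lambda) minus lambda . c.
        return logsumexp(X @ lmbda) - lmbda @ targets

    lmbda = minimize(dual, np.zeros(d), method="BFGS").x
    p = np.exp(X @ lmbda)
    p /= p.sum()  # max-entropy distribution over the 2^d points

    print((X.T @ p).round(3))  # recovers the targets [0.5, 0.4, 0.6]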

Papers