The use of standard datasets, models, and algorithms often incorporates and exacerbates social biases in systems built on machine learning and artificial intelligence. Context-aware design and implementation of automated decision-making algorithms is therefore an important and necessary endeavor. Our group aims to mitigate social biases in AI, with a focus on providing feasible, debiased alternatives to currently used models.


Faculty - Elisa Celis, Nisheeth Vishnoi
Postdocs - Lingxiao Huang
PhD students - Vijay Keswani, Anay Mehrotra
Undergraduate students - Chris Hays, Yi Chern Tan
Alumni - Sayash Kapoor, Farnood Salehi, Damian Straszak, Julia Wei


Data Summarization
Prototype for gender-balanced image search.

Prototype for politically balanced and personalized newsfeeds. [Video]

Multiwinner Voting
Elect a committee that is balanced across different attribute types. Deployed in Swiss elections. [Video]
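As a rough illustration of the underlying idea (not the deployed system), a balanced committee can be found by maximizing total approval score subject to lower and upper bounds on the number of seats each attribute group receives. In the small Python sketch below, the candidate scores, groups, and seat bounds are assumptions made for illustration, and the search is a simple brute force.

    from itertools import combinations

    # Candidate name -> (approval score, attribute group); values are illustrative.
    candidates = {
        "A": (9, "g1"), "B": (8, "g1"), "C": (7, "g1"),
        "D": (6, "g2"), "E": (5, "g2"), "F": (4, "g2"),
    }
    k = 4
    bounds = {"g1": (1, 2), "g2": (1, 3)}  # (min, max) seats allowed per group

    def feasible(committee):
        """Check that every group's seat count lies within its bounds."""
        counts = {g: 0 for g in bounds}
        for name in committee:
            counts[candidates[name][1]] += 1
        return all(lo <= counts[g] <= hi for g, (lo, hi) in bounds.items())

    # Among all balanced committees of size k, pick the highest-scoring one.
    best = max(
        (c for c in combinations(candidates, k) if feasible(c)),
        key=lambda c: sum(candidates[name][0] for name in c),
    )
    print("balanced committee:", best)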

Ranking
Prototype for gender-balanced rankings with applications to search engines, newsfeeds, and recommendation systems.
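One simple way to realize such constraints (shown here as an illustrative sketch, not the prototype itself) is greedy re-ranking: at every rank, take the highest-scoring remaining item whose group has not yet exceeded its cap within the current prefix. The items, scores, and the 50% per-group prefix cap below are assumptions made for illustration.

    import math

    # Item -> (relevance score, group); values and the 50% cap are illustrative.
    items = [("a", 0.95, "M"), ("b", 0.92, "M"), ("c", 0.90, "M"),
             ("d", 0.88, "F"), ("e", 0.85, "F"), ("f", 0.80, "F")]
    cap = 0.5  # no group may exceed 50% of any prefix (rounded up)

    ranking, counts = [], {"M": 0, "F": 0}
    remaining = sorted(items, key=lambda x: -x[1])  # best scores first
    while remaining:
        t = len(ranking) + 1  # length of the prefix being filled
        for item in remaining:
            if counts[item[2]] + 1 <= math.ceil(cap * t):
                chosen = item
                break
        else:
            # Constraint cannot be met at this prefix; fall back to the best remaining item.
            chosen = remaining[0]
        remaining.remove(chosen)
        ranking.append(chosen)
        counts[chosen[2]] += 1

    print([name for name, _, _ in ranking])  # groups alternate: a d b e c f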

Classification
Python notebook for a meta fair classification algorithm that works for various fairness metrics. Deployed in IBM AIF360.
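For reference, the meta-algorithm is exposed in IBM AIF360 as MetaFairClassifier. The following is a minimal, hedged usage sketch rather than the notebook itself; the dataset, sensitive attribute, and parameter values are illustrative, and it assumes the AIF360 Adult dataset files have been downloaded as described in the AIF360 documentation.

    # Illustrative sketch of invoking the meta fair classification algorithm
    # via IBM AIF360; dataset choice and parameter values are assumptions.
    from aif360.datasets import AdultDataset
    from aif360.algorithms.inprocessing import MetaFairClassifier

    data = AdultDataset()  # standard benchmark dataset (files downloaded separately)
    train, test = data.split([0.7], shuffle=True)

    # tau sets the strength of the fairness constraint; 'fdr' selects the
    # false-discovery-rate metric (other metrics are also supported).
    clf = MetaFairClassifier(tau=0.8, sensitive_attr="sex", type="fdr")
    clf.fit(train)
    predictions = clf.predict(test)
    print(predictions.labels[:10])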

Online Advertising
Prototype for a gender-balanced, auction-based online advertising platform.

Debiasing Data
Python notebook for learning and evaluating unbiased maximum-entropy distributions from biased datasets.
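As a rough sketch of the underlying principle (not the notebook's algorithm), the maximum-entropy distribution matching prescribed expected feature values is an exponential family, and its parameters can be found by minimizing the convex log-partition dual. The domain, feature map, and target moments below are assumptions made for illustration.

    # Illustrative sketch of fitting a maximum-entropy distribution: among all
    # distributions on a finite domain with prescribed expected feature values,
    # pick the one with the highest entropy.
    import numpy as np
    from scipy.optimize import minimize

    domain = np.array([0.0, 1.0, 2.0, 3.0])      # a small discrete attribute
    features = np.vstack([domain, domain ** 2])  # feature map phi(x) = (x, x^2)
    targets = np.array([1.2, 2.0])               # desired E[phi(x)] (assumed)

    def dual(lmbda):
        # Max-entropy solutions are exponential families p(x) ∝ exp(lambda . phi(x));
        # the convex dual minimizes log Z(lambda) - lambda . targets.
        logits = lmbda @ features
        return np.log(np.exp(logits).sum()) - lmbda @ targets

    lmbda = minimize(dual, x0=np.zeros(2)).x
    logits = lmbda @ features
    p = np.exp(logits - logits.max())
    p /= p.sum()
    print("max-entropy distribution:", np.round(p, 3))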


