The use of standard datasets, models, and algorithms often incorporates and exacerbates social biases in systems built on machine learning and artificial intelligence. Context-aware design and implementation of automated decision-making algorithms is therefore both important and necessary. Our group aims to mitigate social biases in AI, with a focus on providing feasible debiased alternatives to currently used models.
People
Faculty - Elisa Celis, Nisheeth Vishnoi
Postdocs - Lingxiao Huang
PhD students - Vijay Keswani, Anay Mehrotra
Alumni - Chris Hays, Sayash Kapoor, Farnood Salehi, Damian Straszak, Yi Chern Tan, Julia Wei