
Niloufar Salehi


Assistant Professor, School of Information, UC Berkeley
Affiliated appointment, EECS

Curriculum Vitae · Google Scholar
nsalehi[@]berkeley.edu

Teaching

PhD students

Updates

Aug 2021: Two NSF grants: DASS: Legally & Locally Legitimate: Designing & Evaluating Software Systems to Advance Equal Opportunity, $750k with Catherine Albiston and Afshin Nikzad, and FOW: Human-Machine Teaming for Effective Data Work at Scale: Upskilling Defense Lawyers Working with Police and Court Process Data, $2m with Aditya Parameswaran, Sarah Chasins, Joseph Hellerstein, and Erin Kerrison

Aug 2021: I will be joining Deirdre Mulligan to co-direct the Algorithmic Fairness and Opacity Group (AFOG) at Berkeley.

Jan 2021: Tonya Nguyen, Darya Kaviani, Liza Gak, and Seyi Olojo win CTSP fellowships for our research on scalability and community resilience of mutual aid networks and the harms of targeted diet ads. Congrats!


Potential students

I'm always happy to hear from motivated, passionate students at any level who like this work and want to get involved. Drop me an e-mail that makes clear why we're a good fit, why you're interested, what your goals and ideas are, and what you want to contribute. It won't always work out: at times I have more or less need, funding, and advising energy.

Students currently enrolled at Berkeley who are interested in working on research with me will need to first take my Info 217: HCI research class. This is a class I teach every fall that will give you a strong foundation in what HCI research is and what its methods are. I have very rarely worked with students not at Berkeley; doing so usually requires that you already have research experience and are working on related topics.

Prospective students applying to our PhD program should know that I do not plan on advising any new students in the 2022 academic year.





Niloufar Salehi is an Assistant Professor at the School of Information at UC Berkeley, with an affiliated appointment in EECS. Her research interests are in social computing, participatory and critical design, human-centered AI, and more broadly, human-computer interaction (HCI). Her work has been published and received awards in premier venues in HCI, including ACM CHI and CSCW. Through building computational social systems in collaboration with existing communities, controlled experiments, and ethnographic fieldwork, her research contributes to the design of alternative social configurations online.

Recent Publications

Modeling Assumptions Clash with the Real World: Transparency, Equity, and Community Challenges for Student Assignment Algorithms
Samantha Robertson, Tonya Nguyen, Niloufar Salehi, ACM CHI 2021

Whither AutoML? Understanding the Role of Automation in Machine Learning Workflows
Doris Xin, Eva Yiwei Wu, Doris Jung-Lin Lee, Niloufar Salehi, Aditya Parameswaran, ACM CHI 2021

Do No Harm
Niloufar Salehi, Logic Magazine 2020

Random, Messy, Funny, Raw: Finstas as Intimate Reconfigurations of Social Media
Best Paper Honorable Mention
Sijia Xiao, Danaë Metaxa, Joon Sung Park, Karrie Karahalios, Niloufar Salehi, ACM CHI 2020

Research Highlights

My group studies and designs social computing systems. Ongoing projects:
How can we make machine translation tools more adaptable to a user's context and needs?
How might we design student assignment algorithms to better align with community values?
What would a Restorative and Transformative Justice approach to moderation and governance of online platforms look like?


Community-Centered Algorithm Design

publication: Modeling Assumptions Clash with the Real World: Transparency, Equity, and Community Challenges for Student Assignment Algorithms Samantha Robertson, Tonya Nguyen, Niloufar Salehi, ACM CHI 2021 [slides]

Student assignment algorithms were designed to meet school district values based on modeling assumptions (blue/top) that clash with the constraints of the real world (red/bottom). Students are expected to have predefined preferences over all schools, which they report truthfully. The procedure is intended to be easy to explain and to optimally satisfy student preferences. In practice, however, these assumptions clash with a real world characterized by unequal access to information, resource constraints (e.g., commuting), and distrust.


Across the United States, a growing number of school districts are turning to matching algorithms to assign students to public schools. The designers of these algorithms aimed to promote values such as transparency, equity, and community in the process. However, school districts have encountered practical challenges in their deployment. In fact, San Francisco Unified School District voted to stop using and completely redesign their student assignment algorithm because it was frustrating for families and it was not promoting educational equity in practice. We analyze this system using a Value Sensitive Design approach and find that one reason values are not met in practice is that the system relies on modeling assumptions about families’ priorities, constraints, and goals that clash with the real world. These assumptions overlook the complex barriers to ideal participation that many families face, particularly because of socioeconomic inequalities. We argue that direct, ongoing engagement with stakeholders is central to aligning algorithmic values with real world conditions. In doing so we must broaden how we evaluate algorithms while recognizing the limitations of purely algorithmic solutions in addressing complex socio-political problems.