
Niloufar Salehi


Assistant Professor, School of Information, UC Berkeley
Affiliated appointment, EECS

Curriculum Vitae · Google Scholar
n[at]niloufar.org

Teaching

PhD students

Updates

27 Jan 2021

Tonya Nguyen, Darya Kaviani, Liza Gak, and Seyi Olojo win CTSP fellowships for our research on scalability and community resilience of mutual aid networks and the harms of targeted diet ads. Congrats!

29 Oct 2020

Our work on school assignment algorithms won the New Horizons award at MD4SG 2021!





Niloufar Salehi is an Assistant Professor at the School of Information at UC Berkeley, with an affiliated appointment in EECS. Her research interests are in social computing, participatory and critical design, human-centered AI, and more broadly, human-computer interaction (HCI). Her work has been published and received awards in premier HCI venues, including ACM CHI and CSCW. Through building computational social systems in collaboration with existing communities, controlled experiments, and ethnographic fieldwork, her research contributes to the design of alternative social configurations online.

Recent Publications

new Modeling Assumptions Clash with the Real World: Transparency, Equity, and Community Challenges for Student Assignment Algorithms
Samantha Robertson, Tonya Nguyen, Niloufar Salehi, ACM CHI 2021

new Whither AutoML? Understanding the Role of Automation in Machine Learning Workflows
Doris Xin, Eva Yiwei Wu, Doris Jung-Lin Lee, Niloufar Salehi, Aditya Parameswaran, ACM CHI 2021

Random, Messy, Funny, Raw: Finstas as Intimate Reconfigurations of Social Media
Best Paper Honorable Mention
Sijia Xiao, Danaë Metaxa, Joon Sung Park, Karrie Karahalios, Niloufar Salehi, ACM CHI 2020

Agent, Gatekeeper, Drug Dealer: How Content Creators Craft Algorithmic Personas
Eva Yiwei Wu, Emily Pedersen, Niloufar Salehi, ACM CSCW 2019

Research Highlights

My group studies and designs social computing systems. Ongoing projects:
How can we make machine translation tools more adaptable to a user's context and needs?
How might we design student assignment algorithms to better align with community values?
What would a Restorative and Transformative Justice approach to moderation and governance of online platforms look like?


Community-Centered Algorithm Design

publication: Modeling Assumptions Clash with the Real World: Transparency, Equity, and Community Challenges for Student Assignment Algorithms Samantha Robertson, Tonya Nguyen, Niloufar Salehi, ACM CHI 2021 [slides]

Student assignment algorithms were designed to meet school district values based on modeling assumptions (blue/top) that clash with the constraints of the real world (red/bottom). Students are expected to have predefined preferences over all schools, which they report truthfully. The procedure is intended to be easy to explain and to optimally satisfy student preferences. In practice, however, these assumptions clash with a real world characterized by unequal access to information, resource constraints (e.g., commuting), and distrust.


Across the United States, a growing number of school districts are turning to matching algorithms to assign students to public schools. The designers of these algorithms aimed to promote values such as transparency, equity, and community in the process. However, school districts have encountered practical challenges in deployment. In fact, San Francisco Unified School District voted to stop using and completely redesign its student assignment algorithm because it was frustrating for families and was not promoting educational equity in practice. We analyze this system using a Value Sensitive Design approach and find that one reason values are not met in practice is that the system relies on modeling assumptions about families' priorities, constraints, and goals that clash with the real world. These assumptions overlook the complex barriers to ideal participation that many families face, particularly because of socioeconomic inequalities. We argue that direct, ongoing engagement with stakeholders is central to aligning algorithmic values with real-world conditions. In doing so, we must broaden how we evaluate algorithms while recognizing the limitations of purely algorithmic solutions in addressing complex socio-political problems.
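To make the modeling assumptions concrete: matching-based assignment systems like the one discussed above are typically built on student-proposing deferred acceptance (Gale-Shapley). A minimal sketch is below; the data, school names, and priority orders are illustrative assumptions for exposition, not any district's actual system. Note what the algorithm takes for granted: every student arrives with a complete, truthfully reported preference list, which is exactly the assumption the paper shows clashing with real-world information access and constraints.

```python
def deferred_acceptance(student_prefs, school_priority, capacity):
    """Student-proposing deferred acceptance (Gale-Shapley).

    student_prefs:   dict student -> ordered list of schools (most preferred first)
    school_priority: dict school  -> ordered list of students (highest priority first)
    capacity:        dict school  -> number of seats
    Returns dict school -> sorted list of admitted students.
    """
    # Precompute each school's priority rank for fast comparisons.
    rank = {s: {stu: i for i, stu in enumerate(order)}
            for s, order in school_priority.items()}
    next_choice = {stu: 0 for stu in student_prefs}  # next school to propose to
    held = {s: [] for s in school_priority}          # tentatively admitted students
    free = list(student_prefs)                       # students not yet held anywhere

    while free:
        stu = free.pop()
        prefs = student_prefs[stu]
        if next_choice[stu] >= len(prefs):
            continue  # student exhausted their list: remains unassigned
        school = prefs[next_choice[stu]]
        next_choice[stu] += 1
        held[school].append(stu)
        # School keeps its highest-priority tentative admits, up to capacity,
        # and bumps the rest back into the proposing pool.
        held[school].sort(key=lambda x: rank[school][x])
        while len(held[school]) > capacity[school]:
            free.append(held[school].pop())
    return {s: sorted(admits) for s, admits in held.items()}
```

Under its assumptions the mechanism is strategy-proof for students and produces a stable matching, but those guarantees say nothing about whether families can form or express the preference lists the model requires.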