About
Niloufar Salehi is an assistant professor in the School of Information at UC Berkeley. She studies human-computer interaction, with research spanning education, healthcare, and restorative justice.
Her research interests are social computing, human-centered AI, and more broadly, human-computer interaction (HCI). Her work has been published and received awards in premier venues including ACM CHI, CSCW, and EMNLP, and has been covered in VentureBeat, Wired, and The Guardian. She is a W. T. Grant Foundation scholar for her work on promoting equity in student assignment algorithms and is a member of the advisory board on generative AI at NVIDIA. She received her PhD in computer science from Stanford University in 2018.
Research
If you are a current UC Berkeley student interested in getting involved with the research described here, please fill out this form [1].
Human-Centered AI: Machine Translation
This work focuses on developing technical methods for more reliable use of AI systems based on ML and LLMs. One example is machine translation (e.g., Google Translate), which has the potential to remove language barriers and is widely used in U.S. hospitals, yet almost 20% of common medical phrases are mistranslated into Chinese, with 8% causing significant clinical harm. Examples of this work include:
- Showing physicians the output of a quality estimation model calibrated on medical text makes them more effective at identifying when to rely on a translation (a rough sketch of this kind of reliability signal appears at the end of this section).
- Designing affordances that aid users in understanding when to rely on machine translation.
- Studying how machine translation is currently used in high-stakes medical settings (STAT News).
- Using a combination of verified dictionaries and ML to increase the reliability of machine translation in high-stakes situations. In this work we build on “example-based translation” to develop evaluation methods.
The long-term goal of this research effort is to develop new approaches to designing and evaluating reliable and effective AI systems in high-stakes, real-world contexts such as machine translation in medical settings.
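As a rough illustration of what a reliability signal for machine translation can look like, here is a minimal sketch that scores round-trip (backtranslation) agreement. It is an assumption-laden stand-in, not the calibrated quality estimation model from the work above: `translate`, `backtranslate`, and the threshold are all hypothetical.

```python
# A minimal sketch of one reliability signal: round-trip ("backtranslation")
# agreement. This is NOT the calibrated quality estimation model from the
# work above; `translate` and `backtranslate` are hypothetical stand-ins
# for any MT system, and the 0.7 threshold is an illustrative assumption.
from difflib import SequenceMatcher

def backtranslation_score(source, translate, backtranslate):
    """Translate the source, translate it back, and score the overlap.
    Higher scores loosely suggest a more faithful translation."""
    round_trip = backtranslate(translate(source))
    return SequenceMatcher(None, source.lower(), round_trip.lower()).ratio()

def needs_human_review(source, translate, backtranslate, threshold=0.7):
    """Flag a sentence for human review instead of trusting the raw output."""
    return backtranslation_score(source, translate, backtranslate) < threshold
```

In practice, a calibrated quality estimation model replaces this heuristic with a learned score, as in the first bullet above.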
Community Centered Algorithm Design: School Assignment
There is growing awareness of the impact that algorithmic systems have on people. An open question is how algorithmic systems can be designed to center the needs and values of the communities they impact. In our work we study a matching algorithm that assigns students to public schools across the U.S. (the core mechanism is sketched at the end of this section). Examples of this work include:
- Why student assignment systems have fallen short of their promised goals of transparency and equity in practice: they make modeling assumptions that clash with the real world.
- Can information technologies help lower-resourced parents submit more informed preferences?
- How the design of a preference language shapes the opportunities for meaningful participation depending on the costs, expressiveness, and collectivism of the language.
- How can we implement elements of procedural justice (voice, agency, helpfulness) within algorithmic system design?
Ultimately, our goal is to develop methods and best practices to engage parents and policy makers in designing algorithmic systems.
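For context on the mechanism itself: most U.S. districts that use school choice run some variant of student-proposing deferred acceptance (Gale-Shapley). The sketch below shows only that core loop; the inputs are illustrative, it assumes every school ranks every student, and it omits the priority tiers, sibling rules, and lottery tie-breaking that real deployments add.

```python
# A minimal sketch of student-proposing deferred acceptance, the matching
# algorithm at the core of most U.S. school assignment systems. Names and
# inputs are illustrative; it assumes each school's priority list ranks
# every student.

def deferred_acceptance(student_prefs, school_prefs, capacities):
    """student_prefs: dict student -> ordered list of schools.
    school_prefs: dict school -> ordered list of students (priority order).
    capacities: dict school -> number of seats.
    Returns a matching: dict student -> school (unmatched students omitted)."""
    rank = {school: {st: i for i, st in enumerate(prefs)}
            for school, prefs in school_prefs.items()}
    next_choice = {st: 0 for st in student_prefs}   # next school each student tries
    held = {school: [] for school in school_prefs}  # seats held only tentatively
    free = list(student_prefs)
    while free:
        student = free.pop()
        prefs = student_prefs[student]
        if next_choice[student] >= len(prefs):
            continue  # list exhausted: student stays unassigned
        school = prefs[next_choice[student]]
        next_choice[student] += 1
        held[school].append(student)
        # Keep the highest-priority applicants up to capacity; reject the rest,
        # who then propose to their next choice.
        held[school].sort(key=lambda st: rank[school][st])
        while len(held[school]) > capacities[school]:
            free.append(held[school].pop())
    return {st: school for school, students in held.items() for st in students}

# Illustrative usage:
match = deferred_acceptance(
    student_prefs={"ana": ["north", "south"], "bo": ["north"], "chi": ["north", "south"]},
    school_prefs={"north": ["bo", "ana", "chi"], "south": ["chi", "ana", "bo"]},
    capacities={"north": 1, "south": 1},
)
# -> {"bo": "north", "chi": "south"}  (ana exhausts her list and is unassigned)
```

A key property of the student-proposing version is that families cannot gain by misreporting their preferences, which is part of why districts adopted it; the work above examines where such theoretical guarantees clash with practice.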
Restorative Justice Approaches to Addressing Online Harm
Harms such as harassment are pervasive online but extremely difficult to address effectively. Dominant models of content moderation leave out victims and their needs and instead focus on punishing offenders. We take restorative justice as an alternative approach to ask: Who’s been harmed? What are their needs? And whose obligation is it to meet those needs? Examples of this work include:
- What do adolescents need when they are harmed online? Sensemaking, support, safety, retribution, and transformation.
- How targeted diet ads harm people with histories of disordered eating, particularly because the ads are hard to get rid of, even during recovery.
- How online counter-publics (such as by Muslim American public figures) struggle to engage externally on social media platforms due to the scale of harm that spreads across time and space.
- How might we better support people who have been harmed online in making sense of the harm and in identifying actions and stakeholders that can support them?
- Why content moderation is a fundamentally limited model for addressing online harm and what an alternative based on community care might look like.
Our goal in this work is to design better mechanisms and tools for addressing online harm by centering the needs of those who are harmed, and by supporting those who have caused harm to take accountability and work toward repair.
Funding
My work is currently supported by the National Science Foundation under the following grants:
- FAI: A Human-Centered Approach to Developing Accessible and Reliable Machine Translation, with Marine Carpuat (PI) and Ge Gao, [article, award]
- DASS: Legally & Locally Legitimate: Designing & Evaluating Software Systems to Advance Equal Opportunity, with Catherine Albiston and Afshin Nikzad, [article, award]
- FOW: Human-Machine Teaming for Effective Data Work at Scale: Upskilling Defense Lawyers Working with Police and Court Process Data, with Aditya Parameswaran (PI), Sarah Chasins, Joseph Hellerstein, and Erin Kerrison, [article, press, award]
Additionally, I am grateful for support from the W. T. Grant Foundation, the Google-BAIR Commons, and Facebook Research.
[1] Unfortunately, we cannot currently accept research assistants from outside the university, but there are some great summer undergraduate research opportunities at CMU HCII, UC San Diego, and the University of Washington, among others. You can find a list of NSF-supported undergraduate research opportunities in computer and information sciences here.