I am a PhD candidate in Computer Science in the Stanford NLP group. My research focuses on understanding and improving the social outcomes of NLP technologies, such as addressing anthropomorphism, social biases, and other implicit perceptions. I am advised by Dan Jurafsky and supported by the Knight-Hennessy Scholarship and the NSF Graduate Research Fellowship.
Previously, I did my undergraduate degree at Caltech, where I double-majored in computer science and history. I’ve also spent time at Microsoft Research (on the FATE team with Alexandra Olteanu and Su Lin Blodgett, and with Adam Kalai) and at DeepMind.
My email is myra [at] cs [dot] stanford [dot] edu.
May 2025: Our work on social sycophancy is featured in MIT Technology Review!
May 2025: Two papers on measuring and mitigating anthropomorphic LLM outputs accepted to ACL 2025.
April 2025: Our paper on using metaphors to understand public perceptions of AI accepted to FAccT 2025.
October 2024: Attending AIES.
October 2024: New paper: “I Am the One and Only, Your Cyber BFF”: Understanding the Impact of GenAI Requires Understanding the Impact of Anthropomorphic AI.