About Me
Hello! I am a machine learning research scientist at Abridge.

Previously, I was a postdoc at the Princeton Neuroscience Institute, where I worked with Uri Hasson, Ken Norman, and Danqi Chen. I obtained my PhD in EECS at UC Berkeley, where I was advised by Dan Klein and Jack Gallant, and was affiliated with BAIR. During my PhD, I also had the opportunity to work with wonderful collaborators on the Semantic Scholar team at the Allen Institute for AI and at the Denizens Lab at TU Berlin. I earned a BSE in Computer Science at Princeton with minors in Neuroscience, Statistics and Machine Learning, and Applied Math, and was fortunate to work with Ken Norman, Elad Hazan, and Zahra Aminzare. I am grateful to have received support from the CV Starr Postdoctoral Fellowship, NSF Graduate Research Fellowship, IBM PhD Fellowship, and Fulbright Fellowship.

Outside of research, I like to run, bike, and learn languages.

Selected Publications

Bilingual Language Processing Relies on Shared Semantic Representations that are Modulated by Each Language
Catherine Chen, Lily Gong, Christine Tseng, Daniel Klein, Jack Gallant, Fatma Deniz.
PNAS 2026
brainviewer code

The Cortical Representation of Language Timescales is Shared between Reading and Listening
Catherine Chen, Tom Dupré la Tour, Jack Gallant, Daniel Klein, Fatma Deniz.
Communications Biology 2024
brainviewer code

Are Layout-Infused Language Models Robust to Layout Distribution Shifts? A Case Study with Scientific Documents
Catherine Chen, Zejiang Shen, Dan Klein, Gabriel Stanovsky, Doug Downey, Kyle Lo.
ACL Findings 2023
code

Constructing Taxonomies from Pretrained Language Models
Catherine Chen,* Kevin Lin,* Dan Klein.
NAACL 2021
code

*=equal contribution