About Me
Hello! I’m a CV Starr Postdoctoral Fellow at the Princeton Neuroscience Institute, where I work with Uri Hasson, Ken Norman, and Danqi Chen. My research lies at the intersection of cognitive neuroscience and machine learning. I am particularly interested in how humans and artificial language models represent and integrate information in language.
I obtained my PhD in EECS at UC Berkeley, where I was advised by Dan Klein and Jack Gallant, and was affiliated with BAIR. During my PhD, I also had the opportunity to work with wonderful collaborators on the Semantic Scholar team at the Allen Institute for AI and the Denizens Lab at TU Berlin. My PhD was funded by an NSF Graduate Research Fellowship and an IBM PhD Fellowship. Before going to Berkeley, I was a Fulbright Scholar studying causal inference in neuroimaging with Moritz Grosse-Wentrup at LMU Munich and MPI Tübingen. Before that, I earned a BSE in Computer Science at Princeton with minors in Neuroscience, Statistics and Machine Learning, and Applied Math, and was fortunate to work with Ken Norman, Elad Hazan, and Zahra Aminzare.
Outside of research, I like to run, swim, and learn languages.
Publications
Bilingual Language Processing Relies on Shared Semantic Representations that are Modulated by Each Language
Catherine Chen, Lily Gong, Christine Tseng, Daniel Klein, Jack Gallant, Fatma Deniz.
The Cortical Representation of Language Timescales is Shared between Reading and Listening
Catherine Chen, Tom Dupré la Tour, Jack Gallant, Daniel Klein, Fatma Deniz.
Communications Biology 2024
brainviewer code
Re-evaluating the Need for Multimodal Signals in Unsupervised Grammar Induction
Boyi Li,* Rodolfo Corona,* Karttikeya Mangalam,* Catherine Chen,* Daniel Flaherty, Serge Belongie, Kilian Q. Weinberger, Jitendra Malik, Trevor Darrell, Dan Klein.
NAACL Findings 2024
Are Layout-Infused Language Models Robust to Layout Distribution Shifts? A Case Study with Scientific Documents
Catherine Chen, Zejiang Shen, Dan Klein, Gabriel Stanovsky, Doug Downey, Kyle Lo.
ACL Findings 2023
code
Attention Weights Accurately Predict Language Representations in the Brain
Mathis Lamarre, Catherine Chen, Fatma Deniz.
EMNLP Findings 2022
Constructing Taxonomies from Pretrained Language Models
Catherine Chen,* Kevin Lin,* Dan Klein.
NAACL 2021
code
Learning to Perform Role-Filler Binding with Schematic Knowledge
Catherine Chen, Qihong Lu, Andre Beukers, Christopher Baldassano, Kenneth A. Norman.
PeerJ 2021
code
Conference Presentations and Technical Reports
The Cortical Representation of Lexical Semantics is Shared across English and Chinese
Catherine Chen,* Lily Gong,* Christine Tseng, Daniel Klein, Jack Gallant, Fatma Deniz.
Society for the Neurobiology of Language (SNL) 2022
The Cortical Representation of Linguistic Structures at Different Timescales is Shared between Reading and Listening
Catherine Chen, Tom Dupré la Tour, Jack Gallant, Daniel Klein, Fatma Deniz.
Conference on Cognitive Computational Neuroscience (CCN) 2022
Representations of Linguistic Information at Different Timescales in the Human Brain During Reading and Listening
Catherine Chen, Tom Dupré la Tour, Jack Gallant, Dan Klein, Fatma Deniz.
Society for Neuroscience (SfN) 2021
Generalized Schema Learning by Neural Networks
Catherine Chen, Qihong Lu, Andre Beukers, Christopher Baldassano, Kenneth A. Norman.
Conference on Cognitive Computational Neuroscience (CCN) 2018
poster
A Temporal Decay Model for Mapping between fMRI and Natural Language Annotations
Kiran Vodrahalli, Catherine Chen, Viola Mocz, Chris Baldassano, Uri Hasson, Sanjeev Arora, Kenneth A. Norman.
Conference on Cognitive Computational Neuroscience (CCN) 2017
Decision Making in Heterogeneous Drift Diffusion Networks
PACM Undergraduate Certificate Colloquium 2018
presentation
* = equal contribution