Tal Linzen
Number of Papers: 60
Number of Citations: 100
First ACL Paper: 2014
Latest ACL Paper: 2024
Venues: NAACL, CoNLL, BlackboxNLP, EMNLP, Findings-ACL, Findings-EMNLP, RepEval, CMCL, CL, TACL, WS, ACL, SCiL, EACL, *SEMEVAL, IJCNLP
Co-Authors: Aaron Mueller, Adam Poliak, Adina Williams, Aditya Yedetore, Afra Alishahi
Similar Authors: Cesar Reis, Necati Ercan Ozgencil, Zygmunt Krynicki, Allan Berrocal Rojas, Barrett R Bryant
Papers (2014–2024):
Can You Learn Semantics Through Next-Word Prediction? The Case of Entailment
Findings-ACL
William Merrill | Zhaofeng Wu | Norihito Naka | Yoon Kim | Tal Linzen
The Impact of Depth on Compositional Generalization in Transformer Language Models
NAACL
Jackson Petty | Sjoerd Steenkiste | Ishita Dasgupta | Fei Sha | Dan Garrette | Tal Linzen
A Systematic Comparison of Syllogistic Reasoning in Humans and Language Models
NAACL
Tiwalayo Eisape | Michael Tessler | Ishita Dasgupta | Fei Sha | Sjoerd Steenkiste | Tal Linzen
In-context Learning Generalizes, But Not Always Robustly: The Case of Syntax
NAACL
Aaron Mueller | Albert Webson | Jackson Petty | Tal Linzen
Do Language Models’ Words Refer?
CL
Matthew Mandelkern | Tal Linzen
SPAWNing Structural Priming Predictions from a Cognitively Motivated Parser
CoNLL
Grusha Prasad | Tal Linzen
SLOG: A Structural Generalization Benchmark for Semantic Parsing
EMNLP
Bingzhi Li | Lucia Donatelli | Alexander Koller | Tal Linzen | Yuekun Yao | Najoung Kim
How Much Do Language Models Copy From Their Training Data? Evaluating Linguistic Novelty in Text Generation Using RAVEN
TACL
R. Thomas McCoy | Paul Smolensky | Tal Linzen | Jianfeng Gao | Asli Celikyilmaz
Verb Conjugation in Transformers Is Determined by Linear Encodings of Subject Number
Findings-EMNLP
Sophie Hao | Tal Linzen
Findings of the BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora
CoNLL
Alex Warstadt | Aaron Mueller | Leshem Choshen | Ethan Wilcox | Chengxu Zhuang | Juan Ciro | Rafael Mosquera | Bhargavi Paranjabe | Adina Williams | Tal Linzen | Ryan Cotterell
A Language Model with Limited Memory Capacity Captures Interference in Human Sentence Processing
Findings-EMNLP
William Timkey | Tal Linzen
Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning
CoNLL
Alex Warstadt | Aaron Mueller | Leshem Choshen | Ethan Wilcox | Chengxu Zhuang | Juan Ciro | Rafael Mosquera | Bhargavi Paranjabe | Adina Williams | Tal Linzen | Ryan Cotterell
How poor is the stimulus? Evaluating hierarchical generalization in neural networks trained on child-directed speech
ACL
Aditya Yedetore | Tal Linzen | Robert Frank | R. Thomas McCoy
How to Plant Trees in Language Models: Data and Architectural Effects on the Emergence of Syntactic Inductive Biases
ACL
Aaron Mueller | Tal Linzen
Coloring the Blank Slate: Pre-training Imparts a Hierarchical Inductive Bias to Sequence-to-sequence Models
ACL | Findings
Aaron Mueller | Robert Frank | Tal Linzen | Luheng Wang | Sebastian Schuster
Improving Compositional Generalization with Latent Structure and Data Augmentation
NAACL
Linlu Qiu | Peter Shaw | Panupong Pasupat | Pawel Nowak | Tal Linzen | Fei Sha | Kristina Toutanova
When a sentence does not introduce a discourse entity, Transformer-based models still sometimes refer to it
NAACL
Sebastian Schuster | Tal Linzen
Evaluating Attribution in Dialogue Systems: The BEGIN Benchmark
TACL
Nouha Dziri | Hannah Rashkin | Tal Linzen | David Reitter
Characterizing Verbatim Short-Term Memory in Neural Language Models
CoNLL
Kristijan Armeni | Christopher Honey | Tal Linzen
Syntactic Surprisal From Neural Models Predicts, But Underestimates, Human Processing Difficulty From Syntactic Ambiguities
CoNLL
Suhas Arehalli | Brian Dillon | Tal Linzen
Entailment Semantics Can Be Extracted from an Ideal Language Model
CoNLL
William Merrill | Alex Warstadt | Tal Linzen
Causal Analysis of Syntactic Agreement Neurons in Multilingual Language Models
CoNLL
Aaron Mueller | Yu Xia | Tal Linzen
Frequency Effects on Syntactic Rule Learning in Transformers
EMNLP
Jason Wei | Dan Garrette | Tal Linzen | Ellie Pavlick
The Language Model Understood the Prompt was Ambiguous: Probing Syntactic Uncertainty Through Generation
BlackboxNLP | EMNLP
Laura Aina | Tal Linzen
Does Putting a Linguist in the Loop Improve NLU Data Collection?
EMNLP | Findings
Alicia Parrish | William Huang | Omar Agha | Soo-Hwan Lee | Nikita Nangia | Alexia Warstadt | Karmanya Aggarwal | Emily Allaway | Tal Linzen | Samuel R. Bowman
NOPE: A Corpus of Naturally-Occurring Presuppositions in English
CoNLL | EMNLP
Alicia Parrish | Sebastian Schuster | Alex Warstadt | Omar Agha | Soo-Hwan Lee | Zhuoye Zhao | Samuel R. Bowman | Tal Linzen
Causal Analysis of Syntactic Agreement Mechanisms in Neural Language Models
ACL | IJCNLP
Matthew Finlayson | Aaron Mueller | Sebastian Gehrmann | Stuart Shieber | Tal Linzen | Yonatan Belinkov
Counterfactual Interventions Reveal the Causal Effect of Relative Clause Representations on Agreement Prediction
CoNLL | EMNLP
Shauli Ravfogel | Grusha Prasad | Tal Linzen | Yoav Goldberg
Cross-Linguistic Syntactic Evaluation of Word Prediction Models
ACL
Aaron Mueller | Garrett Nicolai | Panayiota Petrou-Zeniou | Natalia Talmina | Tal Linzen
Syntactic Data Augmentation Increases Robustness to Inference Heuristics
ACL
Junghyun Min | R. Thomas McCoy | Dipanjan Das | Emily Pitler | Tal Linzen
Representations of Syntax [MASK] Useful: Effects of Constituency and Dependency Structure in Recursive LSTMs
ACL
Michael Lepori | Tal Linzen | R. Thomas McCoy
How Can We Accelerate Progress Towards Human-like Linguistic Generalization?
ACL
Tal Linzen
Does Syntax Need to Grow on Trees? Sources of Hierarchical Inductive Bias in Sequence-to-Sequence Networks
TACL
R. Thomas McCoy | Robert Frank | Tal Linzen
Discovering the Compositional Structure of Vector Representations with Role Learning Networks
BlackboxNLP | EMNLP
Paul Soulos | R. Thomas McCoy | Tal Linzen | Paul Smolensky
BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance
BlackboxNLP | EMNLP
R. Thomas McCoy | Junghyun Min | Tal Linzen
COGS: A Compositional Generalization Challenge Based on Semantic Interpretation
EMNLP
Najoung Kim | Tal Linzen
Proceedings of the 24th Conference on Computational Natural Language Learning
CoNLL
Raquel Fernández | Tal Linzen
Quantity doesn’t buy quality syntax with neural language models
EMNLP
Marten van Schijndel | Aaron Mueller | Tal Linzen
Using Priming to Uncover the Organization of Syntactic Representations in Neural Language Models
CoNLL
Grusha Prasad | Marten van Schijndel | Tal Linzen
Studying the Inductive Biases of RNNs with Synthetic Variations of Natural Languages
NAACL
Shauli Ravfogel | Yoav Goldberg | Tal Linzen
Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference
ACL
Tom McCoy | Ellie Pavlick | Tal Linzen
Probing What Different NLP Tasks Teach Machines about Function Word Comprehension
*SEMEVAL
Najoung Kim | Roma Patel | Adam Poliak | Patrick Xia | Alex Wang | Tom McCoy | Ian Tenney | Alexis Ross | Tal Linzen | Benjamin Van Durme | Samuel R. Bowman | Ellie Pavlick
Can Entropy Explain Successor Surprisal Effects in Reading?
SCiL | WS
Marten van Schijndel | Tal Linzen
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
NAACL | WS
Emmanuele Chersoni | Cassandra Jacobs | Alessandro Lenci | Tal Linzen | Laurent Prévot | Enrico Santus
Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
ACL | WS
Tal Linzen | Grzegorz Chrupała | Yonatan Belinkov | Dieuwke Hupkes
Targeted Syntactic Evaluation of Language Models
EMNLP
Rebecca Marvin | Tal Linzen
A Neural Model of Adaptation in Reading
EMNLP
Marten van Schijndel | Tal Linzen
Colorless Green Recurrent Networks Dream Hierarchically
NAACL
Kristina Gulordava | Piotr Bojanowski | Edouard Grave | Tal Linzen | Marco Baroni
Proceedings of the 8th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2018)
CMCL | WS
Asad Sayeed | Cassandra Jacobs | Tal Linzen | Marten van Schijndel
Phonological (un)certainty weights lexical activation
CMCL | WS
Laura Gwilliams | David Poeppel | Alec Marantz | Tal Linzen
Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
EMNLP | WS
Tal Linzen | Grzegorz Chrupała | Afra Alishahi
Comparing Character-level Neural Language Models Using a Lexical Decision Task
EACL
Gaël Le Godais | Tal Linzen | Emmanuel Dupoux
Exploring the Syntactic Abilities of RNNs with Multi-task Learning
CoNLL
Émile Enguehard | Yoav Goldberg | Tal Linzen
Proceedings of the 7th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2017)
CMCL | WS
Ted Gibson | Tal Linzen | Asad Sayeed | Martin van Schijndel | William Schuler
Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies
TACL
Tal Linzen | Emmanuel Dupoux | Yoav Goldberg
Quantificational features in distributional word representations
*SEMEVAL
Tal Linzen | Emmanuel Dupoux | Benjamin Spector
Issues in evaluating semantic spaces using word analogies
RepEval | WS
Tal Linzen
Evaluating vector space models using human semantic priming results
RepEval | WS
Allyson Ettinger | Tal Linzen
A model of rapid phonotactic generalization
EMNLP
Tal Linzen | Timothy O’Donnell
Investigating the role of entropy in sentence processing
CMCL | WS
Tal Linzen | Florian Jaeger