Colorless Green Recurrent Networks Dream Hierarchically
Kristina Gulordava | Piotr Bojanowski | Edouard Grave | Tal Linzen | Marco Baroni
Paper Details:
Month: June
Year: 2018
Location: New Orleans, Louisiana
Venue: NAACL
RNN Simulations of Grammaticality Judgments on Long-distance Dependencies
Shammur Absar Chowdhury | Roberto Zamparelli
Targeted Syntactic Evaluation of Language Models
Rebecca Marvin | Tal Linzen
A Neural Model of Adaptation in Reading
Marten van Schijndel | Tal Linzen
The Importance of Being Recurrent for Modeling Hierarchical Structure
Ke Tran | Arianna Bisazza | Christof Monz
Vectorial Semantic Spaces Do Not Encode Human Judgments of Intervention Similarity
Paola Merlo | Francesco Ackermann
Universal Language Model Fine-tuning for Text Classification
Jeremy Howard | Sebastian Ruder
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better
Adhiguna Kuncoro | Chris Dyer | John Hale | Dani Yogatama | Stephen Clark | Phil Blunsom
Analysis Methods in Neural Language Processing: A Survey
Yonatan Belinkov | James Glass
LSTMs Exploit Linguistic Attributes of Data
Nelson F. Liu | Omer Levy | Roy Schwartz | Chenhao Tan | Noah A. Smith
Can LSTM Learn to Capture Agreement? The Case of Basque
Shauli Ravfogel | Yoav Goldberg | Francis Tyers
What do RNN Language Models Learn about Filler–Gap Dependencies?
Ethan Wilcox | Roger Levy | Takashi Morita | Richard Futrell
Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items
Jaap Jumelet | Dieuwke Hupkes
Under the Hood: Using Diagnostic Classifiers to Investigate and Improve how Language Models Track Agreement Information
Mario Giulianelli | Jack Harding | Florian Mohnert | Dieuwke Hupkes | Willem Zuidema
Probing sentence embeddings for structure-dependent tense
Geoff Bacon | Terry Regier
Representation of Word Meaning in the Intermediate Projection Layer of a Neural Language Model
Steven Derby | Paul Miller | Brian Murphy | Barry Devereux
Does Syntactic Knowledge in Multilingual Language Models Transfer Across Languages?
Prajit Dhar | Arianna Bisazza
Can Entropy Explain Successor Surprisal Effects in Reading?
Marten van Schijndel | Tal Linzen
Do RNNs learn human-like abstract word order preferences?
Richard Futrell | Roger P. Levy
https://github.com/
https://github.com/attardi/
http://u.cs.biu.ac.il/
https://github.com/pytorch/examples/
https://www.mturk.com/
https://openreview.net/group?
http://arxiv.org/
Field Of Study
Linguistic Trends: Embeddings | Syntax | Morphology | Neurolinguistics | Psycholinguistics
Task: Tagging
Language: Multilingual | English | Hebrew
Similar Papers
Planting Trees in the Desert: Delexicalized Tagging and Parsing Combined
Daniel Zeman | David Mareček | Zhiwei Yu | Zdeněk Žabokrtský
|
One-Shot Neural Cross-Lingual Transfer for Paradigm Completion
Katharina Kann | Ryan Cotterell | Hinrich Schütze
Isomorphic Transfer of Syntactic Structures in Cross-Lingual NLP
Edoardo Maria Ponti | Roi Reichart | Anna Korhonen | Ivan Vulić
Knowledge-Rich Morphological Priors for Bayesian Language Models
Victor Chahuneau | Noah A. Smith | Chris Dyer
A Comparative Study of Minimally Supervised Morphological Segmentation
Teemu Ruokolainen | Oskar Kohonen | Kairit Sirts | Stig-Arne Grönroos | Mikko Kurimo | Sami Virpioja