What Do Recurrent Neural Network Grammars Learn About Syntax?
Adhiguna Kuncoro | Miguel Ballesteros | Lingpeng Kong | Chris Dyer | Graham Neubig | Noah A. Smith
Paper Details:
Month: April
Year: 2017
Location: Valencia, Spain
Venue: EACL

Citations:
Dissecting Contextual Word Embeddings: Architecture and Representation
Matthew Peters | Mark Neumann | Luke Zettlemoyer | Wen-tau Yih
Neural Discourse Structure for Text Categorization
Yangfeng Ji | Noah A. Smith
Distilling Knowledge for Search-based Structured Prediction
Yijia Liu | Wanxiang Che | Huaipeng Zhao | Bing Qin | Ting Liu
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better
Adhiguna Kuncoro | Chris Dyer | John Hale | Dani Yogatama | Stephen Clark | Phil Blunsom
Towards Neural Machine Translation with Latent Tree Attention
James Bradbury | Richard Socher
URL: https://github.com/clab/rnng/tree/
Field Of Study:
Linguistic Trends: Discourse | Embeddings | Syntax
Task: Machine Translation
Approach: Deep Learning | Generative Model
Language: English
Similar Papers:
Cross Language Dependency Parsing using a Bilingual Lexicon
Hai Zhao | Yan Song | Chunyu Kit | Guodong Zhou
Integrating Graph-Based and Transition-Based Dependency Parsers
Joakim Nivre | Ryan McDonald
A treebank-based study on the influence of Italian word order on parsing performance
Anita Alicante | Cristina Bosco | Anna Corazza | Alberto Lavelli
Extending Statistical Machine Translation with Discriminative and Trigger-Based Lexicon Models
Arne Mauser | Saša Hasan | Hermann Ney
Improving Arabic-Chinese Statistical Machine Translation using English as Pivot Language
Nizar Habash | Jun Hu