NLPExplorer
Multimodal Semantic Learning from Child-Directed Input
Angeliki Lazaridou | Grzegorz Chrupała | Raquel Fernández | Marco Baroni

Paper Details:
Month: June
Year: 2016
Location: San Diego, California
Venue: NAACL
From phonemes to images: levels of representation in a recurrent neural model of visually-grounded language learning
Lieke Gelderloos | Grzegorz Chrupała

Multimodal Grounding for Language Processing
Lisa Beinborn | Teresa Botschen | Iryna Gurevych

Deriving continuous grounded meaning representations from referentially structured multimodal contexts
Sina Zarrieß | David Schlangen

Representations of language in a model of visually grounded speech signal
Grzegorz Chrupała | Lieke Gelderloos | Afra Alishahi
http://langcog.stanford.edu/materials/
http://www.iclr.cc/doku.php?id=
http://www.dlworkshop.org/
http://arxiv.org/abs/
Field Of Study
Linguistic Trends: Embeddings
Approach: Bayesian Model
Language: Child Language
Dataset: Child Language
Similar Papers
The Pragmatics of Referring and the Modality of Communication
Philip R. Cohen

A computational account of comparative implicatures for a spoken dialogue agent
Luciana Benotti | David Traum

The Intonational Structuring of Discourse
Julia Hirschberg | Janet Pierrehumbert

Coordination and context-dependence in the generation of embodied conversation
Justine Cassell | Matthew Stone | Hao Yan

Gesture Theory is Linguistics: On Modelling Multimodality as Prosody
Dafydd Gibbon