NLPExplorer
Learning Image Embeddings using Convolutional Neural Networks for Improved Multi-Modal Semantics
Douwe Kiela | Léon Bottou

Paper Details:
Month: October
Year: 2014
Location: Doha, Qatar
Venue: EMNLP
SIG: SIGDAT
Predicting the Evocation Relation between Lexicalized Concepts
Yoshihiko Hayashi

Is an Image Worth More than a Thousand Words? On the Fine-Grain Semantic Differences between Visual and Linguistic Representations
Guillem Collell | Marie-Francine Moens

Multimodal Grounding for Language Processing
Lisa Beinborn | Teresa Botschen | Iryna Gurevych

Visual Bilingual Lexicon Induction with Transferred ConvNet Features
Douwe Kiela | Ivan Vulić | Stephen Clark

Multi- and Cross-Modal Semantics Beyond Vision: Grounding in Auditory Perception
Douwe Kiela | Stephen Clark

Comparing Data Sources and Architectures for Deep Visual Representation Learning in Semantics
Douwe Kiela | Anita Lilla Verő | Stephen Clark
Deriving continuous grounded meaning representations from referentially structured multimodal contexts
Sina Zarrieß | David Schlangen
Speaking, Seeing, Understanding: Correlating semantic models with conceptual representation in the brain
Luana Bulat | Stephen Clark | Ekaterina Shutova

Grasping the Finer Point: A Supervised Similarity Network for Metaphor Detection
Marek Rei | Luana Bulat | Douwe Kiela | Ekaterina Shutova

Visual Denotations for Recognizing Textual Entailment
Dan Han | Pascual Martínez-Gómez | Koji Mineshima

Associative Multichannel Autoencoder for Multimodal Word Representation
Shaonan Wang | Jiajun Zhang | Chengqing Zong

Dynamic Meta-Embeddings for Improved Sentence Representations
Douwe Kiela | Changhan Wang | Kyunghyun Cho

A Probabilistic Model for Joint Learning of Word Embeddings from Texts and Images
Melissa Ailem | Bowen Zhang | Aurelien Bellet | Pascal Denis | Fei Sha

Using Sparse Semantic Embeddings Learned from Multimodal Text and Image Data to Model Human Conceptual Knowledge
Steven Derby | Paul Miller | Brian Murphy | Barry Devereux

Lessons Learned in Multilingual Grounded Language Learning
Ákos Kádár | Desmond Elliott | Marc-Alexandre Côté | Grzegorz Chrupała | Afra Alishahi
Combining Language and Vision with a Multimodal Skip-gram Model
Angeliki Lazaridou | Nghia The Pham | Marco Baroni

Black Holes and White Rabbits: Metaphor Identification with Visual Features
Ekaterina Shutova | Douwe Kiela | Jean Maillard

Vision and Feature Norms: Improving automatic feature norm learning through cross-modal maps
Luana Bulat | Douwe Kiela | Stephen Clark

Learning Visually Grounded Sentence Representations
Douwe Kiela | Alexis Conneau | Allan Jabri | Maximilian Nickel

Can Network Embedding of Distributional Thesaurus Be Combined with Word Vectors for Better Representation?
Abhik Jana | Pawan Goyal

Multimodal Frame Identification with Multilingual Evaluation
Teresa Botschen | Iryna Gurevych | Jan-Christoph Klie | Hatem Mousselly-Sergieh | Stefan Roth

Quantifying the Visual Concreteness of Words and Topics in Multimodal Datasets
Jack Hessel | David Mimno | Lillian Lee

On Using Very Large Target Vocabulary for Neural Machine Translation
Sébastien Jean | Kyunghyun Cho | Roland Memisevic | Yoshua Bengio

Exploiting Image Generality for Lexical Entailment Detection
Douwe Kiela | Laura Rimell | Ivan Vulić | Stephen Clark
Grounding Semantics in Olfactory Perception
Douwe Kiela | Luana Bulat | Stephen Clark

Learning Concept Taxonomies from Multi-modal Data
Hao Zhang | Zhiting Hu | Yuntian Deng | Mrinmaya Sachan | Zhicheng Yan | Eric Xing

Multi-Modal Representations for Improved Bilingual Lexicon Learning
Ivan Vulić | Douwe Kiela | Stephen Clark | Marie-Francine Moens

MMFeat: A Toolkit for Extracting Multi-Modal Features
Douwe Kiela

Bridging Languages through Images with Deep Partial Canonical Correlation Analysis
Guy Rotman | Ivan Vulić | Roi Reichart

Illustrative Language Understanding: Large-Scale Visual Grounding with Image Search
Jamie Kiros | William Chan | Geoffrey Hinton

Visually Grounded and Textual Semantic Models Differentially Decode Brain Activity Associated with Concrete and Abstract Nouns
Andrew J. Anderson | Douwe Kiela | Stephen Clark | Massimo Poesio

Spectral Graph-Based Method of Multimodal Word Embedding
Kazuki Fukui | Takamasa Oshikiri | Hidetoshi Shimodaira

If Sentences Could See: Investigating Visual Information for Semantic Textual Similarity
Goran Glavaš | Ivan Vulić | Simone Paolo Ponzetto

Incorporating visual features into word embeddings: A bimodal autoencoder-based approach
Mika Hasegawa | Tetsunori Kobayashi | Yoshihiko Hayashi

Limitations of Cross-Lingual Learning from Image Search
Mareike Hartmann | Anders Søgaard
URLs:
http://www.di.ens.fr/willow/research/cnn/
http://mattmahoney.net/dc/textdata.html
http://www.cl.cam.ac
http://www.vlfeat.org/
Field Of Study:
Linguistic Trends: Distributional Semantics | Embeddings
Task: Tagging | Semantic Similarity
Approach: Deep Learning
Language: English
Similar Papers

Expectation-Regulated Neural Model for Event Mention Extraction
Ching-Yun Chang | Zhiyang Teng | Yue Zhang

Bootstrap Domain-Specific Sentiment Classifiers from Unlabeled Corpora
Andrius Mudinas | Dell Zhang | Mark Levene
A Joint Model of Conversational Discourse and Latent Topics on Microblogs
Jing Li | Yan Song | Zhongyu Wei | Kam-Fai Wong
A Corpus of Corporate Annual and Social Responsibility Reports: 280 Million Tokens of Balanced Organizational Writing
Sebastian G.M. Händschke | Sven Buechel | Jan Goldenstein | Philipp Poschmann | Tinghui Duan | Peter Walgenbach | Udo Hahn

Argumentation Mining in User-Generated Web Discourse
Ivan Habernal | Iryna Gurevych