Cheap and Fast – But is it Good? Evaluating Non-Expert Annotations for Natural Language Tasks
Rion Snow | Brendan O’Connor | Daniel Jurafsky | Andrew Ng

Paper Details:
Month: October
Year: 2008
Location: Honolulu, Hawaii
Venue: EMNLP
SIG: SIGDAT
Citations
Text Mining for Automatic Image Tagging
Chee Wee Leong
|
Rada Mihalcea
|
Samer Hassan
|
MT Error Detection for Cross-Lingual Question Answering
Kristen Parton
|
Kathleen McKeown
|
“Expresses-an-opinion-about”: using corpus statistics in an information extraction approach to opinion mining
Asad B. Sayeed
|
Hieu C. Nguyen
|
Timothy J. Meyer
|
Amy Weinberg
|
Rapid Development of a Corpus with Discourse Annotations using Two-stage Crowdsourcing
Daisuke Kawahara
|
Yuichiro Machida
|
Tomohide Shibata
|
Sadao Kurohashi
|
Hayato Kobayashi
|
Manabu Sassano
|
Empirical Analysis of Aggregation Methods for Collective Annotation
Ciyang Qing
|
Ulle Endriss
|
Raquel Fernández
|
Justin Kruger
|
Why Gender and Age Prediction from Tweets is Hard: Lessons from a Crowdsourcing Experiment
Dong Nguyen
|
Dolf Trieschnigg
|
A. Seza Doğruöz
|
Rilana Gravel
|
Mariët Theune
|
Theo Meder
|
Franciska de Jong
|
Semantic Annotation Aggregation with Conditional Crowdsourcing Models and Word Embeddings
Paul Felt
|
Eric Ringger
|
Kevin Seppi
|
Crowdsourcing Complex Language Resources: Playing to Annotate Dependency Syntax
Bruno Guillaume
|
Karën Fort
|
Nicolas Lefebvre
|
Sequence-to-Sequence Data Augmentation for Dialogue Language Understanding
Yutai Hou
|
Yijia Liu
|
Wanxiang Che
|
Ting Liu
|
Learning from Measurements in Crowdsourcing Models: Inferring Ground Truth from Diverse Annotation Types
Paul Felt
|
Eric Ringger
|
Jordan Boyd-Graber
|
Kevin Seppi
|
AnlamVer: Semantic Model Evaluation Dataset for Turkish - Word Similarity and Relatedness
Gökhan Ercan
|
Olcay Taner Yıldız
|
Feasibility of Human-in-the-loop Minimum Error Rate Training
Omar F. Zaidan
|
Chris Callison-Burch
|
Fast, Cheap, and Creative: Evaluating Translation Quality Using Amazon’s Mechanical Turk
Chris Callison-Burch
|
How well does active learning actually work? Time-based evaluation of cost-reduction strategies for language documentation.
Jason Baldridge
|
Alexis Palmer
|
Incorporating Content Structure into Text Analysis Applications
Christina Sauper
|
Aria Haghighi
|
Regina Barzilay
|
NLP on Spoken Documents Without ASR
Mark Dredze
|
Aren Jansen
|
Glen Coppersmith
|
Ken Church
|
Data-Driven Response Generation in Social Media
Alan Ritter
|
Colin Cherry
|
William B. Dolan
|
Divide and Conquer: Crowdsourcing the Creation of Cross-Lingual Textual Entailment Corpora
Matteo Negri
|
Luisa Bentivogli
|
Yashar Mehdad
|
Danilo Giampiccolo
|
Alessandro Marchetti
|
Active Learning with Amazon Mechanical Turk
Florian Laws
|
Christian Scheible
|
Hinrich Schütze
|
Lyrics, Music, and Emotions
Rada Mihalcea
|
Carlo Strapparava
|
Unsupervised Induction of Contingent Event Pairs from Film Scenes
Zhichao Hu
|
Elahe Rahimtoroghi
|
Larissa Munishkina
|
Reid Swanson
|
Marilyn A. Walker
|
Understanding and Quantifying Creativity in Lexical Composition
Polina Kuznetsova
|
Jianfu Chen
|
Yejin Choi
|
Generating Coherent Event Schemas at Scale
Niranjan Balasubramanian
|
Stephen Soderland
|
Mausam
|
Oren Etzioni
|
Exploring Demographic Language Variations to Improve Multilingual Sentiment Analysis in Social Media
Svitlana Volkova
|
Theresa Wilson
|
David Yarowsky
|
Joint Emotion Analysis via Multi-task Gaussian Processes
Daniel Beck
|
Trevor Cohn
|
Lucia Specia
|
Major Life Event Extraction from Twitter based on Congratulations/Condolences Speech Acts
Jiwei Li
|
Alan Ritter
|
Claire Cardie
|
Eduard Hovy
|
Noise or additional information? Leveraging crowdsource annotation item agreement for natural language tasks.
Emily Jamison
|
Iryna Gurevych
|
Event Detection and Factuality Assessment with Non-Expert Supervision
Kenton Lee
|
Yoav Artzi
|
Yejin Choi
|
Luke Zettlemoyer
|
Estimation of Discourse Segmentation Labels from Crowd Data
Ziheng Huang
|
Jialu Zhong
|
Rebecca J. Passonneau
|
Reinforcement Learning for Bandit Neural Machine Translation with Simulated Human Feedback
Khanh Nguyen
|
Hal Daumé III
|
Jordan Boyd-Graber
|
Finding Patterns in Noisy Crowds: Regression-based Annotation Aggregation for Crowdsourced Data
Natalie Parde
|
Rodney Nielsen
|
CROWD-IN-THE-LOOP: A Hybrid Approach for Annotating Semantic Roles
Chenguang Wang
|
Alan Akbik
|
Laura Chiticariu
|
Yunyao Li
|
Fei Xia
|
Anbang Xu
|
Sequence Effects in Crowdsourced Annotations
Nitika Mathur
|
Timothy Baldwin
|
Trevor Cohn
|
Bringing Structure into Summaries: Crowdsourcing a Benchmark Corpus of Concept Maps
Tobias Falke
|
Iryna Gurevych
|
Weeding out Conventionalized Metaphors: A Corpus of Novel Metaphor Annotations
Erik-Lân Do Dinh
|
Hannah Wieland
|
Iryna Gurevych
|
A Probabilistic Annotation Model for Crowdsourcing Coreference
Silviu Paun
|
Jon Chamberlain
|
Udo Kruschwitz
|
Juntao Yu
|
Massimo Poesio
|
Interpretation of Natural Language Rules in Conversational Machine Reading
Marzieh Saeidi
|
Max Bartolo
|
Patrick Lewis
|
Sameer Singh
|
Tim Rocktäschel
|
Mike Sheldon
|
Guillaume Bouchard
|
Sebastian Riedel
|
CLex: A Lexicon for Exploring Color, Concept and Emotion Associations in Language
Svitlana Volkova
|
William B. Dolan
|
Theresa Wilson
|
Discriminating Rhetorical Analogies in Social Media
Christoph Lofi
|
Christian Nieke
|
Nigel Collier
|
A Graph-Based Approach to String Regeneration
Matic Horvat
|
William Byrne
|
Automatic Extraction of News Values from Headline Text
Alicja Piotrkowicz
|
Vania Dimitrova
|
Katja Markert
|
Lexicon-Based Methods for Sentiment Analysis
Maite Taboada
|
Julian Brooke
|
Milan Tofiloski
|
Kimberly Voll
|
Manfred Stede
|
Last Words: Amazon Mechanical Turk: Gold Mine or Coal Mine?
Karën Fort
|
Gilles Adda
|
K. Bretonnel Cohen
|
What Determines Inter-Coder Agreement in Manual Annotations? A Meta-Analytic Investigation
Petra Saskia Bayerl
|
Karsten Ingmar Paul
|
Did It Happen? The Pragmatic Complexity of Veridicality Assessment
Marie-Catherine de Marneffe
|
Christopher D. Manning
|
Christopher Potts
|
Last Words: On the Problem of Theoretical Terms in Empirical Computational Linguistics
Stefan Riezler
|
Making the Most of Crowdsourced Document Annotations: Confused Supervised LDA
Paul Felt
|
Eric Ringger
|
Jordan Boyd-Graber
|
Kevin Seppi
|
Getting Reliable Annotations for Sarcasm in Online Dialogues
Reid Swanson
|
Stephanie Lukin
|
Luke Eisenberg
|
Thomas Corcoran
|
Marilyn Walker
|
Augmenting English Adjective Senses with Supersenses
Yulia Tsvetkov
|
Nathan Schneider
|
Dirk Hovy
|
Archna Bhatia
|
Manaal Faruqui
|
Chris Dyer
|
Momresp: A Bayesian Model for Multi-Annotator Document Labeling
Paul Felt
|
Robbie Haertel
|
Eric Ringger
|
Kevin Seppi
|
Crowdsourcing for the identification of event nominals: an experiment
Rachele Sprugnoli
|
Alessandro Lenci
|
Can the Crowd be Controlled?: A Case Study on Crowd Sourcing and Automatic Validation of Completed Tasks based on User Modeling
Balamurali A.R
|
A SICK cure for the evaluation of compositional distributional semantic models
Marco Marelli
|
Stefano Menini
|
Marco Baroni
|
Luisa Bentivogli
|
Raffaella Bernardi
|
Roberto Zamparelli
|
Can Crowdsourcing be used for Effective Annotation of Arabic?
Wajdi Zaghouani
|
Kais Dukes
|
Designing and Evaluating a Reliable Corpus of Web Genres via Crowd-Sourcing
Noushin Rezapour Asheghi
|
Serge Sharoff
|
Katja Markert
|
Crowdsourcing as a preprocessing for complex semantic annotation tasks
Héctor Martínez Alonso
|
Lauren Romeo
|
Corpus Annotation through Crowdsourcing: Towards Best Practice Guidelines
Marta Sabou
|
Kalina Bontcheva
|
Leon Derczynski
|
Arno Scharl
|
When Transliteration Met Crowdsourcing: An Empirical Study of Transliteration via Crowdsourcing using Efficient, Non-redundant and Fair Quality Control
Mitesh M. Khapra
|
Ananthakrishnan Ramanathan
|
Anoop Kunchukuttan
|
Karthik Visweswariah
|
Pushpak Bhattacharyya
|
Towards a Corpus of Violence Acts in Arabic Social Media
Ayman Alhelbawy
|
Massimo Poesio
|
Udo Kruschwitz
|
Phrase Detectives Corpus 1.0 Crowdsourced Anaphoric Coreference.
Jon Chamberlain
|
Massimo Poesio
|
Udo Kruschwitz
|
Crowdsourcing a Large Dataset of Domain-Specific Context-Sensitive Semantic Verb Relations
Maria Sukhareva
|
Judith Eckle-Kohler
|
Ivan Habernal
|
Iryna Gurevych
|
Crowdsourcing a Multi-lingual Speech Corpus: Recording, Transcription and Annotation of the CrowdIS Corpora
Andrew Caines
|
Christian Bentz
|
Calbert Graham
|
Tim Polzehl
|
Paula Buttery
|
Crowdsourcing Ontology Lexicons
Bettina Lanser
|
Christina Unger
|
Philipp Cimiano
|
Focus Annotation of Task-based Data: A Comparison of Expert and Crowd-Sourced Annotation in a Reading Comprehension Corpus
Kordula De Kuthy
|
Ramon Ziai
|
Detmar Meurers
|
SLIDE - a Sentiment Lexicon of Common Idioms
Charles Jochim
|
Francesca Bonin
|
Roy Bar-Haim
|
Noam Slonim
|
Improving Crowdsourcing-Based Annotation of Japanese Discourse Relations
Yudai Kishimoto
|
Shinnosuke Sawada
|
Yugo Murawaki
|
Daisuke Kawahara
|
Sadao Kurohashi
|
For a few dollars less: Identifying review pages sans human labels
Luciano Barbosa
|
Ravi Kumar
|
Bo Pang
|
Andrew Tomkins
|
Multi-Prototype Vector-Space Models of Word Meaning
Joseph Reisinger
|
Raymond J. Mooney
|
Cheap, Fast and Good Enough: Automatic Speech Recognition with Non-Expert Transcription
Scott Novotney
|
Chris Callison-Burch
|
Time-Efficient Creation of an Accurate Sentence Fusion Corpus
Kathleen McKeown
|
Sara Rosenthal
|
Kapil Thadani
|
Coleman Moore
|
Crowdsourcing the evaluation of a domain-adapted named entity recognition system
Asad B. Sayeed
|
Timothy J. Meyer
|
Hieu C. Nguyen
|
Olivia Buzek
|
Amy Weinberg
|
Predicting Human-Targeted Translation Edit Rate via Untrained Human Annotators
Omar F. Zaidan
|
Chris Callison-Burch
|
Some Empirical Evidence for Annotation Noise in a Benchmarked Dataset
Beata Beigman Klebanov
|
Eyal Beigman
|
The Best Lexical Metric for Phrase-Based Statistical MT System Optimization
Daniel Cer
|
Christopher D. Manning
|
Daniel Jurafsky
|
Cross-lingual Induction of Selectional Preferences with Bilingual Vector Spaces
Yves Peirsman
|
Sebastian Padó
|
Distinguishing Use and Mention in Natural Language
Shomir Wilson
|
Expectations of Word Sense in Parallel Corpora
Xuchen Yao
|
Benjamin Van Durme
|
Chris Callison-Burch
|
Mind the Gap: Learning to Choose Gaps for Question Generation
Lee Becker
|
Sumit Basu
|
Lucy Vanderwende
|
Embracing Ambiguity: A Comparison of Annotation Methodologies for Crowdsourcing Word Sense Labels
David Jurgens
|
An opinion about opinions about opinions: subjectivity and the aggregate reader
Asad Sayeed
|
Learning Whom to Trust with MACE
Dirk Hovy
|
Taylor Berg-Kirkpatrick
|
Ashish Vaswani
|
Eduard Hovy
|
Corpus-based discovery of semantic intensity scales
Chaitanya Shivade
|
Marie-Catherine de Marneffe
|
Eric Fosler-Lussier
|
Albert M. Lai
|
Cost Optimization in Crowdsourcing Translation: Low cost translations made even cheaper
Mingkun Gao
|
Wei Xu
|
Chris Callison-Burch
|
Testing and Comparing Computational Approaches for Identifying the Language of Framing in Political News
Eric Baumer
|
Elisha Elovic
|
Ying Qin
|
Francesca Polletta
|
Geri Gay
|
Effective Crowd Annotation for Relation Extraction
Angli Liu
|
Stephen Soderland
|
Jonathan Bragg
|
Christopher H. Lin
|
Xiao Ling
|
Daniel S. Weld
|
The Importance of Calibration for Estimating Proportions from Annotations
Dallas Card
|
Noah A. Smith
|
Estimating Summary Quality with Pairwise Preferences
Markus Zopf
|
Effective Crowdsourcing for a New Type of Summarization Task
Youxuan Jiang
|
Catherine Finegan-Dollak
|
Jonathan K. Kummerfeld
|
Walter Lasecki
|
Data Collection for Dialogue System: A Startup Perspective
Yiping Kang
|
Yunqi Zhang
|
Jonathan K. Kummerfeld
|
Lingjia Tang
|
Jason Mars
|
Learning with Annotation Noise
Eyal Beigman
|
Beata Beigman Klebanov
|
Distant supervision for relation extraction without labeled data
Mike Mintz
|
Steven Bills
|
Rion Snow
|
Daniel Jurafsky
|
The Lie Detector: Explorations in the Automatic Recognition of Deceptive Language
Rada Mihalcea
|
Carlo Strapparava
|
“Was It Good? It Was Provocative.” Learning the Meaning of Scalar Adjectives
Marie-Catherine de Marneffe
|
Christopher D. Manning
|
Christopher Potts
|
Bucking the Trend: Large-Scale Cost-Focused Active Learning for Statistical Machine Translation
Michael Bloodgood
|
Chris Callison-Burch
|
Learning Script Knowledge with Web Experiments
Michaela Regneri
|
Alexander Koller
|
Manfred Pinkal
|
Interactive Topic Modeling
Yuening Hu
|
Jordan Boyd-Graber
|
Brianna Satinoff
|
Reordering Metrics for MT
Alexandra Birch
|
Miles Osborne
|
Crowdsourcing Translation: Professional Quality from Non-Professionals
Omar F. Zaidan
|
Chris Callison-Burch
|
Automatic Labelling of Topic Models
Jey Han Lau
|
Karl Grieser
|
David Newman
|
Timothy Baldwin
|
Modeling Wisdom of Crowds Using Latent Mixture of Discriminative Experts
Derya Ozkan
|
Louis-Philippe Morency
|
Fast Online Lexicon Learning for Grounded Language Acquisition
David Chen
|
Improving Word Representations via Global Context and Multiple Word Prototypes
Eric Huang
|
Richard Socher
|
Christopher Manning
|
Andrew Ng
|
Ecological Evaluation of Persuasive Messages Using Google AdWords
Marco Guerini
|
Carlo Strapparava
|
Oliviero Stock
|
Crowdsourcing Inference-Rule Evaluation
Naomi Zeichner
|
Jonathan Berant
|
Ido Dagan
|
How Are Spelling Errors Generated and Corrected? A Study of Corrected and Uncorrected Spelling Errors Using Keystroke Logs
Yukino Baba
|
Hisami Suzuki
|
Modelling Annotator Bias with Multi-task Gaussian Processes: An Application to Machine Translation Quality Estimation
Trevor Cohn
|
Lucia Specia
|
Collective Annotation of Linguistic Resources: Basic Principles and a Formal Model
Ulle Endriss
|
Raquel Fernández
|
Crowd Prefers the Middle Path: A New IAA Metric for Crowdsourcing Reveals Turker Biases in Query Segmentation
Rohan Ramanath
|
Monojit Choudhury
|
Kalika Bali
|
Rishiraj Saha Roy
|
Exploring Sentiment in Social Media: Bootstrapping Subjectivity Clues from Multilingual Twitter Streams
Svitlana Volkova
|
Theresa Wilson
|
David Yarowsky
|
Annotation of regular polysemy and underspecification
Héctor Martínez Alonso
|
Bolette Sandford Pedersen
|
Núria Bel
|
Outsourcing FrameNet to the Crowd
Marco Fossati
|
Claudio Giuliano
|
Sara Tonelli
|
TransDoop: A Map-Reduce based Crowdsourced Translation for Complex Domain
Anoop Kunchukuttan
|
Rajen Chatterjee
|
Shourya Roy
|
Abhijit Mishra
|
Pushpak Bhattacharyya
|
Active Learning with Efficient Feature Weighting Methods for Improving Data Quality and Classification Accuracy
Justin Martineau
|
Lu Chen
|
Doreen Cheng
|
Amit Sheth
|
Are Two Heads Better than One? Crowdsourced Translation via a Two-Step Collaboration of Non-Professional Translators and Editors
Rui Yan
|
Mingkun Gao
|
Ellie Pavlick
|
Chris Callison-Burch
|
Improving the Recognizability of Syntactic Relations Using Contextualized Examples
Aditi Muralidharan
|
Marti A. Hearst
|
Experiments with crowdsourced re-annotation of a POS tagging data set
Dirk Hovy
|
Barbara Plank
|
Anders Søgaard
|
Difficult Cases: From Data to Learning, and Back
Beata Beigman Klebanov
|
Eyal Beigman
|
Modeling Factuality Judgments in Social Media Text
Sandeep Soni
|
Tanushree Mitra
|
Eric Gilbert
|
Jacob Eisenstein
|
Depeche Mood: a Lexicon for Emotion Analysis from Crowd Annotated News
Jacopo Staiano
|
Marco Guerini
|
Improving social relationships in face-to-face human-agent interactions: when the agent wants to know user’s likes and dislikes
Caroline Langlet
|
Chloé Clavel
|
TR9856: A Multi-word Term Relatedness Benchmark
Ran Levy
|
Liat Ein-Dor
|
Shay Hummel
|
Ruty Rinott
|
Noam Slonim
|
ALTO: Active Learning with Topic Overviews for Speeding Label Induction and Document Labeling
Forough Poursabzi-Sangdeh
|
Jordan Boyd-Graber
|
Leah Findlater
|
Kevin Seppi
|
Aggregating and Predicting Sequence Labels from Crowd Annotations
An Thanh Nguyen
|
Byron Wallace
|
Junyi Jessy Li
|
Ani Nenkova
|
Matthew Lease
|
Detecting annotation noise in automatically labelled data
Ines Rehbein
|
Josef Ruppenhofer
|
TALEN: Tool for Annotation of Low-resource ENtities
Stephen Mayhew
|
Dan Roth
|
The Language Demographics of Amazon Mechanical Turk
Ellie Pavlick
|
Matt Post
|
Ann Irvine
|
Dmitry Kachaev
|
Chris Callison-Burch
|
TreeTalk: Composition and Compression of Trees for Image Descriptions
Polina Kuznetsova
|
Vicente Ordonez
|
Tamara L. Berg
|
Yejin Choi
|
It’s All Fun and Games until Someone Annotates: Video Games with a Purpose for Linguistic Annotation
David Jurgens
|
Roberto Navigli
|
Learning to Make Inferences in a Semantic Parsing Task
Kyle Richardson
|
Jonas Kuhn
|
Replicability Analysis for Natural Language Processing: Testing Significance with Multiple Datasets
Rotem Dror
|
Gili Baumer
|
Marina Bogomolov
|
Roi Reichart
|
Comparing Bayesian Models of Annotation
Silviu Paun
|
Bob Carpenter
|
Jon Chamberlain
|
Dirk Hovy
|
Udo Kruschwitz
|
Massimo Poesio
|
Collecting and Evaluating Lexical Polarity with A Game With a Purpose
Mathieu Lafourcade
|
Alain Joubert
|
Nathalie Le Brun
|
SemEval-2 Task 9: The Interpretation of Noun Compounds Using Paraphrasing Verbs and Prepositions
Cristina Butnariu
|
Su Nam Kim
|
Preslav Nakov
|
Diarmuid Ó Séaghdha
|
Stan Szpakowicz
|
Tony Veale
|
SemEval-2012 Task 2: Measuring Degrees of Relational Similarity
David Jurgens
|
Saif Mohammad
|
Peter Turney
|
Keith Holyoak
|
Taking the best from the Crowd: Learning Question Passage Classification from Noisy Data
Azad Abad
|
Alessandro Moschitti
|
Digital Operatives at SemEval-2018 Task 8: Using dependency features for malware NLP
Chris Brew
|
Towards Efficient Machine Translation Evaluation by Modelling Annotators
Nitika Mathur
|
Timothy Baldwin
|
Trevor Cohn
|
Data Quality from Crowdsourcing: A Study of Annotation Selection Criteria
Pei-Yun Hsueh
|
Prem Melville
|
Vikas Sindhwani
|
Proactive Learning for Building Machine Translation Systems for Minority Languages
Vamshi Ambati
|
Jaime Carbonell
|
SemEval-2010 Task 9: The Interpretation of Noun Compounds Using Paraphrasing Verbs and Prepositions
Cristina Butnariu
|
Su Nam Kim
|
Preslav Nakov
|
Diarmuid Ó Séaghdha
|
Stan Szpakowicz
|
Tony Veale
|
Emotions Evoked by Common Words and Phrases: Using Mechanical Turk to Create an Emotion Lexicon
Saif Mohammad
|
Peter Turney
|
Identifying Emotions, Intentions, and Attitudes in Text Using a Game with a Purpose
Lisa Pearl
|
Mark Steyvers
|
Creating Speech and Language Data With Amazon’s Mechanical Turk
Chris Callison-Burch
|
Mark Dredze
|
Clustering dictionary definitions using Amazon Mechanical Turk
Gabriel Parent
|
Maxine Eskenazi
|
Rating Computer-Generated Questions with Mechanical Turk
Michael Heilman
|
Noah A. Smith
|
Crowdsourced Accessibility: Elicitation of Wikipedia Articles
Scott Novotney
|
Chris Callison-Burch
|
Using Amazon Mechanical Turk for Transcription of Non-Native Speech
Keelan Evanini
|
Derrick Higgins
|
Klaus Zechner
|
Can Crowds Build parallel corpora for Machine Translation Systems?
Vamshi Ambati
|
Stephan Vogel
|
Annotating Large Email Datasets for Named Entity Recognition with Mechanical Turk
Nolan Lawson
|
Kevin Eustice
|
Mike Perkowitz
|
Meliha Yetisgen-Yildiz
|
Annotating Named Entities in Twitter Data with Crowdsourcing
Tim Finin
|
William Murnane
|
Anand Karandikar
|
Nicholas Keller
|
Justin Martineau
|
Mark Dredze
|
MTurk Crowdsourcing: A Viable Method for Rapid Discovery of Arabic Nicknames?
Chiara Higgins
|
Elizabeth McGrath
|
Laila Moretto
|
Using Mechanical Turk to Annotate Lexicons for Less Commonly Used Languages
Ann Irvine
|
Alexandre Klementiev
|
Opinion Mining of Spanish Customer Comments with Non-Expert Annotations on Mechanical Turk
Bart Mellebeek
|
Francesc Benavent
|
Jens Grivolla
|
Joan Codina
|
Marta R. Costa-jussà
|
Rafael Banchs
|
Crowdsourcing and language studies: the new generation of linguistic data
Robert Munro
|
Steven Bethard
|
Victor Kuperman
|
Vicky Tzuyin Lai
|
Robin Melnick
|
Christopher Potts
|
Tyler Schnoebelen
|
Harry Tily
|
Collecting Image Annotations Using Amazon’s Mechanical Turk
Cyrus Rashtchian
|
Peter Young
|
Micah Hodosh
|
Julia Hockenmaier
|
Non-Expert Evaluation of Summarization Systems is Risky
Dan Gillick
|
Yang Liu
|
Evaluation of Commonsense Knowledge with Mechanical Turk
Jonathan Gordon
|
Benjamin Van Durme
|
Lenhart Schubert
|
Cheap Facts and Counter-Facts
Rui Wang
|
Chris Callison-Burch
|
Amazon Mechanical Turk for Subjectivity Word Sense Disambiguation
Cem Akkaya
|
Alexander Conrad
|
Janyce Wiebe
|
Rada Mihalcea
|
Non-Expert Correction of Automatically Generated Relation Annotations
Matthew R. Gormley
|
Adam Gerber
|
Mary Harper
|
Mark Dredze
|
Using Mechanical Turk to Build Machine Translation Evaluation Sets
Michael Bloodgood
|
Chris Callison-Burch
|
Creating a Bi-lingual Entailment Corpus through Translations with Mechanical Turk: $100 for a 10-day Rush
Matteo Negri
|
Yashar Mehdad
|
Rethinking Grammatical Error Annotation and Evaluation with the Amazon Mechanical Turk
Joel Tetreault
|
Elena Filatova
|
Martin Chodorow
|
Anveshan: A Framework for Analysis of Multiple Annotators’ Labeling Behavior
Vikas Bhardwaj
|
Rebecca Passonneau
|
Ansaf Salleb-Aouissi
|
Nancy Ide
|
PackPlay: Mining Semantic Data in Collaborative Games
Nathan Green
|
Paul Breimyer
|
Vinay Kumar
|
Nagiza Samatova
|
No Sentence Is Too Confusing To Ignore
Paul Cook
|
Suzanne Stevenson
|
Expanding textual entailment corpora from Wikipedia using co-training
Fabio Massimo Zanzotto
|
Marco Pennacchiotti
|
Pruning Non-Informative Text Through Non-Expert Annotations to Improve Aspect-Level Sentiment Classification
Ji Fang
|
Bob Price
|
Lotti Price
|
Using Query Patterns to Learn the Duration of Events
Andrey Gusev
|
Nathanael Chambers
|
Divye Raj Khilnani
|
Pranav Khaitan
|
Steven Bethard
|
Dan Jurafsky
|
How Good is the Crowd at “real” WSD?
Jisup Hong
|
Collin F. Baker
|
Reducing the Need for Double Annotation
Dmitriy Dligach
|
Martha Palmer
|
Crowdsourcing Word Sense Definition
Anna Rumshisky
|
Colourful Language: Measuring Word-Colour Associations
Saif Mohammad
|
How can you say such things?!?: Recognizing Disagreement in Informal Political Argument
Rob Abbott
|
Marilyn Walker
|
Pranav Anand
|
Jean E. Fox Tree
|
Robeson Bowmani
|
Joseph King
|
Paraphrase Fragment Extraction from Monolingual Comparable Corpora
Rui Wang
|
Chris Callison-Burch
|
Readability Annotation: Replacing the Expert by the Crowd
Philip van Oosten
|
Véronique Hoste
|
Crowdsourcing syntactic relatedness judgements for opinion mining in the study of information technology adoption
Asad B. Sayeed
|
Bryan Rusk
|
Martin Petrov
|
Hieu C. Nguyen
|
Timothy J. Meyer
|
Amy Weinberg
|
Towards Strict Sentence Intersection: Decoding and Evaluation Strategies
Kapil Thadani
|
Kathleen McKeown
|
Evaluating unsupervised learning for natural language processing tasks
Andreas Vlachos
|
Language Identification for Creating Language-Specific Twitter Collections
Shane Bergsma
|
Paul McNamee
|
Mossaab Bagdouri
|
Clayton Fink
|
Theresa Wilson
|
Digitizing 18th-Century French Literature: Comparing transcription methods for a critical edition text
Ann Irvine
|
Laure Marcellesi
|
Afra Zomorodian
|
Really? Well. Apparently Bootstrapping Improves the Performance of Sarcasm and Nastiness Classifiers for Online Dialogue
Stephanie Lukin
|
Marilyn Walker
|
Fine-Grained Emotion Recognition in Olympic Tweets Based on Human Computation
Valentina Sintsova
|
Claudiu Musat
|
Pearl Pu
|
Continuous Measurement Scales in Human Evaluation of Machine Translation
Yvette Graham
|
Timothy Baldwin
|
Alistair Moffat
|
Justin Zobel
|
A Framework for (Under)specifying Dependency Syntax without Overloading Annotators
Nathan Schneider
|
Brendan O’Connor
|
Naomi Saphra
|
David Bamman
|
Manaal Faruqui
|
Noah A. Smith
|
Chris Dyer
|
Jason Baldridge
|
Annotating Anaphoric Shell Nouns with their Antecedents
Varada Kolhatkar
|
Heike Zinsmeister
|
Graeme Hirst
|
The Benefits of a Model of Annotation
Rebecca J. Passonneau
|
Bob Carpenter
|
Ranking the annotators: An agreement study on argumentation structure
Andreas Peldszus
|
Manfred Stede
|
Gathering and Generating Paraphrases from Twitter with Application to Normalization
Wei Xu
|
Alan Ritter
|
Ralph Grishman
|
Better Word Representations with Recursive Neural Networks for Morphology
Thang Luong
|
Richard Socher
|
Christopher Manning
|
Analyzing Argumentative Discourse Units in Online Interactions
Debanjan Ghosh
|
Smaranda Muresan
|
Nina Wacholder
|
Mark Aakhus
|
Matthew Mitsui
|
Exploring Mental Lexicon in an Efficient and Economic Way: Crowdsourcing Method for Linguistic Experiments
Shichang Wang
|
Chu-Ren Huang
|
Yao Yao
|
Angel Chan
|
Multiple views as aid to linguistic annotation error analysis
Marilena Di Bari
|
Serge Sharoff
|
Martin Thomas
|
Building a Semantic Transparency Dataset of Chinese Nominal Compounds: A Practice of Crowdsourcing Methodology
Shichang Wang
|
Chu-Ren Huang
|
Yao Yao
|
Angel Chan
|
And That’s A Fact: Distinguishing Factual and Emotional Argumentation in Online Dialogue
Shereen Oraby
|
Lena Reed
|
Ryan Compton
|
Ellen Riloff
|
Marilyn Walker
|
Steve Whittaker
|
Oracle and Human Baselines for Native Language Identification
Shervin Malmasi
|
Joel Tetreault
|
Mark Dras
|
Muddying The Multiword Expression Waters: How Cognitive Demand Affects Multiword Expression Production
Adam Goodkind
|
Andrew Rosenberg
|
Scaling Semantic Frame Annotation
Nancy Chang
|
Praveen Paritosh
|
David Huynh
|
Collin Baker
|
What I’ve learned about annotating informal text (and why you shouldn’t take my word for it)
Nathan Schneider
|
Effectively Crowdsourcing Radiology Report Annotations
Anne Cocos
|
Aaron Masino
|
Ting Qian
|
Ellie Pavlick
|
Chris Callison-Burch
|
Create a Manual Chinese Word Segmentation Dataset Using Crowdsourcing Method
Shichang Wang
|
Chu-Ren Huang
|
Yao Yao
|
Angel Chan
|
MultiLing 2015: Multilingual Summarization of Single and Multi-Documents, On-line Fora, and Call-center Conversations
George Giannakopoulos
|
Jeff Kubina
|
John Conroy
|
Josef Steinberger
|
Benoit Favre
|
Mijail Kabadjov
|
Udo Kruschwitz
|
Massimo Poesio
|
Evaluation of Crowdsourced User Input Data for Spoken Dialog Systems
Maria Schmidt
|
Markus Müller
|
Martin Wagner
|
Sebastian Stüker
|
Alex Waibel
|
Hansjörg Hofmann
|
Steffen Werner
|
Spoken Text Difficulty Estimation Using Linguistic Features
Su-Youn Yoon
|
Yeonsuk Cho
|
Diane Napolitano
|
Constructing a Dictionary Describing Feature Changes of Arguments in Event Sentences
Tetsuaki Nakamura
|
Daisuke Kawahara
|
Design of Word Association Games using Dialog Systems for Acquisition of Word Association Knowledge
Yuichiro Machida
|
Daisuke Kawahara
|
Sadao Kurohashi
|
Manabu Sassano
|
Comparison of Annotating Methods for Named Entity Corpora
Kanako Komiya
|
Masaya Suzuki
|
Tomoya Iwakura
|
Minoru Sasaki
|
Hiroyuki Shinnou
|
Focus Annotation of Task-based Data: Establishing the Quality of Crowd Annotation
Kordula De Kuthy
|
Ramon Ziai
|
Detmar Meurers
|
Similarity-Based Alignment of Monolingual Corpora for Text Simplification Purposes
Sarah Albertsson
|
Evelina Rennes
|
Arne Jönsson
|
Infusing NLU into Automatic Question Generation
Karen Mazidi
|
Paul Tarau
|
Crowdsourcing discourse interpretations: On the influence of context and the reliability of a connective insertion task
Merel Scholman
|
Vera Demberg
|
Proactive Learning for Named Entity Recognition
Maolin Li
|
Nhung Nguyen
|
Sophia Ananiadou
|
Collecting fluency corrections for spoken learner English
Andrew Caines
|
Emma Flint
|
Paula Buttery
|
Evaluating Natural Language Understanding Services for Conversational Question Answering Systems
Daniel Braun
|
Adrian Hernandez Mendez
|
Florian Matthes
|
Manfred Langen
|
Expert, Crowdsourced, and Machine Assessment of Suicide Risk via Online Postings
Han-Chin Shing
|
Suraj Nair
|
Ayah Zirikly
|
Meir Friedenberg
|
Hal Daumé III
|
Philip Resnik
|
Causality Analysis of Twitter Sentiments and Stock Market Returns
Narges Tabari
|
Piyusha Biswas
|
Bhanu Praneeth
|
Armin Seyeditabari
|
Mirsad Hadzikadic
|
Wlodek Zadrozny
|
Crowdsourcing StoryLines: Harnessing the Crowd for Causal Relation Annotation
Tommaso Caselli
|
Oana Inel
|
Creating a Dataset for Multilingual Fine-grained Emotion-detection Using Gamification-based Annotation
Emily Öhman
|
Kaisla Kajava
|
Jörg Tiedemann
|
Timo Honkela
|
Needle in a Haystack: Reducing the Costs of Annotating Rare-Class Instances in Imbalanced Datasets
Emily Jamison
|
Iryna Gurevych
|
Mechanical Turk-based Experiment vs Laboratory-based Experiment: A Case Study on the Comparison of Semantic Transparency Rating Data
Shichang Wang
|
Chu-Ren Huang
|
Yao Yao
|
Angel Chan
|
URLs
http://mturk.com
http://blog.doloreslabs.com/?p=109
http://blog.doloreslabs.com/topics/wisdom/
http://ai.stanford.edu/
http://vision.cs.uiuc.edu/annotation/
Field of Study
Task: Word Sense Disambiguation | Textual Entailment | Question Answering
Language: English
Similar Papers
The Pragmatics of Referring and the Modality of Communication
Philip R. Cohen
|
A computational account of comparative implicatures for a spoken dialogue agent
Luciana Benotti
|
David Traum
|
THE INTONATIONAL STRUCTURING OF DISCOURSE
Julia Hirschberg
|
Janet Pierrehumbert
|
Coordination and context-dependence in the generation of embodied conversation
Justine Cassell
|
Matthew Stone
|
Hao Yan
|
Gesture Theory is Linguistics: On Modelling Multimodality as Prosody
Dafydd Gibbon
|