RepL4NLP - 2025
Total Papers: 15
Total Papers across all years: 121
Total Citations: 0
Proceedings of the 10th Workshop on Representation Learning for NLP (RepL4NLP-2025)
Vaibhav Adlakha, Alexandra Chronopoulou, Xiang Lorraine Li, Bodhisattwa Prasad Majumder, Freda Shi, Giorgos Vernikos
Efficient Document-level Event Relation Extraction
Ruochen Li, Zimu Wang, Xinya Du
Large Language Models Are Overparameterized Text Encoders
Thennal D K, Tim Fischer, Chris Biemann
Tracking Universal Features Through Fine-Tuning and Model Merging
Niels Nielsen Horn, Desmond Elliott
DEPTH: Discourse Education through Pre-Training Hierarchically
Zachary Elisha Bamberger, Ofek Glick, Chaim Baskin, Yonatan Belinkov
Investigating Adapters for Parameter-efficient Low-resource Automatic Speech Recognition
Ahnaf Mozib Samin, Shekhar Nayak, Andrea De Marco, Claudia Borg
Vocabulary-level Memory Efficiency for Language Model Fine-tuning
Miles Williams, Nikolaos Aletras
Reverse Probing: Evaluating Knowledge Transfer via Finetuned Task Embeddings for Coreference Resolution
Tatiana Anikina, Arne Binder, David Harbecke, Stalin Varanasi, Leonhard Hennig, Simon Ostermann, Sebastian Möller, Josef Van Genabith
Amuro & Char: Analyzing the Relationship between Pre-Training and Fine-Tuning of Large Language Models
Kaiser Sun, Mark Dredze
State Space Models are Strong Text Rerankers
Zhichao Xu, Jinghua Yan, Ashim Gupta, Vivek Srikumar
Prompt Tuning Can Simply Adapt Large Language Models to Text Encoders
Kaiyan Zhao, Qiyu Wu, Zhongtao Miao, Yoshimasa Tsuruoka
Cross-Modal Learning for Music-to-Music-Video Description Generation
Zhuoyuan Mao, Mengjie Zhao, Qiyu Wu, Zhi Zhong, Wei-Hsiang Liao, Hiromi Wakaki, Yuki Mitsufuji
A Comparative Study of Learning Paradigms in Large Language Models via Intrinsic Dimension
Saahith Janapati, Yangfeng Ji
Punctuation Restoration Improves Structure Understanding without Supervision
Junghyun Min, Minho Lee, Lee Woochul, Yeonsoo Lee
Choose Your Words Wisely: Domain-adaptive Masking Makes Language Models Learn Faster
Vanshpreet S. Kohli, Aaron Monis, Radhika Mamidi
Conference Topic Distribution (chart): topic categories are Linguistic, Task, Approach, Language, and Dataset.
Conference Citation Distribution: the conference papers have no citations yet.