Why Does Surprisal From Larger Transformer-Based Language Models Provide a Poorer Fit to Human Reading Times?
Byung-Doh Oh | William Schuler
Paper Details:
Year: 2023
Location: Cambridge, MA
Venue: TACL