1525: Cross-language structural priming in recurrent neural network language models
Author: CogSci: Interdisciplinary Study of the Mind
Uploaded: 2021-10-22
Views: 174
Description:
Stefan Frank
Recurrent neural network (RNN) language models that are trained on large text corpora have shown a remarkable ability to capture properties of human syntactic processing (Linzen & Baroni, 2021). For example, the fact that these models display human-like structural priming effects (Prasad, Van Schijndel, & Linzen, 2019; van Schijndel & Linzen, 2018) suggests that they develop implicit syntactic representations that may not be unlike those of the human language system. A rarely explored question is whether RNNs can also simulate aspects of human multilingual sentence processing (Frank, 2021), even though training RNNs on two or more languages simultaneously is technically unproblematic.
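To illustrate that last point, here is a minimal sketch (not code from the talk) of how a single LSTM language model might be trained on a mixed English-Dutch stream in PyTorch, and how a cross-language structural priming effect could be probed as the surprisal reduction that a same-structure prime produces in a target sentence. The toy corpus, sentence pairs, and hyperparameters are invented for demonstration; they stand in for the large corpora the abstract refers to.

```python
# A minimal sketch: one LSTM LM trained on two languages at once, then a
# surprisal-based priming probe. All data and settings are illustrative.
import math
import torch
import torch.nn as nn

# Toy bilingual corpus: English and Dutch dative alternants in one stream.
corpus = [
    "the girl gives the boy a book",           # English double-object
    "the girl gives a book to the boy",        # English prepositional-object
    "het meisje geeft de jongen een boek",     # Dutch double-object
    "het meisje geeft een boek aan de jongen", # Dutch prepositional-object
] * 50

vocab = sorted({w for s in corpus for w in s.split()} | {"<eos>"})
stoi = {w: i for i, w in enumerate(vocab)}

def encode(sentence):
    return torch.tensor([stoi[w] for w in sentence.split() + ["<eos>"]])

class LSTMLM(nn.Module):
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, ids, state=None):
        h, state = self.lstm(self.emb(ids), state)
        return self.out(h), state

model = LSTMLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training on both languages simultaneously: nothing language-specific
# is required, which is the sense in which it is technically unproblematic.
for epoch in range(5):
    for sentence in corpus:
        ids = encode(sentence).unsqueeze(0)
        logits, _ = model(ids[:, :-1])
        loss = loss_fn(logits.squeeze(0), ids[0, 1:])
        opt.zero_grad()
        loss.backward()
        opt.step()

def surprisal(target, prime=None):
    """Mean surprisal (bits) of `target`, optionally conditioned on `prime`
    by carrying the LSTM state across the sentence boundary."""
    with torch.no_grad():
        state = None
        if prime is not None:
            _, state = model(encode(prime).unsqueeze(0))
        ids = encode(target).unsqueeze(0)
        logits, _ = model(ids[:, :-1], state)
        logp = torch.log_softmax(logits.squeeze(0), dim=-1)
        nll = -logp[torch.arange(ids.size(1) - 1), ids[0, 1:]]
        return nll.mean().item() / math.log(2)

# Cross-language priming probe: does a Dutch prepositional-object prime
# lower the surprisal of an English prepositional-object target more than
# a Dutch double-object prime does?
target = "the girl gives a book to the boy"
same = surprisal(target, prime="het meisje geeft een boek aan de jongen")
diff = surprisal(target, prime="het meisje geeft de jongen een boek")
print(f"surprisal after same-structure prime:      {same:.3f} bits")
print(f"surprisal after different-structure prime: {diff:.3f} bits")
```

Carrying the recurrent state across the prime-target boundary is only one simple way to operationalize priming; the adaptation-based approach of van Schijndel & Linzen (2018) and Prasad et al. (2019) instead briefly fine-tunes the model on the prime before measuring target surprisal.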