Cross-lingual transfer of knowledge in distributional language models: Experiments in Hungarian
Author(s): Attila Novák, Borbála Novák
Subject(s): Morphology
Published by: Akadémiai Kiadó
Keywords: distributional vs. generative models of language; zero-shot cross-lingual knowledge transfer; multilingual contextual neural language models; meaning representation parsing; named entity recognition
Summary/Abstract: In this paper, we argue that the convincing performance of recent deep-neural-network-based NLP applications demonstrates that the distributionalist approach to language description has proven more successful than the earlier subtle rule-based models created by the generative school. The now ubiquitous neural models handle ambiguity naturally and achieve human-like linguistic performance, with most of their training consisting only of noisy raw linguistic data, without any multimodal grounding or external supervision. This refutes Chomsky's argument that no generic neural architecture can arrive at the linguistic performance exhibited by humans given the limited input available to children. In addition, we demonstrate in experiments with Hungarian as the target language that the shared internal representations in multilingually trained versions of these models enable them to transfer specific linguistic skills, including structured annotation skills, from one language to another remarkably efficiently.
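A minimal sketch of the kind of zero-shot cross-lingual transfer the abstract describes, assuming the Hugging Face transformers library; the model name and the elided fine-tuning step are illustrative assumptions, not the authors' actual experimental setup:

```python
# Sketch of zero-shot cross-lingual NER transfer (assumption: Hugging Face
# transformers; model and label count are illustrative, not the paper's setup).
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

# A multilingual contextual encoder with shared representations across languages.
model_name = "xlm-roberta-base"  # assumed choice of multilingual encoder
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=9)

# ... fine-tune `model` for NER on English annotated data only (step elided) ...

# Zero-shot application to Hungarian: because the encoder's internal
# representations are shared across languages, the English-trained tagger can
# label Hungarian text without ever seeing Hungarian training examples.
ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)
print(ner("Novák Attila Budapesten dolgozik."))
```

The same pattern extends to other structured annotation tasks the abstract mentions, such as meaning representation parsing: train on annotations in a source language, then rely on the shared multilingual representation space at inference time in the target language.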
- Issue Year: 69/2022
- Issue No: 4
- Page Range: 405-449
- Page Count: 45
- Language: English