Publication details

Does Size Matter? - Comparing Evaluation Dataset Size for the Bilingual Lexicon Induction

Authors

DENISOVÁ Michaela, RYCHLÝ Pavel

Year of publication 2023
Type Article in Proceedings
Conference Proceedings of the Seventeenth Workshop on Recent Advances in Slavonic Natural Language Processing, RASLAN 2023
MU Faculty or unit

Faculty of Informatics

Keywords Cross-lingual word embeddings; Bilingual lexicon induction; Evaluation dataset’s size
Description Cross-lingual word embeddings have been a popular approach to inducing bilingual lexicons. However, the evaluation of this task varies from paper to paper, and the gold-standard dictionaries used for evaluation are frequently criticised for containing mistakes. Although there have been efforts to unify the evaluation and the gold-standard dictionaries, we propose a new property that should be considered when compiling an evaluation dataset: its size. In this paper, we evaluate three baseline models on three diverse language pairs (Estonian-Slovak, Czech-Slovak, English-Korean) and experiment with evaluation datasets of various sizes: 200, 500, 1.5K, and 3K source words. Moreover, we compare the results with a manual error analysis. In this experiment, we show whether the size of an evaluation dataset impacts the results and how to select an ideal evaluation dataset size. We make our code and datasets publicly available.
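The evaluation procedure the abstract describes, namely scoring bilingual lexicon induction against gold dictionaries subsampled to 200, 500, 1.5K, and 3K source words, could be sketched roughly as below. This is a hypothetical illustration, not the authors' released code: the names src_emb (source word-to-vector mapping), tgt_emb (target embedding matrix), tgt_words (row-aligned target vocabulary), and gold (source word mapped to its set of gold translations) are assumptions, and precision@1 with cosine nearest-neighbour retrieval stands in for whatever exact metric the paper uses.

```python
import numpy as np

def precision_at_1(src_emb, tgt_emb, tgt_words, gold):
    """Fraction of source words whose nearest cross-lingual neighbour
    (by cosine similarity) is one of their gold translations."""
    # Normalise target rows so a dot product equals cosine similarity.
    tgt_norm = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    hits = 0
    for src_word, translations in gold.items():
        v = src_emb[src_word]
        v = v / np.linalg.norm(v)
        nearest = tgt_words[int(np.argmax(tgt_norm @ v))]
        hits += nearest in translations
    return hits / len(gold)

def evaluate_at_sizes(src_emb, tgt_emb, tgt_words, gold,
                      sizes=(200, 500, 1500, 3000), seed=0):
    """Subsample the gold dictionary to each size and score P@1,
    so scores can be compared across evaluation-dataset sizes."""
    rng = np.random.default_rng(seed)
    entries = list(gold.items())
    results = {}
    for n in sizes:
        idx = rng.choice(len(entries), size=min(n, len(entries)),
                         replace=False)
        results[n] = precision_at_1(src_emb, tgt_emb, tgt_words,
                                    dict(entries[i] for i in idx))
    return results
```

Comparing the resulting scores across subsample sizes indicates how sensitive the reported numbers, and the ranking of baseline models, are to the size of the evaluation dataset, which is the question the paper investigates.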
