Publication details

SlamaTrain – Representative Training Dataset for Slavonic Large Language Models

Authors

MEDVEĎ Marek, SABOL Radoslav, HORÁK Aleš

Year of publication 2024
Type Article in Proceedings
Conference Recent Advances in Slavonic Natural Language Processing, RASLAN 2024
MU Faculty or unit

Faculty of Informatics

Citation
web Conference proceedings
Keywords Slama models, LLM, large language models, training, dataset
Attached files
Description The Slama project focuses on building a series of foundational language models for Slavonic languages. Even though the latest development yields a number of new large pre-trained and fine-tuned models, the main data source comes from English-written websites, so the majority of the training data used for language model development is in English. Multilingual language models like Llama, GPT-4o, mT5, etc. are also predominantly (around 80%) trained on English, even though they capture the structure of dozens of languages. In this paper, we detail the process of acquiring one of the largest training datasets for Czech, Slovak and other Slavonic languages. We started with huge multi-lingual datasets, extracted the mono-lingual data and joined it with other sources. The combined mono-lingual datasets were then cleaned, deduplicated and filtered for adult content. As a result, we have obtained 71 billion tokens for the Czech and Slovak languages suitable for training the Slama language models.
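
The abstract describes the pipeline only at a high level. A minimal illustrative sketch of the three steps (mono-lingual extraction, deduplication, adult-content filtering) might look as follows; the function names, the langdetect language identifier and the empty keyword blocklist are assumptions made here for illustration, not details taken from the paper.

# Illustrative sketch only, not the authors' implementation:
# mono-lingual extraction, exact deduplication and keyword-based
# adult-content filtering over an iterable of raw documents.
import hashlib
from typing import Iterable, Iterator

from langdetect import detect, LangDetectException

# Placeholder: the actual filter terms are not listed in the abstract.
ADULT_BLOCKLIST: set[str] = set()


def extract_monolingual(docs: Iterable[str],
                        languages: frozenset = frozenset({"cs", "sk"})) -> Iterator[str]:
    """Keep only documents identified as Czech or Slovak."""
    for doc in docs:
        try:
            if detect(doc) in languages:
                yield doc
        except LangDetectException:
            continue  # skip documents too short or noisy to identify


def deduplicate(docs: Iterable[str]) -> Iterator[str]:
    """Drop exact duplicates by hashing whitespace-normalised text."""
    seen: set[str] = set()
    for doc in docs:
        digest = hashlib.sha1(" ".join(doc.split()).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            yield doc


def filter_adult(docs: Iterable[str]) -> Iterator[str]:
    """Remove documents containing any blocklisted term."""
    for doc in docs:
        text = doc.lower()
        if not any(term in text for term in ADULT_BLOCKLIST):
            yield doc


def build_dataset(raw_docs: Iterable[str]) -> Iterator[str]:
    """Chain the three steps: extract, deduplicate, filter."""
    return filter_adult(deduplicate(extract_monolingual(raw_docs)))

Exact hashing is a simplification for the sketch; at the scale reported in the paper, deduplication is typically done with near-duplicate detection (e.g. MinHash) over streamed corpus shards.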
