Publication information

Negation Disrupts Compositionality in Language Models: The Czech Usecase

Authors

VRABCOVÁ Tereza, SOJKA Petr

Year of publication 2024
Type Article in conference proceedings
Conference The Eighteenth Workshop on Recent Advances in Slavonic Natural Language Processing
Faculty / department of MU

Faculty of Informatics

Keywords negation; language models; machine learning
Description In most Slavic languages, negation is expressed by short “ne” tokens that do not produce a discrete change in the meaning representations learned distributionally by language models. This manifests in many problems, such as Natural Language Inference (NLI). We have created a new dataset from CsFEVER, the Czech factuality dataset, by extending it with negated versions of the hypotheses present in the dataset. We used this new dataset to evaluate publicly available language models and to study the impact of negation on NLI problems. We have confirmed that the compositionally computed representation of negation in transformers causes misunderstanding problems in Slavic languages such as Czech: reasoning is flawed more often when information is expressed using negation than when it is expressed positively without it. Our findings highlight the limitations of current transformer models in handling negation cues in Czech, emphasizing the need for further improvements to enhance language models’ understanding of Slavic languages.
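The evaluation described above can be illustrated with a minimal sketch: run an NLI model on a Czech premise paired first with a positive hypothesis and then with its negated counterpart, and compare the predicted labels. The checkpoint name and the example sentences below are illustrative assumptions, not the dataset or models used in the paper.

```python
# Minimal sketch: compare NLI predictions for a positive vs. negated Czech hypothesis.
# The checkpoint and sentences are placeholders, not the paper's actual setup.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "joeddav/xlm-roberta-large-xnli"  # assumed multilingual NLI checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "Praha je hlavní město České republiky."
hypothesis = "Praha je hlavní město."            # positive hypothesis
negated_hypothesis = "Praha není hlavní město."  # negated with the short "ne" marker

def predict(premise: str, hypothesis: str) -> str:
    """Return the NLI label (entailment / neutral / contradiction) for a pair."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return model.config.id2label[int(logits.argmax(dim=-1))]

print("positive :", predict(premise, hypothesis))          # expected: entailment
print("negated  :", predict(premise, negated_hypothesis))  # expected: contradiction
```

Flipping only the negation marker and checking whether the predicted label flips accordingly is one simple way to probe how reliably a model registers negation cues.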
Related projects:

