Publication details
Selecting text entries using a few positive samples and similarity
|  |  |
|---|---|
| Authors | |
Year of publication | 2011 |
Type | Article in Periodical |
Magazine / Source | Acta Universitatis Agriculturae et Silviculturae Mendelianae Brunensis |
MU Faculty or unit | |
Citation | |
Field | Informatics |
Keywords | unlabeled text documents; one-class categorization; text similarity; ranking by similarity; pattern recognition; machine learning; natural language processing; non-semantic documents |
Description | This research was inspired by the procedures used by human bibliographic searchers: given only a few 'positive' (relevant, interesting) textual examples from a single category, quickly and simply find, in an available collection of unlabeled documents, the items most similar to the topic defined by the applicant. The problem of categorizing unlabeled relevant and irrelevant textual documents is solved here by using a small subset of relevant patterns labeled manually in advance. Unlabeled text items are compared with these labeled patterns and ranked according to their degree of similarity to them. The most similar (relevant) items appear at the top of the ranking, and items further down the ranking are gradually less similar. The authors emphasize that this simple method, aimed at processing large volumes of text entries, provides an initial filtering of acceptable accuracy: users avoid the demanding task of labeling a large number of training examples before a chosen classifier can be applied, and at the same time they obtain the relevant items quickly. The results of the ranking-based approach can also be used for subsequent text-item processing, where the number of irrelevant items is already much lower than at the beginning. Even though this relatively simple automatic search is not error-free, because documents can overlap in content, it can help to process very large volumes of unstructured textual data. |
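The abstract describes ranking unlabeled documents by their similarity to a handful of manually labeled positive examples. The paper does not specify the text representation or similarity measure here, so the sketch below is only one plausible rendering of the idea, using TF-IDF vectors and cosine similarity; the document lists and variable names are illustrative, not taken from the article.

```python
# Illustrative sketch (assumed representation): rank unlabeled documents by
# their mean cosine similarity to a few labeled positive examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

# A few manually labeled relevant ("positive") examples.
positive_docs = [
    "machine learning methods for text categorization",
    "ranking documents by similarity to labeled patterns",
]

# The unlabeled collection to be filtered.
unlabeled_docs = [
    "a survey of natural language processing techniques",
    "gardening tips for dry climates",
    "pattern recognition in large text collections",
]

# Build a single TF-IDF vector space over all documents so vocabularies match.
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(positive_docs + unlabeled_docs)
pos = matrix[: len(positive_docs)]
unl = matrix[len(positive_docs):]

# Score each unlabeled item by its mean similarity to the positive samples,
# then sort in descending order: the most similar (relevant) items come first.
scores = cosine_similarity(unl, pos).mean(axis=1)
ranking = np.argsort(-scores)
for idx in ranking:
    print(f"{scores[idx]:.3f}  {unlabeled_docs[idx]}")
```

In this reading, the user only inspects the top of the ranking (or applies a similarity threshold), which is how the approach avoids labeling a large training set while still filtering out most irrelevant items early.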