Publication details
Fusion Strategies for Large-Scale Multi-modal Image Retrieval
| Authors | |
| --- | --- |
| Year of publication | 2017 |
| Type | Article in Proceedings |
| Conference | Transactions on Large-Scale Data- and Knowledge-Centered Systems XXXIII |
| MU Faculty or unit | |
| Citation | |
| DOI | http://dx.doi.org/10.1007/978-3-662-55696-2_5 |
| Field | Informatics |
| Keywords | Multi-modal image retrieval; fusion strategies; evaluation |
| Description | Large-scale data management and retrieval in complex domains such as images, videos, or biometric data remains one of the most important and challenging information processing tasks. Even after two decades of intensive research, many questions still need to be answered before working tools become available for everyday use. In this work, we focus on the practical applicability of different multi-modal retrieval techniques. Multi-modal searching, which combines several complementary views of complex data objects, mirrors the human thinking process and represents a very promising retrieval paradigm. However, the rapid development of modality fusion techniques in several diverse directions, combined with a lack of comparisons between individual approaches, has resulted in a confusing situation in which the applicability of individual solutions is unclear. Aiming to improve the research community's comprehension of this topic, we analyze and systematically categorize existing multi-modal search techniques, identify their strengths, and describe selected representatives. In the second part of the paper, we focus on the specific problem of large-scale multi-modal image retrieval on the web. We analyze the requirements of this task, implement several applicable fusion methods, and experimentally evaluate their performance in terms of both efficiency and effectiveness. The extensive experiments provide a unique comparison of diverse approaches to modality fusion under equal settings on two large real-world datasets. |
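The record does not spell out the fusion strategies the paper evaluates. As a purely illustrative aid, the minimal Python sketch below shows one common family of such techniques, score-level (late) fusion, where per-object similarity scores from a textual and a visual modality are normalized and combined by a weighted sum. The function name, the min-max normalization, the weight `alpha`, and the parameter choices are assumptions made for this example, not the methods or results reported in the paper.

```python
# Hypothetical sketch of score-level (late) fusion of two retrieval
# modalities; all names and parameters are illustrative assumptions.

def late_fusion(text_scores, visual_scores, alpha=0.5, k=10):
    """Combine per-object scores from two modalities by a weighted sum.

    text_scores, visual_scores: dicts mapping object id -> similarity score.
    alpha: weight of the textual modality (1 - alpha goes to the visual one).
    Returns the ids of the top-k objects under the fused score.
    """
    def normalize(scores):
        # Min-max normalization so the two score ranges are comparable.
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        return {obj: (s - lo) / span for obj, s in scores.items()}

    t, v = normalize(text_scores), normalize(visual_scores)
    # Objects missing from one modality's candidate list score 0 there.
    fused = {obj: alpha * t.get(obj, 0.0) + (1 - alpha) * v.get(obj, 0.0)
             for obj in set(t) | set(v)}
    return sorted(fused, key=fused.get, reverse=True)[:k]


# Toy usage: fuse candidate lists produced by two hypothetical searchers.
text = {"img1": 0.9, "img2": 0.4, "img3": 0.7}
visual = {"img2": 0.8, "img3": 0.6, "img4": 0.5}
print(late_fusion(text, visual, alpha=0.6, k=3))
```

A design point this sketch highlights is why evaluating fusion strategies in equal settings matters: the choice of normalization and of the modality weight can change the fused ranking even when the underlying single-modality searchers are fixed.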
Related projects: