Publication details
Reproducible experiments with Learned Metric Index Framework
| Field | Value |
|---|---|
| Year of publication | 2023 |
| Type | Article in Periodical |
| Magazine / Source | Information Systems |
| Web | https://www.sciencedirect.com/science/article/pii/S0306437923000911 |
| DOI | http://dx.doi.org/10.1016/j.is.2023.102255 |
| Keywords | Reproducible paper; Index structures; Learned index; Unstructured data; Content-based search; Metric space |
| Description | This work is a companion reproducible paper to a previous one (Antol et al., 2021), in which we presented an alternative to the traditional paradigm of similarity searching in metric spaces, called the Learned Metric Index. Inspired by advances in learned indexing of structured data, we used machine learning models to replace index pivots, thus posing similarity search as a classification problem. This implementation proved more than competitive with conventional methods in terms of speed and recall, showing the concept to be viable. The aim of this publication is to make our source code, datasets, and experiments publicly available. For this purpose, we created a collection of Python 3 software libraries, YAML reproducible-experiment files, and JSON ground-truth files, all bundled in a Docker image, the Learned Metric Index Framework (LMIF), which can be run on any Docker-compatible operating system on a CPU with Advanced Vector Extensions (AVX). We introduce a reproducibility protocol for our experiments with LMIF and provide a closer look at the experimental process. We present new experimental results obtained by running this protocol and discuss how they differ from the results reported in our primary work (Antol et al., 2021). Finally, we argue that these results can be considered weakly reproducible (in both performance metrics), since they point to the same conclusions derived in the primary paper. |
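
The core idea summarized in the description, replacing index pivots with machine learning models so that similarity search becomes a classification problem, can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration only: the `LearnedIndexNode` class, its parameters, and the use of scikit-learn's `KMeans` and `LogisticRegression` are assumptions for exposition, not the actual LMIF code distributed in the Docker image.

```python
# Hypothetical sketch of one learned index node: a classifier learns to
# imitate a partitioning of the data, replacing pivot-based routing.
# Names and parameters are illustrative, not the actual LMIF implementation.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression


class LearnedIndexNode:
    """One level of a learned index: routes objects to buckets via a classifier."""

    def __init__(self, n_buckets=8, random_state=0):
        self.clusterer = KMeans(n_clusters=n_buckets, n_init=10, random_state=random_state)
        self.classifier = LogisticRegression(max_iter=1000)
        self.buckets = {}

    def build(self, data):
        # Derive training labels by clustering, then train the classifier
        # to reproduce that partitioning -- the "learned pivot".
        labels = self.clusterer.fit_predict(data)
        self.classifier.fit(data, labels)
        for label in np.unique(labels):
            self.buckets[label] = data[labels == label]

    def search(self, query, k=10, n_probe=2):
        # Treat search as classification: visit the n_probe most probable
        # buckets, then rank candidates by the actual metric (Euclidean here).
        probs = self.classifier.predict_proba(query.reshape(1, -1))[0]
        best = self.classifier.classes_[np.argsort(probs)[::-1][:n_probe]]
        candidates = np.vstack([self.buckets[b] for b in best])
        dists = np.linalg.norm(candidates - query, axis=1)
        return candidates[np.argsort(dists)[:k]]


# Usage on synthetic data:
rng = np.random.default_rng(42)
data = rng.normal(size=(5000, 16)).astype(np.float32)
node = LearnedIndexNode(n_buckets=8)
node.build(data)
nearest = node.search(data[0], k=5)
```

In the framework described above, such nodes would presumably be stacked into a multi-level index, with the datasets, model choices, and hyperparameters supplied through the bundled YAML experiment files rather than hard-coded as in this sketch.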