Publication details
Scaling Learned Metric Index to 100M Datasets
Authors | |
---|---|
Year of publication | 2024 |
Type | Article in Proceedings |
Conference | 17th International Conference on Similarity Search and Applications (SISAP 2024) |
DOI | http://dx.doi.org/10.1007/978-3-031-75823-2_22 |
Keywords | learned metric index; high-dimensional data; memory efficiency; on-disk index; approximate nearest neighbor search; similarity search |
Description | Learned indexing of high-dimensional data is an indexing approach still in the process of proving its viability: the Learned Metric Index (LMI) stands as one of the pioneering methods in this regard. An earlier implementation of LMI [Slanináková et al., SISAP 2023] primarily served as an experimental prototype, operating under unrealistic assumptions such as unlimited main memory or unbounded index-construction time. Recently, however, LMI made the leap to practical applicability on real-world datasets when it was successfully deployed to efficiently index 214 million protein structures for near-instantaneous retrieval [Procházka et al., Nucleic Acids Research 2024]. This paper details the key improvements that enabled this transition: parallel query processing (with optional GPU acceleration), adaptive memory usage, pre-construction of memory buckets for contiguous access, a shift from k-means to spherical k-means clustering, and faster index construction through fewer training epochs and smaller training samples. LMI is now capable of handling 100M datasets and supports both in-memory and on-disk indexing, marking several important steps toward the practical viability of AI-enhanced indexes for high-dimensional complex data in real-world settings. |
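The abstract mentions a shift from k-means to spherical k-means clustering. As a rough illustration of the difference (this is a generic sketch, not the paper's implementation; the function name, naive first-k initialization, and toy data are assumptions for the example), spherical k-means projects all vectors onto the unit sphere and assigns each point to the centroid with the highest cosine similarity, which on the sphere is just a dot product:

```python
import numpy as np

def spherical_kmeans(x, k, iters=20):
    """Minimal spherical k-means sketch (illustrative only).

    Vectors are normalized to the unit sphere; assignment uses cosine
    similarity, and centroids are renormalized mean directions.
    """
    # Project all points onto the unit sphere.
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    # Naive initialization: take the first k points (assumes shuffled input).
    centroids = x[:k].copy()
    for _ in range(iters):
        # On the unit sphere, cosine similarity reduces to a dot product.
        labels = (x @ centroids.T).argmax(axis=1)
        for j in range(k):
            members = x[labels == j]
            if len(members):
                # New centroid: mean direction, renormalized onto the sphere.
                c = members.sum(axis=0)
                centroids[j] = c / np.linalg.norm(c)
    return labels, centroids
```

Unlike plain k-means, which partitions by Euclidean distance, this variant cares only about direction, which tends to suit high-dimensional embedding vectors compared by cosine similarity.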