Filterable HNSW - part 2

In a previous article on filtering during nearest-neighbor search, we discussed the theoretical background.
This time I am going to present a C++ implementation with Python bindings.

As a base I took hnswlib, a stand-alone, header-only implementation of HNSW.

With the new implementation it is now possible to assign an arbitrary number of tags to any point with a few lines of code:

# ids - list of point ids
# tag - tag id
hnsw.add_tags(ids, tag)


A group of points sharing the same tag can be searched separately from the rest:

query_vector = ...
tag_to_search_in = 42
# Search among points with this tag
condition = [[(False, tag_to_search_in)]]
labels, dist = hnsw.knn_query(query_vector, k=10, conditions=condition)


These groups can also be combined using boolean expressions in conjunctive normal form. For example, (A | !B) & C is represented as [[(0, A), (1, B)], [(0, C)]], where A, B, and C are clauses that are true if the respective tag is assigned to a point, and the first element of each tuple is a negation flag.
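
For instance, a combined query for (A | !B) & C could look roughly like this (the tag ids are arbitrary examples):

# Assuming the tags A, B and C have already been assigned with hnsw.add_tags(...)
A, B, C = 1, 2, 3  # example tag ids
# CNF: OR within the inner lists, AND between them, i.e. (A | !B) & C
condition = [[(0, A), (1, B)], [(0, C)]]
labels, dist = hnsw.knn_query(query_vector, k=10, conditions=condition)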

If the group is large enough (much more than a 1/M fraction of all points), knn_query should work fine. But if the group is smaller, you may need to build additional connections in the HNSW graph for it:

hnsw.index_tagged(tag=42, m=8)


Based on HNSW with categorical filtering, it is possible to build a tool that searches within a specified geo-region only.

You can find the full version of this article, with more examples and explanations, in my blog.
ONNX and deployment libraries

Libraries like AllenNLP are great for model training and prototyping: they contain functions and helpers for almost any practical and theoretical task.
Some of these libraries even have functions for model serving, but they can still be a poor choice for serving a model in production.

The very functionality that makes them convenient for development makes them hard to support in a production environment.
A Docker image with only AllenNLP installed takes up a whole 1.9 GB compressed! That can hardly be called a micro-service.

In TensorFlow this problem was solved by saving computational graphs in a dedicated serialization format, independent of the training and preprocessing libraries.
This serialized graph can later be served by TensorFlow Serving.
A good solution, but not a universal one - there are plenty of frameworks, like PyTorch, that do not follow Google's standard.

This is where ONNX comes in - an open standard for neural network representation.
It defines a common set of operators - the building blocks of machine learning and deep learning models.
Not every valid PyTorch model can be converted into an ONNX representation - only a subset of its operations is also valid in ONNX.

Unfortunately, the default implementation of most AllenNLP models does not fit this subset:

- An AllenNLP model handles a vast variety of corner cases with conditions that are essentially Python functions. ONNX does not support arbitrary code execution; an ONNX model must consist of a computation graph only.
- AllenNLP models take care of text preprocessing: they operate with vocabularies and tokenization. ONNX does not support these operations.

Luckily, in most cases an AllenNLP model can be used as just a wrapper around the actual model implementation.
For this, you need an AllenNLP model which handles the loss function, performs the preprocessing, and interacts with the trainer.
You also need an internal class for the "pure" model, which implements the standard nn.Module interface.
It should take tensors as input and produce tensors as output.
Internally, it should construct a static computational graph.

This internal model can now be converted into an ONNX model and saved independently.
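
As a rough sketch of what the export could look like (the model here is a toy stand-in, not an actual AllenNLP internal model; input names and shapes are assumptions):

import torch
import torch.nn as nn

class PureModel(nn.Module):
    """Toy stand-in for the tensor-only model wrapped by the AllenNLP class."""
    def __init__(self, vocab_size=1000, dim=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.fc = nn.Linear(dim, num_classes)

    def forward(self, token_ids):
        # Tensors in, tensors out; no Python-level branching on data
        return self.fc(self.embed(token_ids).mean(dim=1))

model = PureModel().eval()
dummy_input = torch.zeros(1, 128, dtype=torch.long)  # example batch of token ids

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["token_ids"],
    output_names=["logits"],
    dynamic_axes={"token_ids": {0: "batch"}, "logits": {0: "batch"}},
)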

With the ONNX file, you can use whatever tool you need to serve or explore your model.
Forwarded from Spark in me (Alexander)
Silero Speech-To-Text Models V1 Released

We are proud to announce that we have released our high-quality (i.e. on par with premium Google models) speech-to-text models for the following languages:

- English
- German
- Spanish

Why this is a big deal:

- STT research is typically focused on huge compute budgets
- Pre-trained models and recipes did not generalize well, were difficult to use even as-is, and relied on obsolete tech
- Until now, the STT community lacked easy-to-use, high-quality, production-grade STT models

How we solve it:

- We publish a set of pre-trained high-quality models for popular languages
- Our models are embarrassingly easy to use (see the sketch below)
- Our models are fast and can be run on commodity hardware
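
To illustrate, loading and running one of the models via torch.hub looks roughly like this (the model name, returned utilities, and file path follow the repository README and may change):

import torch

device = torch.device('cpu')  # commodity hardware is enough for inference

model, decoder, utils = torch.hub.load(
    repo_or_dir='snakers4/silero-models',
    model='silero_stt',
    language='en',
    device=device,
)
read_batch, split_into_batches, read_audio, prepare_model_input = utils

batches = split_into_batches(['speech_sample.wav'], batch_size=10)  # placeholder file
inputs = prepare_model_input(read_batch(batches[0]), device=device)
for example in model(inputs):
    print(decoder(example.cpu()))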

Even if you do not work with STT, please give us a star / share!

Links

- https://github.com/snakers4/silero-models
🔲 Qdrant - vector search engine

Since my last post about filterable HNSW,
I have been working on a new search engine to give this idea a proper implementation.
And I have finally published an alpha version of the engine, called Qdrant.

Development is still at an early stage, but the engine already provides Elasticsearch-like conditions must, should, and must_not, which you can combine to represent an arbitrary filter.

Use-cases

You might need Qdrant when a vector alone cannot fully represent the object you are searching for.
For example, a neural network might model the visual appearance of a piece of clothing, but it can hardly account for its stock availability.

With Qdrant you can assign this feature as a payload and use it for filtering.

Among the possible applications:

- Semantic search with facets
- Semantic search on a map
- Matching engines - e.g. Candidates and job positions
- Personal recommendations

Technical highlights

Qdrant is written in Rust, a language designed for systems programming - building services that are used by other services.
Rust is comparable in speed with C, but it also protects against data races, which is crucial for database applications.
Push the crab 🦀 if you are interested in more Rust-specific details of the project.

The engine uses write-ahead logging: once it has confirmed an update, it won't lose data even in case of a power shutdown.

You can already try it with the Docker image:

docker pull generall/qdrant


A simple search request could look like this:

POST /test_collection/points/search
{
    "filter": {
        "should": [
            {
                "match": {
                    "key": "city",
                    "keyword": "London"
                }
            }
        ]
    },
    "vector": [0.2, 0.1, 0.9, 0.7],
    "top": 3
}


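The same request can be sent from any language; here is a minimal Python sketch with the requests library, assuming Qdrant runs locally on its default port 6333 and the collection test_collection already exists:

import requests

response = requests.post(
    "http://localhost:6333/test_collection/points/search",
    json={
        "filter": {
            "should": [
                {"match": {"key": "city", "keyword": "London"}}
            ]
        },
        "vector": [0.2, 0.1, 0.9, 0.7],
        "top": 3,
    },
)
print(response.json())
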
All APIs are documented with OpenAPI 3.0, which provides an easy way to generate a client for any programming language.

I would highly appreciate any feedback on the project, and I will be grateful if you give it a star on GitHub.
For those who reacted with 🦀 on the previous post: I wrote a Twitter thread on how I am building Qdrant with Rust. It is on Twitter because development is still in progress, and I would like to share some interesting details without writing a dedicated blog post.

Some topics of the thread:

- How is Qdrant useful?
- How does it store data and build indexes?
- How is the data kept always available for search?
- How do I auto-generate documentation in Rust?

Your comments are welcome here and on Twitter!
Metric Learning Tips & Tricks

Hi everyone, I don't post on this channel very often.
But today, I want to share some insights about what I'm working on.

Over the last year, I have been working on a job matching system, and in the process I have solved some interesting problems related to metric (similarity) learning.

I decided to collect all the interesting solutions into an article.

Here are some highlights:

- We have embeddings for professions and you can play with them online
- There is a way to train a model without labeled data, but it requires some tricks
- Hard Negative Mining does not work, but you can increase batch size instead
- It is possible to estimate embedding confidence
- We can micro-manage the model without re-training: introducing neural rules
- How we deploy metric learning in production. Spoiler: with Qdrant
Neural Search Step-by-Step

We made a tutorial on Semantic Embeddings and Neural Search. With this guide, you will build your own semantic search service from scratch.

You won't need any complicated neural network training. Moreover, you can do all the preparation steps in a Google Colab notebook.

The tutorial includes:
- What is neural search?
- Getting embeddings from a BERT encoder
- Using the vector search engine Qdrant
- Creating an API server with FastAPI


If you want to learn how to build projects like this, the tutorial is for you.
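
To give a taste of the embedding step, here is a minimal sketch with sentence-transformers (the model name is only an example; the tutorial may use a different encoder):

from sentence_transformers import SentenceTransformer

# Example model; any sentence encoder producing fixed-size vectors will do
encoder = SentenceTransformer("all-MiniLM-L6-v2")
vectors = encoder.encode([
    "A startup that builds a vector search engine",
    "A company producing coffee machines",
])
print(vectors.shape)  # (2, embedding_dimension)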
Hello everyone,

The largest Russian-speaking data science community, ODS.ai, is organizing a Summer of Code modeled after the famous Google SoC, and I am participating as a mentor for the Metric Learning track with Qdrant.

During the track, participants are challenged to research fine-tuning for similarity learning, build a working prototype, and contribute to open source.

Today at 18:00 MSK, there will be the first meetup of the track. I will be talking about:
- Which datasets are suitable for similarity matching, and how you can obtain them in a self-supervised way
- Approaches to fine-tuning encoders, and how to select a method depending on the amount of available annotation


I invite everyone interested to the ODS Spatial Chat at 18:00 MSK. Password: odssummerofcodeison.

Language of the event: Russian.

Materials will be available in English later on the channel.
ODS.ai Summer of Code results

Hi everyone, ODS SoC officially finished last week, and it is time to present the results.
First of all, the winner of the Metric Learning track, Tatiana Grechishcheva, has published a detailed article on her work on fine-tuning and deploying metric learning models.
She fine-tuned a ViT model for matching similar clothing and put together a detailed tutorial on how to deploy such a model to production.
An online demo is also included!

There are also some exciting results on fine-tuning transformers with different types of head layers.
In a nutshell, only a couple of hundred examples are enough to improve the similarity matching result without overfitting.
I will make a separate post about it, as well as about our further plans for making metric learning practical.
Awesome Metric Learning
The Metric Learning approach to data science problems is heavily underutilized. There are a lot of academic research papers around it, but far fewer practical guides and tutorials.
So we decided that we could help people adopt metric learning by collecting related materials in one place.
We are publishing a curated list of awesome practical metric learning tools, libraries, and materials - https://github.com/qdrant/awesome-metric-learning!
This collection aims to put together references to all relevant materials for building your application using Metric Learning.
If you know of an exciting article, a helpful tool, or a blog post that helped you apply metric learning - feel free to open a PR with your proposal!
Against Putin 🇺🇦

Hi everyone, today's post is not about machine learning.

Everyone may already know that Putin unleashed the war in Ukraine.
Thousands of people are dying under his tanks because of the maniacal ambitions of a madman.

It is tough to resist a tyrant; Russians did not manage it, but I want to believe that Ukrainians will.
Two years ago, among thousands of others, I was arrested and prosecuted for a peaceful protest. Unfortunately, it changed nothing.
No one has to be a hero, and I don't hold Russian people responsible for what the regime has done to our countries.

But I want to urge the Russian IT community to do at least what little is left in your power to stop this nightmare.
Try to find a way to avoid helping the occupation government.
Move while you still can. Quit working for government companies or projects. Don't help to build surveillance, censorship, and propaganda. Avoid paying taxes to the murderers.

It will be worse if nobody stops it.
Triplet loss - Advanced Intro

Loss functions in metric learning all chase the same goal - to pull positive pairs closer together and push negative pairs further apart.
But the way they achieve this leads to different results and different side effects.

In today's post, we describe the differences between Triplet and Contrastive loss and explain why using Triplet loss can give an advantage, especially in the context of fine-tuning.
The post also covers an approach to an efficient implementation of batch-all triplet mining.
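
To sketch the core idea (a condensed PyTorch illustration of batch-all mining, not the implementation from the post): the loss is computed over every valid (anchor, positive, negative) triplet in a batch using broadcasted distance matrices.

import torch

def batch_all_triplet_loss(embeddings, labels, margin=0.5):
    # Squared pairwise distances: ||a||^2 - 2*a.b + ||b||^2
    dot = embeddings @ embeddings.T
    sq = dot.diagonal()
    dist = (sq.unsqueeze(0) - 2 * dot + sq.unsqueeze(1)).clamp(min=0)

    # Masks of valid (anchor, positive) and (anchor, negative) pairs
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos_mask = same & ~eye
    neg_mask = ~same

    # loss[a, p, n] = d(a, p) - d(a, n) + margin over all valid triplets
    loss = (dist.unsqueeze(2) - dist.unsqueeze(1) + margin).clamp(min=0)
    loss = loss * (pos_mask.unsqueeze(2) & neg_mask.unsqueeze(1))
    return loss.sum() / (loss > 0).sum().clamp(min=1)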
Metric Learning for Anomaly Detection

Anomaly detection is one of those tasks to which it is challenging to apply classical ML methods directly.
The imbalance between normal and abnormal examples and the internal inconsistency of anomalies make classifier training challenging.

The difficulty is often related to data labeling, which in the case of anomalies may not be trivial.

The metric learning approach avoids the explicit separation into classes while combining the advantage of modeling the subject domain with the knowledge of specific anomalous examples.

In our case study, we are solving the problem of estimating the quality of coffee beans ☕️🌱 and determining the type of defects.

We trained an autoencoder on unlabeled samples and fine-tuned it on a small fraction of labeled ones.
This approach achieves results equivalent to conventional classification but requires orders of magnitude less labeled data.
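
In outline, the scheme looks roughly like this (a toy sketch with made-up dimensions and random placeholder data, not the actual pipeline from the case study):

import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_dim=128, emb_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, emb_dim))

    def forward(self, x):
        return self.net(x)

class AutoEncoder(nn.Module):
    def __init__(self, in_dim=128, emb_dim=32):
        super().__init__()
        self.encoder = Encoder(in_dim, emb_dim)
        self.decoder = nn.Sequential(nn.Linear(emb_dim, 64), nn.ReLU(), nn.Linear(64, in_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Stage 1: train the autoencoder on unlabeled samples with a reconstruction loss
ae = AutoEncoder()
x_unlabeled = torch.randn(256, 128)  # placeholder for real unlabeled data
reconstruction_loss = nn.MSELoss()(ae(x_unlabeled), x_unlabeled)

# Stage 2: reuse the encoder and fine-tune it on the few labeled examples,
# e.g. with a triplet loss so that defective samples are pushed apart
anchor, positive, negative = torch.randn(3, 16, 128)  # placeholder labeled triplets
fine_tune_loss = nn.TripletMarginLoss(margin=0.5)(
    ae.encoder(anchor), ae.encoder(positive), ae.encoder(negative)
)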
Similarity Learning lacks a framework. So we built one.

Many general-purpose frameworks allow you to train Computer Vision or NLP models quickly. However, Similarity Learning has peculiarities which usually require an additional layer of complexity on top of the usual pipelines.
For example, batch size plays a much greater role in training similarity models than in other models. Labels either do not exist or are handled in a completely different way. In many cases, the model is already pre-trained, which also changes the process.

Developing Similarity Learning models one after another, we began to notice patterns that helped us generalize and bring all our experience with training and fine-tuning such models into one package.
Yesterday we published Quaterion — an open-source, blazing-fast, customizable, scalable framework for training and fine-tuning similarity learning models.
One of the main features of the framework is caching. It allows you to run inference with large pre-trained encoders only once and then reuse the cached vectors during training. This speeds up the process 100x and at the same time allows batch sizes that are unattainable otherwise.

(gif)
How many layers to fine-tune?

Model fine-tuning allows you to improve the quality of pre-trained models with just a fraction of the resources spent on training the original model. But there is a trade-off between the number of layers you tune and the precision you get.
Using fewer layers allows for faster training with a larger batch size, while more layers increase the model's capacity.
We've done experiments so you can make more educated choices.
Highlights:

- Training only the head of a model (5% of the weights) gives a 2x boost on metrics, while full training gives only 3x.
- Training only the head layer allows using larger models with bigger batch sizes, which compensates for the precision loss.
- If you only have a small dataset, full model tuning gives an even smaller benefit.
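
For reference, head-only training in PyTorch can be set up roughly like this (a generic sketch; the attribute name "head" is an assumption, not the code from our experiments):

import torch
import torch.nn as nn

def train_head_only(model: nn.Module, head_attr: str = "head"):
    # Freeze every parameter, then unfreeze only the head sub-module
    for param in model.parameters():
        param.requires_grad = False
    for param in getattr(model, head_attr).parameters():
        param.requires_grad = True
    # Give the optimizer only the trainable parameters
    return torch.optim.Adam(p for p in model.parameters() if p.requires_grad)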
Vector Similarity beyond Search

Vector similarity offers a range of powerful functions that go far beyond those available in traditional full-text search engines and the conventional kNN search.

We just scratched the surface of the topic but already found a lot of new ways to interact with the data, including:

- Dissimilarity search - can be applied to anomaly detection, mislabeling detection, and data cleaning.
- Diversity search - can be used to give a better overview of the data, with no query at all.
- Recommendations - where we can go beyond a single query vector and use positive and negative examples to find the most relevant items (see the sketch after this list).
- Discovery or exploration - where we can invert the logic behind the triplet loss to provide real-time improvements of the search results.

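For example, in recent versions of the Qdrant Python client a recommendation query can be issued roughly like this (method availability depends on the client version; collection name and point ids are made up):

from qdrant_client import QdrantClient

client = QdrantClient(host="localhost", port=6333)  # assumes a local Qdrant instance

# Recommend items similar to the liked points and dissimilar to the disliked one
hits = client.recommend(
    collection_name="items",
    positive=[17, 42],  # ids of liked items
    negative=[3],       # id of a disliked item
    limit=5,
)
for hit in hits:
    print(hit.id, hit.score)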

In the article, we talk about a new toolbox for unstructured data exploration, where search is just one of the instruments.
And maybe you will find there a tool to implement your next big idea 🙂


https://qdrant.tech/articles/vector-similarity-beyond-search/
Attention all Berliners!

You are invited to our first offline meetup!
Join us for talks on vector search, machine learning, and more.
I will also be participating, providing an overview of Qdrant's progress and future plans.

Still not convinced? We will have free pizza and beer!

The event is scheduled for December 8, 2023, at 18:00, in Berlin.
Please register at https://lu.ma/vectorspace.