Metadata-Version: 2.4
Name: qdrant-client
Version: 1.16.1
Summary: Client library for the Qdrant vector search engine
License: Apache-2.0
License-File: LICENSE
Keywords: vector,search,neural,matching,client
Author: Andrey Vasnetsov
Author-email: andrey@qdrant.tech
Requires-Python: >=3.9
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: 3.14
Provides-Extra: fastembed
Provides-Extra: fastembed-gpu
Requires-Dist: fastembed (>=0.7,<0.8) ; extra == "fastembed"
Requires-Dist: fastembed-gpu (>=0.7,<0.8) ; extra == "fastembed-gpu"
Requires-Dist: grpcio (>=1.41.0)
Requires-Dist: httpx[http2] (>=0.20.0)
Requires-Dist: numpy (>=1.21) ; python_version >= "3.10" and python_version < "3.12"
Requires-Dist: numpy (>=1.21,<2.1.0) ; python_version < "3.10"
Requires-Dist: numpy (>=1.26) ; python_version == "3.12"
Requires-Dist: numpy (>=2.1.0) ; python_version >= "3.13"
Requires-Dist: portalocker (>=2.7.0,<4.0)
Requires-Dist: protobuf (>=3.20.0)
Requires-Dist: pydantic (>=1.10.8,!=2.0.*,!=2.1.*,!=2.2.0)
Requires-Dist: urllib3 (>=1.26.14,<3)
Project-URL: Homepage, https://github.com/qdrant/qdrant-client
Project-URL: Repository, https://github.com/qdrant/qdrant-client
Description-Content-Type: text/markdown


Python Client library for the Qdrant vector search engine.


# Python Qdrant Client

Client library and SDK for the [Qdrant](https://github.com/qdrant/qdrant) vector search engine. The library contains type definitions for the entire Qdrant API and supports both sync and async requests.

The client exposes all [Qdrant API methods](https://api.qdrant.tech/) directly and also provides additional helper methods for frequently required operations, e.g. initial collection upload. See the [QuickStart](https://qdrant.tech/documentation/quick-start/#create-collection) for more details!

## Installation

```
pip install qdrant-client
```

## Features

- Type hints for all API methods
- Local mode - use the same API without running a server
- REST and gRPC support
- Minimal dependencies
- Extensive test coverage

## Local mode


The Python client allows you to run the same code in local mode without running a Qdrant server. Simply initialize the client like this:

```python
from qdrant_client import QdrantClient

client = QdrantClient(":memory:")
# or
client = QdrantClient(path="path/to/db")  # Persists changes to disk
```

Local mode is useful for development, prototyping and testing.

- You can use it to run tests in your CI/CD pipeline.
- Run it in Colab or a Jupyter Notebook, no extra dependencies required. See an [example](https://colab.research.google.com/drive/1Bz8RSVHwnNDaNtDwotfPj0w7AYzsdXZ-?usp=sharing)
- When you need to scale, simply switch to server mode.

## Connect to Qdrant server

To connect to a Qdrant server, simply specify the host and port:

```python
from qdrant_client import QdrantClient

client = QdrantClient(host="localhost", port=6333)
# or
client = QdrantClient(url="http://localhost:6333")
```

You can run a Qdrant server locally with Docker:

```bash
docker run -p 6333:6333 qdrant/qdrant:latest
```

See more launch options in the [Qdrant repository](https://github.com/qdrant/qdrant#usage).

## Connect to Qdrant cloud

You can register and use [Qdrant Cloud](https://cloud.qdrant.io/) to get a free-tier account with 1GB RAM. Once you have your cluster and API key, you can connect to it like this:

```python
from qdrant_client import QdrantClient

qdrant_client = QdrantClient(
    url="https://xxxxxx-xxxxx-xxxxx-xxxx-xxxxxxxxx.us-east.aws.cloud.qdrant.io:6333",
    api_key="",
)
```

## Inference API

The Qdrant client has an Inference API that allows you to seamlessly create embeddings and use them in Qdrant. The Inference API can be used locally with FastEmbed or remotely with the models available in Qdrant Cloud.

### Local Inference with FastEmbed

```
pip install 'qdrant-client[fastembed]'
```

FastEmbed is a library for creating fast vector embeddings. It is based on ONNX Runtime and can run inference on both CPU and GPU. The Qdrant client can use FastEmbed to create embeddings and upload them to Qdrant.
This simplifies the API and makes it more intuitive.

```python
from qdrant_client import QdrantClient, models

# running Qdrant in local mode, suitable for experiments
client = QdrantClient(":memory:")  # or QdrantClient(path="path/to/db") for local mode with persistent storage

model_name = "sentence-transformers/all-MiniLM-L6-v2"
payload = [
    {"document": "Qdrant has Langchain integrations", "source": "Langchain-docs"},
    {"document": "Qdrant also has Llama Index integrations", "source": "LlamaIndex-docs"},
]
docs = [models.Document(text=data["document"], model=model_name) for data in payload]
ids = [42, 2]

client.create_collection(
    "demo_collection",
    vectors_config=models.VectorParams(
        size=client.get_embedding_size(model_name),
        distance=models.Distance.COSINE,
    ),
)

client.upload_collection(
    collection_name="demo_collection",
    vectors=docs,
    ids=ids,
    payload=payload,
)

search_result = client.query_points(
    collection_name="demo_collection",
    query=models.Document(text="This is a query document", model=model_name),
).points
print(search_result)
```

FastEmbed can also utilise a GPU for faster embeddings. To enable GPU support, install:

```bash
pip install 'qdrant-client[fastembed-gpu]'
```

To run on the GPU, extend the documents from the previous example with `options`:

```python
models.Document(text="To be computed on GPU", model=model_name, options={"cuda": True})
```

> Note: `fastembed-gpu` and `fastembed` are mutually exclusive. You can only install one of them.
>
> If you previously installed `fastembed`, you might need to start from a fresh environment to install `fastembed-gpu`.

### Remote inference with Qdrant Cloud

Qdrant Cloud provides a set of predefined models that can be used for inference without the need to install any additional libraries or host models locally. (Currently available only on paid plans.)
The Inference API is the same as in local mode, but the client has to be instantiated with `cloud_inference=True`:

```python
from qdrant_client import QdrantClient

client = QdrantClient(
    url="https://xxxxxx-xxxxx-xxxxx-xxxx-xxxxxxxxx.us-east.aws.cloud.qdrant.io:6333",
    api_key="",
    cloud_inference=True,  # Enable remote inference
)
```

> Note: remote inference requires images to be provided as base64-encoded strings or URLs

## Examples

Create a new collection:

```python
from qdrant_client.models import Distance, VectorParams

client.create_collection(
    collection_name="my_collection",
    vectors_config=VectorParams(size=100, distance=Distance.COSINE),
)
```

Insert vectors into a collection:

```python
import numpy as np
from qdrant_client.models import PointStruct

vectors = np.random.rand(100, 100)
# NOTE: consider splitting the data into chunks to avoid hitting the server's payload size limit,
# or use the `upload_collection` or `upload_points` methods, which handle this for you
# WARNING: uploading points one-by-one is not recommended due to request overhead
client.upsert(
    collection_name="my_collection",
    points=[
        PointStruct(
            id=idx,
            vector=vector.tolist(),
            payload={"color": "red", "rand_number": idx % 10},
        )
        for idx, vector in enumerate(vectors)
    ],
)
```

Search for similar vectors:

```python
query_vector = np.random.rand(100)
hits = client.query_points(
    collection_name="my_collection",
    query=query_vector,
    limit=5,  # Return 5 closest points
)
```

Search for similar vectors with a filtering condition:

```python
from qdrant_client.models import Filter, FieldCondition, Range

hits = client.query_points(
    collection_name="my_collection",
    query=query_vector,
    query_filter=Filter(
        must=[  # These conditions are required for search results
            FieldCondition(
                key="rand_number",  # Condition based on values of the `rand_number` field
                range=Range(
                    gte=3  # Select only those results where `rand_number` >= 3
                ),
            )
        ]
    ),
    limit=5,  # Return 5 closest points
)
```

See more examples in our [Documentation](https://qdrant.tech/documentation/)!

### gRPC

To enable (typically much faster) collection uploading with gRPC, use the following initialization:

```python
from qdrant_client import QdrantClient

client = QdrantClient(host="localhost", grpc_port=6334, prefer_grpc=True)
```

## Async client

Starting from version 1.6.1, all Python client methods are available in an async version. To use it, just import `AsyncQdrantClient` instead of `QdrantClient`:

```python
import asyncio

import numpy as np

from qdrant_client import AsyncQdrantClient, models


async def main():
    # Your async code using AsyncQdrantClient might be put here
    client = AsyncQdrantClient(url="http://localhost:6333")

    await client.create_collection(
        collection_name="my_collection",
        vectors_config=models.VectorParams(size=10, distance=models.Distance.COSINE),
    )

    await client.upsert(
        collection_name="my_collection",
        points=[
            models.PointStruct(
                id=i,
                vector=np.random.rand(10).tolist(),
            )
            for i in range(100)
        ],
    )

    res = await client.query_points(
        collection_name="my_collection",
        query=np.random.rand(10).tolist(),  # type: ignore
        limit=10,
    )

    print(res)


asyncio.run(main())
```

Both gRPC and REST API are supported in async mode. More examples can be found [here](./tests/test_async_qdrant_client.py).

### Development

This project uses git hooks to run code formatters. Set up the hooks with `pre-commit install` before making contributions.