AsyncIOMotorCollection

class motor.motor_asyncio.AsyncIOMotorCollection(database, name, codec_options=None, read_preference=None, write_concern=None, read_concern=None, _delegate=None)
c[name] || c.name

Get the name sub-collection of AsyncIOMotorCollection c.

Raises InvalidName if an invalid collection name is used.

database

The AsyncIOMotorDatabase that this AsyncIOMotorCollection is a part of.

aggregate(pipeline, *args, **kwargs)

Execute an aggregation pipeline on this collection.

The aggregation can be run on a secondary if the client is connected to a replica set and its read_preference is not PRIMARY.

Parameters:
  • pipeline: a list of aggregation pipeline stages

  • session (optional): a ClientSession, created with start_session().

  • **kwargs: send arbitrary parameters to the aggregate command

All optional aggregate command parameters should be passed as keyword arguments to this method. Valid options include, but are not limited to:

  • allowDiskUse (bool): Enables writing to temporary files. When set to True, aggregation stages can write data to the _tmp subdirectory of the --dbpath directory. The default is False.

  • maxTimeMS (int): The maximum amount of time to allow the operation to run in milliseconds.

  • batchSize (int): The maximum number of documents to return per batch. Ignored if the connected mongod or mongos does not support returning aggregate results using a cursor.

  • collation (optional): An instance of Collation.

  • let (dict): A dict of parameter names and values. Values must be constant or closed expressions that do not reference document fields. Parameters can then be accessed as variables in an aggregate expression context (e.g. "$$var"). This option is only supported on MongoDB >= 5.0.
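As a sketch of how let pairs with "$$" variable references in the pipeline (the price field and the total parameter name are illustrative assumptions, not part of the API):

```python
async def aggregate_with_let():
    # '$$total' resolves to the value bound under 'total' in let (MongoDB 5.0+).
    pipeline = [{'$match': {'$expr': {'$gt': ['$price', '$$total']}}}]
    async for doc in collection.aggregate(pipeline, let={'total': 100}):
        print(doc)
```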

Returns a MotorCommandCursor that can be iterated like a cursor from find():

async def f():
    pipeline = [{'$project': {'name': {'$toUpper': '$name'}}}]
    async for doc in collection.aggregate(pipeline):
        print(doc)

Note that this method returns a MotorCommandCursor which lazily runs the aggregate command when first iterated. In order to run an aggregation with $out or $merge the application needs to iterate the cursor, for example:

cursor = motor_coll.aggregate([{'$out': 'out'}])
# Iterate the cursor to run the $out (or $merge) operation.
await cursor.to_list(length=None)
# Or more succinctly:
await motor_coll.aggregate([{'$out': 'out'}]).to_list(length=None)
# Or:
async for _ in motor_coll.aggregate([{'$out': 'out'}]):
    pass

MotorCommandCursor does not allow the explain option. To explain MongoDB’s query plan for the aggregation, use MotorDatabase.command():

async def f():
    plan = await db.command(
        'aggregate', 'COLLECTION-NAME',
        pipeline=[{'$project': {'x': 1}}],
        explain=True)

    print(plan)

Changed in version 2.1: This collection’s read concern is now applied to pipelines containing the $out stage when connected to MongoDB >= 4.2.

Changed in version 1.0: aggregate() now always returns a cursor.

Changed in version 0.5: aggregate() now returns a cursor by default, and the cursor is returned immediately without an await. See aggregation changes in Motor 0.5.

Changed in version 0.2: Added cursor support.

aggregate_raw_batches(pipeline, **kwargs)

Perform an aggregation and retrieve batches of raw BSON.

Similar to the aggregate() method but returns each batch as bytes.

This example demonstrates how to work with raw batches, but in practice raw batches should be passed to an external library that can decode BSON into another data type, rather than used with PyMongo’s bson module.

async def get_raw():
    cursor = db.test.aggregate_raw_batches([{'$project': {'x': 1}}])
    async for batch in cursor:
        print(bson.decode_all(batch))

Note that aggregate_raw_batches does not support sessions.

Added in version 2.0.

coroutine bulk_write(requests: Sequence[_WriteOp[_DocumentType]], ordered: bool = True, bypass_document_validation: bool = False, session: ClientSession | None = None, comment: Any | None = None, let: Mapping | None = None) BulkWriteResult

Send a batch of write operations to the server.

Requests are passed as a list of write operation instances imported from pymongo: InsertOne, UpdateOne, UpdateMany, ReplaceOne, DeleteOne, or DeleteMany.

For example, say we have these documents:

{'x': 1, '_id': ObjectId('54f62e60fba5226811f634ef')}
{'x': 1, '_id': ObjectId('54f62e60fba5226811f634f0')}

We can insert a document, delete one, and replace one like so:

# DeleteMany, UpdateOne, and UpdateMany are also available.
from pymongo import InsertOne, DeleteOne, ReplaceOne

async def modify_data():
    requests = [InsertOne({'y': 1}), DeleteOne({'x': 1}),
                ReplaceOne({'w': 1}, {'z': 1}, upsert=True)]
    result = await db.test.bulk_write(requests)

    print("inserted %d, deleted %d, modified %d" % (
        result.inserted_count, result.deleted_count, result.modified_count))

    print("upserted_ids: %s" % result.upserted_ids)

    print("collection:")
    async for doc in db.test.find():
        print(doc)

This will print something like:

inserted 1, deleted 1, modified 0
upserted_ids: {2: ObjectId('54f62ee28891e756a6e1abd5')}

collection:
{'x': 1, '_id': ObjectId('54f62e60fba5226811f634f0')}
{'y': 1, '_id': ObjectId('54f62ee2fba5226811f634f1')}
{'z': 1, '_id': ObjectId('54f62ee28891e756a6e1abd5')}
Parameters:
  • requests: A list of write operations (see examples above).

  • ordered (optional): If True (the default) requests will be performed on the server serially, in the order provided. If an error occurs all remaining operations are aborted. If False requests will be performed on the server in arbitrary order, possibly in parallel, and all operations will be attempted.

  • bypass_document_validation: (optional) If True, allows the write to opt-out of document level validation. Default is False.

  • session (optional): a ClientSession, created with start_session().

  • comment (optional): A user-provided comment to attach to this command.

Returns:

An instance of BulkWriteResult.

Note

bypass_document_validation requires server version >= 3.2

Changed in version 3.0: Added comment parameter.

Changed in version 1.2: Added session parameter.

coroutine count_documents(filter: Mapping[str, Any], session: ClientSession | None = None, comment: Any | None = None, **kwargs: Any) int

Count the number of documents in this collection.

Note

For a fast count of the total documents in a collection see estimated_document_count().

The count_documents() method is supported in a transaction.

All optional parameters should be passed as keyword arguments to this method. Valid options include:

  • skip (int): The number of matching documents to skip before returning results.

  • limit (int): The maximum number of documents to count. Must be a positive integer. If not provided, no limit is imposed.

  • maxTimeMS (int): The maximum amount of time to allow this operation to run, in milliseconds.

  • collation (optional): An instance of Collation.

  • hint (string or list of tuples): The index to use. Specify either the index name as a string or the index specification as a list of tuples (e.g. [('a', pymongo.ASCENDING), ('b', pymongo.ASCENDING)]).

The count_documents() method obeys the read_preference of this Collection.

Note

When migrating from count() to count_documents() the following query operators must be replaced:

  Operator      Replacement
  $where        $expr
  $near         $geoWithin with $center
  $nearSphere   $geoWithin with $centerSphere

Parameters:
  • filter (required): A query document that selects which documents to count in the collection. Can be an empty document to count all documents.

  • session (optional): a ClientSession.

  • comment (optional): A user-provided comment to attach to this command.

  • **kwargs (optional): See list of options above.
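Putting the options together, a bounded count might look like this (a sketch; 'x_1' is an assumed index name):

```python
async def count_some():
    # Count at most 100 documents matching x == 1, skipping the first 10
    # matches, with a one-second server-side time limit.
    n = await db.test.count_documents(
        {'x': 1}, skip=10, limit=100, hint='x_1', maxTimeMS=1000)
    print(n)
```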

coroutine create_index(keys: _IndexKeyHint, session: ClientSession | None = None, comment: Any | None = None, **kwargs: Any) str

Creates an index on this collection.

Takes either a single key or a list of (key, direction) pairs. The key(s) must be an instance of str, and the direction(s) must be one of (ASCENDING, DESCENDING, GEO2D, GEOHAYSTACK, GEOSPHERE, HASHED, TEXT).

To create a single key ascending index on the key 'mike' we just use a string argument:

await my_collection.create_index("mike")

For a compound index on 'mike' descending and 'eliot' ascending we need to use a list of tuples:

await my_collection.create_index([("mike", pymongo.DESCENDING),
                                  ("eliot", pymongo.ASCENDING)])

All optional index creation parameters should be passed as keyword arguments to this method. For example:

await my_collection.create_index([("mike", pymongo.DESCENDING)],
                                 background=True)

Valid options include, but are not limited to:

  • name: custom name to use for this index - if none is given, a name will be generated.

  • unique: if True creates a uniqueness constraint on the index.

  • background: if True this index should be created in the background.

  • sparse: if True, omit from the index any documents that lack the indexed field.

  • bucketSize: for use with geoHaystack indexes. Number of documents to group together within a certain proximity to a given longitude and latitude.

  • min: minimum value for keys in a GEO2D index.

  • max: maximum value for keys in a GEO2D index.

  • expireAfterSeconds: <int> Used to create an expiring (TTL) collection. MongoDB will automatically delete documents from this collection after <int> seconds. The indexed field must be a UTC datetime or the data will not expire.

  • partialFilterExpression: A document that specifies a filter for a partial index.

  • collation (optional): An instance of Collation.

See the MongoDB documentation for a full list of supported options by server version.

Warning

dropDups is not supported by MongoDB 3.0 or newer. The option is silently ignored by the server and unique index builds using the option will fail if a duplicate value is detected.

Note

partialFilterExpression requires server version >= 3.2

Note

The write_concern of this collection is automatically applied to this operation.

Parameters:
  • keys: a single key or a list of (key, direction) pairs specifying the index to create.

  • session (optional): a ClientSession, created with start_session().

  • comment (optional): A user-provided comment to attach to this command.

  • **kwargs (optional): any additional index creation options (see the above list) should be passed as keyword arguments

Returns the name of the created index.

See also

The MongoDB documentation on

indexes

coroutine create_indexes(indexes: Sequence[IndexModel], session: ClientSession | None = None, comment: Any | None = None, **kwargs: Any) list[str]

Create one or more indexes on this collection:

from pymongo import IndexModel, ASCENDING, DESCENDING

async def create_two_indexes():
    index1 = IndexModel([("hello", DESCENDING),
                         ("world", ASCENDING)], name="hello_world")
    index2 = IndexModel([("goodbye", DESCENDING)])
    print(await db.test.create_indexes([index1, index2]))

This prints:

['hello_world', 'goodbye_-1']
Parameters:
  • indexes: A list of IndexModel instances.

  • session (optional): a ClientSession, created with start_session().

  • comment (optional): A user-provided comment to attach to this command.

  • **kwargs (optional): optional arguments to the createIndexes command (like maxTimeMS) can be passed as keyword arguments.

The write_concern of this collection is automatically applied to this operation.

Changed in version 3.0: Added comment parameter.

Changed in version 1.2: Added session parameter.

coroutine create_search_index(model: Mapping[str, Any] | SearchIndexModel, session: ClientSession | None = None, comment: Any = None, **kwargs: Any) str

Create a single search index for the current collection.

Parameters:
  • model: The model for the new search index. It can be given as a SearchIndexModel instance or a dictionary with a model “definition” and optional “name”.

  • session (optional): a ClientSession.

  • comment (optional): A user-provided comment to attach to this command.

  • **kwargs (optional): optional arguments to the createSearchIndexes command (like maxTimeMS) can be passed as keyword arguments.

Returns:

The name of the new search index.

Note

Requires a MongoDB 7.0+ Atlas cluster.
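A sketch of the dictionary form of model (the dynamic-mapping definition is an assumption about the desired index, not a required shape):

```python
async def create_default_search_index():
    # The dict form bundles an Atlas Search "definition" with an optional "name".
    name = await db.test.create_search_index(
        {'definition': {'mappings': {'dynamic': True}}, 'name': 'default'})
    print(name)
```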

coroutine create_search_indexes(models: list[SearchIndexModel], session: ClientSession | None = None, comment: Any | None = None, **kwargs: Any) list[str]

Create multiple search indexes for the current collection.

Parameters:
  • models: A list of SearchIndexModel instances.

  • session (optional): a ClientSession.

  • comment (optional): A user-provided comment to attach to this command.

  • **kwargs (optional): optional arguments to the createSearchIndexes command (like maxTimeMS) can be passed as keyword arguments.

Returns:

A list of the newly created search index names.

Note

Requires a MongoDB 7.0+ Atlas cluster.

coroutine delete_many(filter: Mapping[str, Any], collation: _CollationIn | None = None, hint: _IndexKeyHint | None = None, session: ClientSession | None = None, let: Mapping[str, Any] | None = None, comment: Any | None = None) DeleteResult

Delete one or more documents matching the filter.

If we have a collection with 3 documents like {'x': 1}, then:

async def clear_collection():
    result = await db.test.delete_many({'x': 1})
    print(result.deleted_count)

This deletes all matching documents and prints “3”.

Parameters:
  • filter: A query that matches the documents to delete.

  • collation (optional): An instance of Collation.

  • hint (optional): An index used to support the query predicate specified either by its string name, or in the same format as passed to create_index() (e.g. [('field', ASCENDING)]). This option is only supported on MongoDB 4.4 and above.

  • session (optional): a ClientSession, created with start_session().

  • let (optional): Map of parameter names and values. Values must be constant or closed expressions that do not reference document fields. Parameters can then be accessed as variables in an aggregate expression context (e.g. "$$var").

  • comment (optional): A user-provided comment to attach to this command.

Returns:

An instance of DeleteResult.

Changed in version 3.0: Added let and comment parameters.

Changed in version 2.2: Added hint parameter.

Changed in version 1.2: Added session parameter.

coroutine delete_one(filter: Mapping[str, Any], collation: _CollationIn | None = None, hint: _IndexKeyHint | None = None, session: ClientSession | None = None, let: Mapping[str, Any] | None = None, comment: Any | None = None) DeleteResult

Delete a single document matching the filter.

If we have a collection with 3 documents like {'x': 1}, then:

async def clear_collection():
    result = await db.test.delete_one({'x': 1})
    print(result.deleted_count)

This deletes one matching document and prints “1”.

Parameters:
  • filter: A query that matches the document to delete.

  • collation (optional): An instance of Collation.

  • hint (optional): An index used to support the query predicate specified either by its string name, or in the same format as passed to create_index() (e.g. [('field', ASCENDING)]). This option is only supported on MongoDB 4.4 and above.

  • session (optional): a ClientSession, created with start_session().

  • let (optional): Map of parameter names and values. Values must be constant or closed expressions that do not reference document fields. Parameters can then be accessed as variables in an aggregate expression context (e.g. "$$var").

  • comment (optional): A user-provided comment to attach to this command.

Returns:

An instance of DeleteResult.

Changed in version 3.0: Added let and comment parameters.

Changed in version 2.2: Added hint parameter.

Changed in version 1.2: Added session parameter.

coroutine distinct(key: str, filter: Mapping[str, Any] | None = None, session: ClientSession | None = None, comment: Any | None = None, **kwargs: Any) list

Get a list of distinct values for key among all documents in this collection.

Raises TypeError if key is not an instance of str.

All optional distinct parameters should be passed as keyword arguments to this method. Valid options include:

  • maxTimeMS (int): The maximum amount of time to allow the distinct command to run, in milliseconds.

  • collation (optional): An instance of Collation.

The distinct() method obeys the read_preference of this Collection.

Parameters:
  • key: name of the field for which we want to get the distinct values

  • filter (optional): A query document that specifies the documents from which to retrieve the distinct values.

  • session (optional): a ClientSession.

  • comment (optional): A user-provided comment to attach to this command.

  • **kwargs (optional): See list of options above.
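For example, a time-bounded distinct over a filtered subset (the field names are illustrative):

```python
async def distinct_x():
    # Distinct values of 'x' among documents with y > 0.
    values = await db.test.distinct(
        'x', {'y': {'$gt': 0}}, maxTimeMS=1000)
    print(values)
```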

coroutine drop(session: ClientSession | None = None, comment: Any | None = None, encrypted_fields: Mapping[str, Any] | None = None) None

Alias for drop_collection.

The following two calls are equivalent:

await db.foo.drop()
await db.drop_collection("foo")
coroutine drop_index(index_or_name: _IndexKeyHint, session: ClientSession | None = None, comment: Any | None = None, **kwargs: Any) None

Drops the specified index on this collection.

Can be used on non-existent collections or collections with no indexes. Raises OperationFailure on an error (e.g. trying to drop an index that does not exist). index_or_name can be either an index name (as returned by create_index()), or an index specifier (as passed to create_index()). An index specifier should be a list of (key, direction) pairs. Raises TypeError if index_or_name is not an instance of str or list.

Warning

if a custom name was used on index creation (by passing the name parameter to create_index()) the index must be dropped by name.

Parameters:
  • index_or_name: index (or name of index) to drop

  • session (optional): a ClientSession.

  • comment (optional): A user-provided comment to attach to this command.

  • **kwargs (optional): optional arguments to the dropIndexes command (like maxTimeMS) can be passed as keyword arguments.

Note

The write_concern of this collection is automatically applied to this operation.
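Both ways of identifying an index can be sketched like so (an index on 'x' named 'x_1' is assumed to exist):

```python
async def drop_x_index():
    # Either form identifies the same index; use one or the other.
    await db.test.drop_index('x_1')           # by the generated name
    # await db.test.drop_index([('x', 1)])    # by (key, direction) specifier
```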

coroutine drop_indexes(session: ClientSession | None = None, comment: Any | None = None, **kwargs: Any) None

Drops all indexes on this collection.

Can be used on non-existent collections or collections with no indexes. Raises OperationFailure on an error.

Parameters:
  • session (optional): a ClientSession.

  • comment (optional): A user-provided comment to attach to this command.

  • **kwargs (optional): optional arguments to the dropIndexes command (like maxTimeMS) can be passed as keyword arguments.

Note

The write_concern of this collection is automatically applied to this operation.

coroutine drop_search_index(name: str, session: ClientSession | None = None, comment: Any | None = None, **kwargs: Any) None

Delete a search index by index name.

Parameters:
  • name: The name of the search index to be deleted.

  • session (optional): a ClientSession.

  • comment (optional): A user-provided comment to attach to this command.

  • **kwargs (optional): optional arguments to the dropSearchIndexes command (like maxTimeMS) can be passed as keyword arguments.

Note

Requires a MongoDB 7.0+ Atlas cluster.

coroutine estimated_document_count(comment: Any | None = None, **kwargs: Any) int

Get an estimate of the number of documents in this collection using collection metadata.

The estimated_document_count() method is not supported in a transaction.

All optional parameters should be passed as keyword arguments to this method. Valid options include:

  • maxTimeMS (int): The maximum amount of time to allow this operation to run, in milliseconds.

Parameters:
  • comment (optional): A user-provided comment to attach to this command.

  • **kwargs (optional): See list of options above.
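In contrast with count_documents(), no filter can be given; a minimal sketch:

```python
async def fast_count():
    # Metadata-based estimate of the collection's size; fast but approximate.
    n = await db.test.estimated_document_count(maxTimeMS=500)
    print(n)
```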

find(*args, **kwargs)

Create a MotorCursor. Same parameters as for PyMongo’s find().

Note that find does not require an await expression, because find merely creates a MotorCursor without performing any operations on the server. MotorCursor methods such as to_list() perform actual operations.
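For instance, cursor modifiers can be chained before any I/O happens (the field name 'x' is illustrative):

```python
async def print_top_five():
    # No await here: find() only builds the MotorCursor.
    cursor = db.test.find({'x': {'$gt': 0}}).sort('x', -1).limit(5)
    # The query is sent to the server once iteration starts.
    async for doc in cursor:
        print(doc)
```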

coroutine find_one(filter: Any | None = None, *args: Any, **kwargs: Any) _DocumentType | None

Get a single document from the database.

All arguments to find() are also valid arguments for find_one(), although any limit argument will be ignored. Returns a single document, or None if no matching document is found.

The find_one() method obeys the read_preference of this Motor collection instance.

Parameters:
  • filter (optional): a dictionary specifying the query to be performed OR any other type to be used as the value for a query for "_id".

  • *args (optional): any additional positional arguments are the same as the arguments to find().

  • **kwargs (optional): any additional keyword arguments are the same as the arguments to find().

  • max_time_ms (optional): a value for max_time_ms may be specified as part of **kwargs, e.g.:

await collection.find_one(max_time_ms=100)

Changed in version 1.2: Added session parameter.

coroutine find_one_and_delete(filter: Mapping[str, Any], projection: Mapping[str, Any] | Iterable[str] | None = None, sort: _IndexList | None = None, hint: _IndexKeyHint | None = None, session: ClientSession | None = None, let: Mapping[str, Any] | None = None, comment: Any | None = None, **kwargs: Any) _DocumentType

Finds a single document and deletes it, returning the document.

If we have a collection with 2 documents like {'x': 1}, then this code retrieves and deletes one of them:

async def delete_one_document():
    print(await db.test.count_documents({'x': 1}))
    doc = await db.test.find_one_and_delete({'x': 1})
    print(doc)
    print(await db.test.count_documents({'x': 1}))

This outputs something like:

2
{'x': 1, '_id': ObjectId('54f4e12bfba5220aa4d6dee8')}
1

If multiple documents match filter, a sort can be applied. Say we have 3 documents like:

{'x': 1, '_id': 0}
{'x': 1, '_id': 1}
{'x': 1, '_id': 2}

This code retrieves and deletes the document with the largest _id:

async def delete_with_largest_id():
    doc = await db.test.find_one_and_delete(
        {'x': 1}, sort=[('_id', pymongo.DESCENDING)])
    print(doc)

This deletes one document and prints it:

{'x': 1, '_id': 2}

The projection option can be used to limit the fields returned:

async def delete_and_return_x():
    doc = await db.test.find_one_and_delete(
        {'x': 1}, projection={'_id': False})
    print(doc)

This prints:

{'x': 1}
Parameters:
  • filter: A query that matches the document to delete.

  • projection (optional): a list of field names that should be returned in the result document or a mapping specifying the fields to include or exclude. If projection is a list "_id" will always be returned. Use a mapping to exclude fields from the result (e.g. projection={'_id': False}).

  • sort (optional): a list of (key, direction) pairs specifying the sort order for the query. If multiple documents match the query, they are sorted and the first is deleted.

  • hint (optional): An index used to support the query predicate specified either by its string name, or in the same format as passed to create_index() (e.g. [('field', ASCENDING)]). This option is only supported on MongoDB 4.4 and above.

  • session (optional): a ClientSession, created with start_session().

  • let (optional): Map of parameter names and values. Values must be constant or closed expressions that do not reference document fields. Parameters can then be accessed as variables in an aggregate expression context (e.g. "$$var").

  • comment (optional): A user-provided comment to attach to this command.

  • **kwargs (optional): additional command arguments can be passed as keyword arguments (for example maxTimeMS can be used with recent server versions).

This command uses the WriteConcern of this Collection when connected to MongoDB >= 3.2. Note that using an elevated write concern with this command may be slower compared to using the default write concern.

Changed in version 3.0: Added let and comment parameters.

Changed in version 2.2: Added hint parameter.

Changed in version 1.2: Added session parameter.

coroutine find_one_and_replace(filter: Mapping[str, Any], replacement: Mapping[str, Any], projection: Mapping[str, Any] | Iterable[str] | None = None, sort: _IndexList | None = None, upsert: bool = False, return_document: bool = False, hint: _IndexKeyHint | None = None, session: ClientSession | None = None, let: Mapping[str, Any] | None = None, comment: Any | None = None, **kwargs: Any) _DocumentType

Finds a single document and replaces it, returning either the original or the replaced document.

The find_one_and_replace() method differs from find_one_and_update() by replacing the document matched by filter, rather than modifying the existing document.

Say we have 3 documents like:

{'x': 1, '_id': 0}
{'x': 1, '_id': 1}
{'x': 1, '_id': 2}

Replace one of them like so:

async def replace_one_doc():
    original_doc = await db.test.find_one_and_replace({'x': 1}, {'y': 1})
    print("original: %s" % original_doc)
    print("collection:")
    async for doc in db.test.find():
        print(doc)

This will print:

original: {'x': 1, '_id': 0}
collection:
{'y': 1, '_id': 0}
{'x': 1, '_id': 1}
{'x': 1, '_id': 2}
Parameters:
  • filter: A query that matches the document to replace.

  • replacement: The replacement document.

  • projection (optional): A list of field names that should be returned in the result document or a mapping specifying the fields to include or exclude. If projection is a list "_id" will always be returned. Use a mapping to exclude fields from the result (e.g. projection={'_id': False}).

  • sort (optional): a list of (key, direction) pairs specifying the sort order for the query. If multiple documents match the query, they are sorted and the first is replaced.

  • upsert (optional): When True, inserts a new document if no document matches the query. Defaults to False.

  • return_document: If ReturnDocument.BEFORE (the default), returns the original document before it was replaced, or None if no document matches. If ReturnDocument.AFTER, returns the replaced or inserted document.

  • hint (optional): An index to use to support the query predicate specified either by its string name, or in the same format as passed to create_index() (e.g. [('field', ASCENDING)]). This option is only supported on MongoDB 4.4 and above.

  • session (optional): a ClientSession, created with start_session().

  • let (optional): Map of parameter names and values. Values must be constant or closed expressions that do not reference document fields. Parameters can then be accessed as variables in an aggregate expression context (e.g. "$$var").

  • comment (optional): A user-provided comment to attach to this command.

  • **kwargs (optional): additional command arguments can be passed as keyword arguments (for example maxTimeMS can be used with recent server versions).

This command uses the WriteConcern of this Collection when connected to MongoDB >= 3.2. Note that using an elevated write concern with this command may be slower compared to using the default write concern.

Changed in version 3.0: Added let and comment parameters.

Changed in version 2.2: Added hint parameter.

Changed in version 1.2: Added session parameter.

coroutine find_one_and_update(filter: Mapping[str, Any], update: Mapping[str, Any] | _Pipeline, projection: Mapping[str, Any] | Iterable[str] | None = None, sort: _IndexList | None = None, upsert: bool = False, return_document: bool = False, array_filters: Sequence[Mapping[str, Any]] | None = None, hint: _IndexKeyHint | None = None, session: ClientSession | None = None, let: Mapping[str, Any] | None = None, comment: Any | None = None, **kwargs: Any) _DocumentType

Finds a single document and updates it, returning either the original or the updated document. By default find_one_and_update() returns the original version of the document before the update was applied:

async def set_done():
    print(await db.test.find_one_and_update(
        {'_id': 665}, {'$inc': {'count': 1}, '$set': {'done': True}}))

This outputs:

{'_id': 665, 'done': False, 'count': 25}

To return the updated version of the document instead, use the return_document option.

from pymongo import ReturnDocument

async def increment_by_userid():
    print(await db.example.find_one_and_update(
        {'_id': 'userid'},
        {'$inc': {'seq': 1}},
        return_document=ReturnDocument.AFTER))

This prints:

{'_id': 'userid', 'seq': 1}

You can limit the fields returned with the projection option.

async def increment_by_userid():
    print(await db.example.find_one_and_update(
        {'_id': 'userid'},
        {'$inc': {'seq': 1}},
        projection={'seq': True, '_id': False},
        return_document=ReturnDocument.AFTER))

This results in:

{'seq': 2}

The upsert option can be used to create the document if it doesn’t already exist.

async def increment_by_userid():
    print(await db.example.find_one_and_update(
        {'_id': 'userid'},
        {'$inc': {'seq': 1}},
        projection={'seq': True, '_id': False},
        upsert=True,
        return_document=ReturnDocument.AFTER))

The result:

{'seq': 1}

If multiple documents match filter, a sort can be applied. Say we have these two documents:

{'_id': 665, 'done': True, 'result': {'count': 26}}
{'_id': 701, 'done': True, 'result': {'count': 17}}

Then to update the one with the greatest _id:

async def set_done():
    print(await db.test.find_one_and_update(
        {'done': True},
        {'$set': {'final': True}},
        sort=[('_id', pymongo.DESCENDING)]))

This would print:

{'_id': 701, 'done': True, 'result': {'count': 17}}
Parameters:
  • filter: A query that matches the document to update.

  • update: The update operations to apply.

  • projection (optional): A list of field names that should be returned in the result document or a mapping specifying the fields to include or exclude. If projection is a list "_id" will always be returned. Use a dict to exclude fields from the result (e.g. projection={'_id': False}).

  • sort (optional): a list of (key, direction) pairs specifying the sort order for the query. If multiple documents match the query, they are sorted and the first is updated.

  • upsert (optional): When True, inserts a new document if no document matches the query. Defaults to False.

  • return_document: If ReturnDocument.BEFORE (the default), returns the original document before it was updated, or None if no document matches. If ReturnDocument.AFTER, returns the updated or inserted document.

  • array_filters (optional): A list of filters specifying which array elements an update should apply. Requires MongoDB 3.6+.

  • hint (optional): An index to use to support the query predicate specified either by its string name, or in the same format as passed to create_index() (e.g. [('field', ASCENDING)]). This option is only supported on MongoDB 4.4 and above.

  • session (optional): a ClientSession, created with start_session().

  • let (optional): Map of parameter names and values. Values must be constant or closed expressions that do not reference document fields. Parameters can then be accessed as variables in an aggregate expression context (e.g. "$$var").

  • comment (optional): A user-provided comment to attach to this command.

  • **kwargs (optional): additional command arguments can be passed as keyword arguments (for example maxTimeMS can be used with recent server versions).

This command uses the WriteConcern of this Collection when connected to MongoDB >= 3.2. Note that using an elevated write concern with this command may be slower compared to using the default write concern.

Changed in version 3.0: Added let and comment parameters.

Changed in version 2.2: Added hint parameter.

Changed in version 1.2: Added array_filters and session parameters.
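As a sketch of how these parameters fit together, the following builds the filter, update, array_filters, and sort arguments for a hypothetical call. Collection and field names here are illustrative, not part of the Motor API:

```python
# Hypothetical sketch: raise the lowest failing score in a document's
# "scores" array and return the updated document.

filter_doc = {"name": "alice"}
# $[elem] applies the update only to array elements matched by the
# corresponding entry in array_filters (MongoDB 3.6+).
update_doc = {"$set": {"scores.$[elem]": 60}}
array_filters = [{"elem": {"$lt": 60}}]
# Sort so that, if several documents match, the oldest is updated.
sort_order = [("created_at", 1)]

async def pass_lowest_score(collection):
    # Assumes "collection" is an AsyncIOMotorCollection.
    from pymongo import ReturnDocument  # AFTER -> return the updated doc
    return await collection.find_one_and_update(
        filter_doc,
        update_doc,
        array_filters=array_filters,
        sort=sort_order,
        return_document=ReturnDocument.AFTER,
    )
```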

find_raw_batches(*args, **kwargs)

Query the database and retrieve batches of raw BSON.

Similar to the find() method but returns each batch as bytes.

This example demonstrates how to work with raw batches, but in practice raw batches should be passed to an external library that can decode BSON into another data type, rather than used with PyMongo’s bson module.

async def get_raw():
    cursor = db.test.find_raw_batches()
    async for batch in cursor:
        print(bson.decode_all(batch))

Note that find_raw_batches does not support sessions.

Added in version 2.0.
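Each raw batch is a run of BSON documents concatenated back to back, and every BSON document begins with a little-endian int32 giving its own total length. A stdlib-only sketch of splitting a batch without decoding it (the byte strings are hand-built minimal BSON documents, for illustration):

```python
import struct

def split_raw_batch(batch: bytes):
    """Split concatenated BSON documents without decoding them.

    Each BSON document starts with a little-endian int32 holding its
    total length in bytes, including the 4-byte prefix itself.
    """
    docs = []
    offset = 0
    while offset < len(batch):
        (length,) = struct.unpack_from("<i", batch, offset)
        docs.append(batch[offset:offset + length])
        offset += length
    return docs

# A minimal (empty) BSON document: length prefix 5, empty body, NUL.
empty_doc = b"\x05\x00\x00\x00\x00"
assert split_raw_batch(empty_doc * 2) == [empty_doc, empty_doc]
```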

coroutine index_information(session: ClientSession | None = None, comment: Any | None = None) → MutableMapping[str, Any]

Get information on this collection’s indexes.

Returns a dictionary where the keys are index names (as returned by create_index()) and the values are dictionaries containing information about each index. The dictionary is guaranteed to contain at least a single key, "key" which is a list of (key, direction) pairs specifying the index (as passed to create_index()). It will also contain any other metadata about the indexes, except for the "ns" and "name" keys, which are cleaned. For example:

async def create_x_index():
    print(await db.test.create_index("x", unique=True))
    print(await db.test.index_information())

This prints:

'x_1'
{'_id_': {'key': [('_id', 1)]},
 'x_1': {'unique': True, 'key': [('x', 1)]}}

Changed in version 3.0: Added comment parameter.

Changed in version 1.2: Added session parameter.
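The returned mapping is ordinary Python data and can be inspected without further server calls. For example, using the sample output above:

```python
# Working with the mapping shown above in plain Python (no server needed):
index_info = {
    "_id_": {"key": [("_id", 1)]},
    "x_1": {"unique": True, "key": [("x", 1)]},
}

# Names of indexes flagged unique:
unique_indexes = [name for name, meta in index_info.items()
                  if meta.get("unique")]

# The fields covered by each index, in order:
covered = {name: [field for field, _ in meta["key"]]
           for name, meta in index_info.items()}

assert unique_indexes == ["x_1"]
assert covered == {"_id_": ["_id"], "x_1": ["x"]}
```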

coroutine insert_many(documents: Iterable[_DocumentType | RawBSONDocument], ordered: bool = True, bypass_document_validation: bool = False, session: ClientSession | None = None, comment: Any | None = None) → InsertManyResult

Insert an iterable of documents.

async def insert_2_docs():
    result = await db.test.insert_many([{'x': i} for i in range(2)])
    print(result.inserted_ids)

This prints something like:

[ObjectId('54f113fffba522406c9cc20e'), ObjectId('54f113fffba522406c9cc20f')]
Parameters:
  • documents: An iterable of documents to insert.

  • ordered (optional): If True (the default) documents will be inserted on the server serially, in the order provided. If an error occurs all remaining inserts are aborted. If False, documents will be inserted on the server in arbitrary order, possibly in parallel, and all document inserts will be attempted.

  • bypass_document_validation: (optional) If True, allows the write to opt-out of document level validation. Default is False.

  • session (optional): a ClientSession, created with start_session().

  • comment (optional): A user-provided comment to attach to this command.

Returns:

An instance of InsertManyResult.

Note

bypass_document_validation requires server version >= 3.2

Changed in version 3.0: Added comment parameter.

Changed in version 1.2: Added session parameter.

coroutine insert_one(document: _DocumentType | RawBSONDocument, bypass_document_validation: bool = False, session: ClientSession | None = None, comment: Any | None = None) → InsertOneResult

Insert a single document.

async def insert_x():
    result = await db.test.insert_one({'x': 1})
    print(result.inserted_id)

This code outputs the new document’s _id:

ObjectId('54f112defba522406c9cc208')
Parameters:
  • document: The document to insert. Must be a mutable mapping type. If the document does not have an _id field one will be added automatically.

  • bypass_document_validation: (optional) If True, allows the write to opt-out of document level validation. Default is False.

  • session (optional): a ClientSession, created with start_session().

  • comment (optional): A user-provided comment to attach to this command.

Returns:

An instance of InsertOneResult.

Note

bypass_document_validation requires server version >= 3.2

Changed in version 3.0: Added comment parameter.

Changed in version 1.2: Added session parameter.

list_indexes(session=None, **kwargs)

Get a cursor over the index documents for this collection.

async def print_indexes():
    async for index in db.test.list_indexes():
        print(index)

If the only index is the default index on _id, this might print:

SON([('v', 1), ('key', SON([('_id', 1)])), ('name', '_id_')])

list_search_indexes(*args, **kwargs)

Return a cursor over search indexes for the current collection.

coroutine options(session: ClientSession | None = None, comment: Any | None = None) → MutableMapping[str, Any]

Get the options set on this collection.

Returns a dictionary of options and their values - see create_collection() for more information on the possible options. Returns an empty dictionary if the collection has not been created yet.

Parameters:
  • session (optional): a ClientSession.

  • comment (optional): A user-provided comment to attach to this command.

coroutine rename(new_name: str, session: ClientSession | None = None, comment: Any | None = None, **kwargs: Any) → MutableMapping[str, Any]

Rename this collection.

If operating in auth mode, client must be authorized as an admin to perform this operation. Raises TypeError if new_name is not an instance of str. Raises InvalidName if new_name is not a valid collection name.

Parameters:
  • new_name: new name for this collection

  • session (optional): a ClientSession.

  • comment (optional): A user-provided comment to attach to this command.

  • **kwargs (optional): additional arguments to the rename command may be passed as keyword arguments to this helper method (i.e. dropTarget=True)

Note

The write_concern of this collection is automatically applied to this operation.

coroutine replace_one(filter: Mapping[str, Any], replacement: Mapping[str, Any], upsert: bool = False, bypass_document_validation: bool = False, collation: _CollationIn | None = None, hint: _IndexKeyHint | None = None, session: ClientSession | None = None, let: Mapping[str, Any] | None = None, comment: Any | None = None) → UpdateResult

Replace a single document matching the filter.

Say our collection has one document:

{'x': 1, '_id': ObjectId('54f4c5befba5220aa4d6dee7')}

Then to replace it with another:

async def replace_x_with_y():
    result = await db.test.replace_one({'x': 1}, {'y': 1})
    print('matched %d, modified %d' %
        (result.matched_count, result.modified_count))

    print('collection:')
    async for doc in db.test.find():
        print(doc)

This prints:

matched 1, modified 1
collection:
{'y': 1, '_id': ObjectId('54f4c5befba5220aa4d6dee7')}

The upsert option can be used to insert a new document if a matching document does not exist:

async def replace_or_upsert():
    result = await db.test.replace_one({'x': 1}, {'x': 1}, True)
    print('matched %d, modified %d, upserted_id %r' %
        (result.matched_count, result.modified_count, result.upserted_id))

    print('collection:')
    async for doc in db.test.find():
        print(doc)

Since no document matches {'x': 1}, a new document is inserted, and this prints:

matched 0, modified 0, upserted_id ObjectId('54f11e5c8891e756a6e1abd4')
collection:
{'y': 1, '_id': ObjectId('54f4c5befba5220aa4d6dee7')}
{'x': 1, '_id': ObjectId('54f11e5c8891e756a6e1abd4')}
Parameters:
  • filter: A query that matches the document to replace.

  • replacement: The new document.

  • upsert (optional): If True, perform an insert if no documents match the filter.

  • bypass_document_validation: (optional) If True, allows the write to opt-out of document level validation. Default is False.

  • collation (optional): An instance of Collation.

  • hint (optional): An index to use to support the query predicate specified either by its string name, or in the same format as passed to create_index() (e.g. [('field', ASCENDING)]). This option is only supported on MongoDB 4.2 and above.

  • session (optional): a ClientSession, created with start_session().

  • let (optional): Map of parameter names and values. Values must be constant or closed expressions that do not reference document fields. Parameters can then be accessed as variables in an aggregate expression context (e.g. "$$var").

  • comment (optional): A user-provided comment to attach to this command.

Returns:

An instance of UpdateResult.

Note

bypass_document_validation requires server version >= 3.2

Changed in version 3.0: Added let and comment parameters.

Changed in version 2.2: Added hint parameter.

Changed in version 1.2: Added session parameter.
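The replacement semantics can be sketched in plain Python. This models what the server does and is not Motor code: a replacement swaps out every field of the matched document except its immutable _id:

```python
# Plain-Python model of replace semantics (illustrative only; the
# server performs the replacement for you, atomically):
def simulate_replace(original: dict, replacement: dict) -> dict:
    result = dict(replacement)
    result["_id"] = original["_id"]  # _id is never changed by a replace
    return result

doc = {"_id": 7, "x": 1, "tags": ["a"]}
# All old fields are gone; only the replacement's fields and _id remain.
assert simulate_replace(doc, {"y": 1}) == {"_id": 7, "y": 1}
```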

coroutine update_many(filter: Mapping[str, Any], update: Mapping[str, Any] | _Pipeline, upsert: bool = False, array_filters: Sequence[Mapping[str, Any]] | None = None, bypass_document_validation: bool | None = None, collation: _CollationIn | None = None, hint: _IndexKeyHint | None = None, session: ClientSession | None = None, let: Mapping[str, Any] | None = None, comment: Any | None = None) → UpdateResult

Update one or more documents that match the filter.

Say our collection has 3 documents:

{'x': 1, '_id': 0}
{'x': 1, '_id': 1}
{'x': 1, '_id': 2}

We can add 3 to each “x” field:

async def add_3_to_x():
  result = await db.test.update_many({'x': 1}, {'$inc': {'x': 3}})
  print('matched %d, modified %d' %
        (result.matched_count, result.modified_count))

  print('collection:')
  async for doc in db.test.find():
      print(doc)

This prints:

matched 3, modified 3
collection:
{'x': 4, '_id': 0}
{'x': 4, '_id': 1}
{'x': 4, '_id': 2}
Parameters:
  • filter: A query that matches the documents to update.

  • update: The modifications to apply.

  • upsert (optional): If True, perform an insert if no documents match the filter.

  • bypass_document_validation (optional): If True, allows the write to opt-out of document level validation. Default is False.

  • collation (optional): An instance of Collation.

  • array_filters (optional): A list of filters specifying which array elements an update should apply to. Requires MongoDB 3.6+.

  • hint (optional): An index to use to support the query predicate specified either by its string name, or in the same format as passed to create_index() (e.g. [('field', ASCENDING)]). This option is only supported on MongoDB 4.2 and above.

  • session (optional): a ClientSession, created with start_session().

  • let (optional): Map of parameter names and values. Values must be constant or closed expressions that do not reference document fields. Parameters can then be accessed as variables in an aggregate expression context (e.g. "$$var").

  • comment (optional): A user-provided comment to attach to this command.

Returns:

An instance of UpdateResult.

Note

bypass_document_validation requires server version >= 3.2

Changed in version 3.0: Added let and comment parameters.

Changed in version 2.2: Added hint parameter.

Changed in version 1.2: Added array_filters and session parameters.
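The server-side effect of the $inc example above can be modeled in plain Python. This is illustrative only; the server applies the update atomically per document:

```python
# Plain-Python model of update_many with an equality filter and $inc:
def apply_inc(docs, filter_doc, inc_spec):
    for doc in docs:
        if all(doc.get(k) == v for k, v in filter_doc.items()):
            for field, amount in inc_spec.items():
                doc[field] = doc.get(field, 0) + amount
    return docs

docs = [{"x": 1, "_id": i} for i in range(3)]
apply_inc(docs, {"x": 1}, {"x": 3})
assert docs == [{"x": 4, "_id": 0}, {"x": 4, "_id": 1}, {"x": 4, "_id": 2}]
```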

coroutine update_one(filter: Mapping[str, Any], update: Mapping[str, Any] | _Pipeline, upsert: bool = False, bypass_document_validation: bool = False, collation: _CollationIn | None = None, array_filters: Sequence[Mapping[str, Any]] | None = None, hint: _IndexKeyHint | None = None, session: ClientSession | None = None, let: Mapping[str, Any] | None = None, comment: Any | None = None) → UpdateResult

Update a single document matching the filter.

Say our collection has 3 documents:

{'x': 1, '_id': 0}
{'x': 1, '_id': 1}
{'x': 1, '_id': 2}

We can add 3 to the “x” field of one of the documents:

async def add_3_to_x():
  result = await db.test.update_one({'x': 1}, {'$inc': {'x': 3}})
  print('matched %d, modified %d' %
        (result.matched_count, result.modified_count))

  print('collection:')
  async for doc in db.test.find():
      print(doc)

This prints:

matched 1, modified 1
collection:
{'x': 4, '_id': 0}
{'x': 1, '_id': 1}
{'x': 1, '_id': 2}
Parameters:
  • filter: A query that matches the document to update.

  • update: The modifications to apply.

  • upsert (optional): If True, perform an insert if no documents match the filter.

  • bypass_document_validation: (optional) If True, allows the write to opt-out of document level validation. Default is False.

  • collation (optional): An instance of Collation.

  • array_filters (optional): A list of filters specifying which array elements an update should apply to. Requires MongoDB 3.6+.

  • hint (optional): An index to use to support the query predicate specified either by its string name, or in the same format as passed to create_index() (e.g. [('field', ASCENDING)]). This option is only supported on MongoDB 4.2 and above.

  • session (optional): a ClientSession, created with start_session().

  • let (optional): Map of parameter names and values. Values must be constant or closed expressions that do not reference document fields. Parameters can then be accessed as variables in an aggregate expression context (e.g. "$$var").

  • comment (optional): A user-provided comment to attach to this command.

Returns:

An instance of UpdateResult.

Note

bypass_document_validation requires server version >= 3.2

Changed in version 3.0: Added let and comment parameters.

Changed in version 2.2: Added hint parameter.

Changed in version 1.2: Added array_filters and session parameters.

coroutine update_search_index(name: str, definition: Mapping[str, Any], session: ClientSession | None = None, comment: Any | None = None, **kwargs: Any) → None

Update a search index by replacing the existing index definition with the provided definition.

Parameters:
  • name: The name of the search index to be updated.

  • definition: The new search index definition.

  • session (optional): a ClientSession.

  • comment (optional): A user-provided comment to attach to this command.

  • **kwargs (optional): optional arguments to the updateSearchIndexes command (like maxTimeMS) can be passed as keyword arguments.

Note

Requires a MongoDB 7.0+ Atlas cluster.
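A search index definition is a plain mapping. The following hypothetical Atlas Search definition shows the shape; the index name and field names are illustrative:

```python
# Hypothetical Atlas Search definition: statically index only the
# "title" field as a string (field names are illustrative).
new_definition = {
    "mappings": {
        "dynamic": False,
        "fields": {
            "title": {"type": "string"},
        },
    }
}

async def refresh_index(collection):
    # Assumes "collection" is an AsyncIOMotorCollection on a
    # MongoDB 7.0+ Atlas cluster with a search index named "default".
    await collection.update_search_index("default", new_definition)
```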

watch(pipeline=None, full_document=None, resume_after=None, max_await_time_ms=None, batch_size=None, collation=None, start_at_operation_time=None, session=None, start_after=None, comment=None, full_document_before_change=None, show_expanded_events=None)

Watch changes on this collection.

Performs an aggregation with an implicit initial $changeStream stage and returns a MotorChangeStream cursor which iterates over changes on this collection.

Introduced in MongoDB 3.6.

A change stream continues waiting indefinitely for matching change events. Code like the following allows a program to cancel the change stream and exit.

change_stream = None


async def watch_collection():
    global change_stream

    # Using the change stream in an "async with" block
    # ensures it is canceled promptly if your code breaks
    # from the loop or throws an exception.
    async with db.collection.watch() as change_stream:
        async for change in change_stream:
            print(change)


# Tornado
from tornado.ioloop import IOLoop


def main():
    loop = IOLoop.current()
    # Start watching collection for changes.
    try:
        loop.run_sync(watch_collection)
    except KeyboardInterrupt:
        if change_stream:
            loop.run_sync(change_stream.close)


# asyncio
import asyncio

try:
    asyncio.run(watch_collection())
except KeyboardInterrupt:
    pass

The MotorChangeStream async iterable blocks until the next change document is returned or an error is raised. If the next() method encounters a network error when retrieving a batch from the server, it will automatically attempt to recreate the cursor such that no change events are missed. Any error encountered during the resume attempt indicates there may be an outage and will be raised.

try:
    pipeline = [{"$match": {"operationType": "insert"}}]
    async with db.collection.watch(pipeline) as stream:
        async for change in stream:
            print(change)
except pymongo.errors.PyMongoError:
    # The ChangeStream encountered an unrecoverable error or the
    # resume attempt failed to recreate the cursor.
    logging.error("...")

For a precise description of the resume process see the change streams specification.

Parameters:
  • pipeline (optional): A list of aggregation pipeline stages to append to an initial $changeStream stage. Not all pipeline stages are valid after a $changeStream stage, see the MongoDB documentation on change streams for the supported stages.

  • full_document (optional): The fullDocument option to pass to the $changeStream stage. Allowed values: 'updateLookup'. When set to 'updateLookup', the change notification for partial updates will include both a delta describing the changes to the document and a copy of the entire document that was changed, from some time after the change occurred.

  • resume_after (optional): A resume token. If provided, the change stream will start returning changes that occur directly after the operation specified in the resume token. A resume token is the _id value of a change document.

  • max_await_time_ms (optional): The maximum time in milliseconds for the server to wait for changes before responding to a getMore operation.

  • batch_size (optional): The maximum number of documents to return per batch.

  • collation (optional): The Collation to use for the aggregation.

  • session (optional): a ClientSession.

  • start_after (optional): The same as resume_after except that start_after can resume notifications after an invalidate event. This option and resume_after are mutually exclusive.

  • comment (optional): A user-provided comment to attach to this command.

  • full_document_before_change (optional): Allowed values: 'whenAvailable' and 'required'. When set, change events include a fullDocumentBeforeChange response field.

  • show_expanded_events (optional): Include expanded events such as DDL events like dropIndexes.

Returns:

A MotorChangeStream.

See the Tornado Change Stream Example.

Changed in version 3.2: Added show_expanded_events parameter.

Changed in version 3.1: Added full_document_before_change parameter.

Changed in version 3.0: Added comment parameter.

Changed in version 2.1: Added the start_after parameter.

Added in version 1.2.

See also

The MongoDB documentation on

changeStreams
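Because a resume token is just the _id of a change document, a watcher can persist it and resume after a restart. A minimal sketch, assuming placeholder load_token and save_token callables (they are not Motor APIs):

```python
# Hypothetical sketch: persist the resume token after each change so a
# restarted watcher can continue where it left off.
def extract_resume_token(change: dict):
    # A resume token is the _id value of a change document.
    return change["_id"]

async def watch_with_resume(collection, load_token, save_token):
    # Assumes "collection" is an AsyncIOMotorCollection; load_token()
    # returns a previously saved token, or None on first run.
    async with collection.watch(resume_after=load_token()) as stream:
        async for change in stream:
            save_token(extract_resume_token(change))
            print(change)
```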

with_options(codec_options: bson.CodecOptions[_DocumentTypeArg] | None = None, read_preference: _ServerMode | None = None, write_concern: WriteConcern | None = None, read_concern: ReadConcern | None = None) → Collection[_DocumentType]

Get a clone of this collection changing the specified settings.

>>> coll1.read_preference
Primary()
>>> from pymongo import ReadPreference
>>> coll2 = coll1.with_options(read_preference=ReadPreference.SECONDARY)
>>> coll1.read_preference
Primary()
>>> coll2.read_preference
Secondary(tag_sets=None)
Parameters:
  • codec_options (optional): An instance of CodecOptions. If None (the default) the codec_options of this Collection is used.

  • read_preference (optional): The read preference to use. If None (the default) the read_preference of this Collection is used. See read_preferences for options.

  • write_concern (optional): An instance of WriteConcern. If None (the default) the write_concern of this Collection is used.

  • read_concern (optional): An instance of ReadConcern. If None (the default) the read_concern of this Collection is used.

property codec_options

Read only access to the CodecOptions of this instance.

property full_name

The full name of this Collection.

The full name is of the form database_name.collection_name.

property name

The name of this Collection.

property read_concern

Read only access to the ReadConcern of this instance.

Added in version 3.2.

property read_preference

Read only access to the read preference of this instance.

Changed in version 3.0: The read_preference attribute is now read only.

property write_concern

Read only access to the WriteConcern of this instance.

Changed in version 3.0: The write_concern attribute is now read only.