AsyncIOMotorCollection

class motor.motor_asyncio.AsyncIOMotorCollection(database, name, _delegate=None)
c[name] || c.name

Get the name sub-collection of AsyncIOMotorCollection c.

Raises InvalidName if an invalid collection name is used.

database

The AsyncIOMotorDatabase that this AsyncIOMotorCollection is a part of.

coroutine create_index(self, keys, **kwargs)

Creates an index on this collection.

Takes either a single key or a list of (key, direction) pairs. The key(s) must be an instance of basestring (str in python 3), and the direction(s) must be one of (ASCENDING, DESCENDING, GEO2D, GEOHAYSTACK, GEOSPHERE, HASHED, TEXT).

To create a single key ascending index on the key 'mike' we just use a string argument:

await my_collection.create_index("mike")

For a compound index on 'mike' descending and 'eliot' ascending we need to use a list of tuples:

await my_collection.create_index([("mike", pymongo.DESCENDING),
                                  ("eliot", pymongo.ASCENDING)])

All optional index creation parameters should be passed as keyword arguments to this method. For example:

await my_collection.create_index([("mike", pymongo.DESCENDING)],
                                 background=True)

Valid options include, but are not limited to:

  • name: custom name to use for this index - if none is given, a name will be generated.
  • unique: if True creates a uniqueness constraint on the index.
  • background: if True this index should be created in the background.
  • sparse: if True, omit from the index any documents that lack the indexed field.
  • bucketSize: for use with geoHaystack indexes. Number of documents to group together within a certain proximity to a given longitude and latitude.
  • min: minimum value for keys in a GEO2D index.
  • max: maximum value for keys in a GEO2D index.
  • expireAfterSeconds: <int> Used to create an expiring (TTL) index. MongoDB will automatically delete documents from this collection after <int> seconds. The indexed field must be a UTC datetime or the data will not expire.
  • partialFilterExpression: A document that specifies a filter for a partial index.
  • collation (optional): An instance of Collation. This option is only supported on MongoDB 3.4 and above.

See the MongoDB documentation for a full list of supported options by server version.

Warning

dropDups is not supported by MongoDB 3.0 or newer. The option is silently ignored by the server and unique index builds using the option will fail if a duplicate value is detected.

Note

partialFilterExpression requires server version >= 3.2

Note

The write_concern of this collection is automatically applied to this operation when using MongoDB >= 3.4.

Parameters:
  • keys: a single key or a list of (key, direction) pairs specifying the index to create
  • **kwargs (optional): any additional index creation options (see the above list) should be passed as keyword arguments

See general MongoDB documentation: indexes

coroutine inline_map_reduce(self, map, reduce, full_response=False, **kwargs)

Perform an inline map/reduce operation on this collection.

Perform the map/reduce operation on the server in RAM. A result collection is not created. The result set is returned as a list of documents.

If full_response is False (default) returns the result documents in a list. Otherwise, returns the full response from the server to the map reduce command.

The inline_map_reduce() method obeys the read_preference of this Collection.

Parameters:
  • map: map function (as a JavaScript string)

  • reduce: reduce function (as a JavaScript string)

  • full_response (optional): if True, return full response to this command - otherwise just return the result collection

  • **kwargs (optional): additional arguments to the map reduce command may be passed as keyword arguments to this helper method, e.g.:

    await db.test.inline_map_reduce(map, reduce, limit=2)
    

See general MongoDB documentation: mapreduce

aggregate(pipeline, **kwargs)

Execute an aggregation pipeline on this collection.

The aggregation can be run on a secondary if the client is connected to a replica set and its read_preference is not PRIMARY.

Parameters:
  • pipeline: a single command or list of aggregation commands
  • **kwargs: send arbitrary parameters to the aggregate command

Returns a MotorCommandCursor that can be iterated like a cursor from find():

pipeline = [{'$project': {'name': {'$toUpper': '$name'}}}]

async def f():
    cursor = collection.aggregate(pipeline)
    while (await cursor.fetch_next):
        doc = cursor.next_object()
        print(doc)

Aggregation cursors can also be iterated elegantly in native coroutines with async for:

async def f():
    async for doc in collection.aggregate(pipeline):
        print(doc)

Changed in version 1.0: aggregate() now always returns a cursor.

Changed in version 0.5: aggregate() now returns a cursor by default, and the cursor is returned immediately without a yield. See aggregation changes in Motor 0.5.

Changed in version 0.2: Added cursor support.

coroutine bulk_write(requests, ordered=True, bypass_document_validation=False)

Send a batch of write operations to the server.

Requests are passed as a list of write operation instances ( InsertOne, UpdateOne, UpdateMany, ReplaceOne, DeleteOne, or DeleteMany).

>>> for doc in db.test.find({}):
...     print(doc)
...
{u'x': 1, u'_id': ObjectId('54f62e60fba5226811f634ef')}
{u'x': 1, u'_id': ObjectId('54f62e60fba5226811f634f0')}
>>> # DeleteMany, UpdateOne, and UpdateMany are also available.
...
>>> from pymongo import InsertOne, DeleteOne, ReplaceOne
>>> requests = [InsertOne({'y': 1}), DeleteOne({'x': 1}),
...             ReplaceOne({'w': 1}, {'z': 1}, upsert=True)]
>>> result = db.test.bulk_write(requests)
>>> result.inserted_count
1
>>> result.deleted_count
1
>>> result.modified_count
0
>>> result.upserted_ids
{2: ObjectId('54f62ee28891e756a6e1abd5')}
>>> for doc in db.test.find({}):
...     print(doc)
...
{u'x': 1, u'_id': ObjectId('54f62e60fba5226811f634f0')}
{u'y': 1, u'_id': ObjectId('54f62ee2fba5226811f634f1')}
{u'z': 1, u'_id': ObjectId('54f62ee28891e756a6e1abd5')}
Parameters:
  • requests: A list of write operations (see examples above).
  • ordered (optional): If True (the default) requests will be performed on the server serially, in the order provided. If an error occurs all remaining operations are aborted. If False requests will be performed on the server in arbitrary order, possibly in parallel, and all operations will be attempted.
  • bypass_document_validation: (optional) If True, allows the write to opt-out of document level validation. Default is False.
Returns:

An instance of BulkWriteResult.

Note

bypass_document_validation requires server version >= 3.2
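
The doctest above follows PyMongo's blocking style; with Motor the coroutine must be awaited. A minimal sketch of the same batch from an asyncio coroutine (the collection name db.test is carried over from the examples above):

from pymongo import InsertOne, DeleteOne, ReplaceOne

async def do_bulk_write():
    requests = [InsertOne({'y': 1}), DeleteOne({'x': 1}),
                ReplaceOne({'w': 1}, {'z': 1}, upsert=True)]
    result = await db.test.bulk_write(requests)
    # BulkWriteResult exposes counts for each operation type.
    print(result.inserted_count, result.deleted_count, result.upserted_ids)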

coroutine count(filter=None, **kwargs)

Get the number of documents in this collection.

All optional count parameters should be passed as keyword arguments to this method. Valid options include:

  • hint (string or list of tuples): The index to use. Specify either the index name as a string or the index specification as a list of tuples (e.g. [('a', pymongo.ASCENDING), ('b', pymongo.ASCENDING)]).
  • limit (int): The maximum number of documents to count.
  • skip (int): The number of matching documents to skip before returning results.
  • maxTimeMS (int): The maximum amount of time to allow the count command to run, in milliseconds.
  • collation (optional): An instance of Collation. This option is only supported on MongoDB 3.4 and above.

The count() method obeys the read_preference of this Collection.

Parameters:
  • filter (optional): A query document that selects which documents to count in the collection.
  • **kwargs (optional): See list of options above.
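
For example, a brief asyncio sketch of counting with a filter and some of the options above (the limit and maxTimeMS values are illustrative):

async def count_matching():
    # Count at most 100 matching documents, giving the server one second.
    n = await db.test.count({'x': 1}, limit=100, maxTimeMS=1000)
    print('matching documents:', n)
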
coroutine create_indexes(indexes)

Create one or more indexes on this collection.

>>> from pymongo import IndexModel, ASCENDING, DESCENDING
>>> index1 = IndexModel([("hello", DESCENDING),
...                      ("world", ASCENDING)], name="hello_world")
>>> index2 = IndexModel([("goodbye", DESCENDING)])
>>> db.test.create_indexes([index1, index2])
["hello_world"]
Parameters:
  • indexes: A list of IndexModel instances.

Note

create_indexes uses the createIndexes command introduced in MongoDB 2.6 and cannot be used with earlier versions.

Note

The write_concern of this collection is automatically applied to this operation when using MongoDB >= 3.4.
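
With Motor the call must be awaited; a minimal asyncio sketch of the example above:

from pymongo import IndexModel, ASCENDING, DESCENDING

async def build_indexes():
    index1 = IndexModel([("hello", DESCENDING),
                         ("world", ASCENDING)], name="hello_world")
    index2 = IndexModel([("goodbye", DESCENDING)])
    # Returns the names of the created indexes.
    names = await db.test.create_indexes([index1, index2])
    print(names)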

coroutine delete_many(filter, collation=None)

Delete one or more documents matching the filter.

>>> db.test.count({'x': 1})
3
>>> result = db.test.delete_many({'x': 1})
>>> result.deleted_count
3
>>> db.test.count({'x': 1})
0
Parameters:
  • filter: A query that matches the documents to delete.
  • collation (optional): An instance of Collation. This option is only supported on MongoDB 3.4 and above.
Returns:

An instance of DeleteResult.

coroutine delete_one(filter, collation=None)

Delete a single document matching the filter.

>>> db.test.count({'x': 1})
3
>>> result = db.test.delete_one({'x': 1})
>>> result.deleted_count
1
>>> db.test.count({'x': 1})
2
Parameters:
  • filter: A query that matches the document to delete.
  • collation (optional): An instance of Collation. This option is only supported on MongoDB 3.4 and above.
Returns:

An instance of DeleteResult.

coroutine distinct(key, filter=None, **kwargs)

Get a list of distinct values for key among all documents in this collection.

Raises TypeError if key is not an instance of basestring (str in python 3).

All optional distinct parameters should be passed as keyword arguments to this method. Valid options include:

  • maxTimeMS (int): The maximum amount of time to allow the distinct command to run, in milliseconds.
  • collation (optional): An instance of Collation. This option is only supported on MongoDB 3.4 and above.

The distinct() method obeys the read_preference of this Collection.

Parameters:
  • key: name of the field for which we want to get the distinct values
  • filter (optional): A query document that specifies the documents from which to retrieve the distinct values.
  • **kwargs (optional): See list of options above.
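
For example, a sketch of retrieving the distinct values of a field x among documents matching a filter (the field names and maxTimeMS value are illustrative):

async def distinct_x():
    # Distinct values of 'x' among documents where 'y' is positive.
    values = await db.test.distinct('x', {'y': {'$gt': 0}}, maxTimeMS=1000)
    print(values)
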
coroutine drop()

Alias for drop_collection().

The following two calls are equivalent:

>>> db.foo.drop()
>>> db.drop_collection("foo")
coroutine drop_index(index_or_name)

Drops the specified index on this collection.

Can be used on non-existent collections or collections with no indexes. Raises OperationFailure on an error (e.g. trying to drop an index that does not exist). index_or_name can be either an index name (as returned by create_index), or an index specifier (as passed to create_index). An index specifier should be a list of (key, direction) pairs. Raises TypeError if index is not an instance of (str, unicode, list).

Warning

if a custom name was used on index creation (by passing the name parameter to create_index() or ensure_index()) the index must be dropped by name.

Parameters:
  • index_or_name: index (or name of index) to drop

Note

The write_concern of this collection is automatically applied to this operation when using MongoDB >= 3.4.
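
For example, a brief sketch of dropping the descending index on 'mike' from the create_index examples above (the generated name 'mike_-1' is assumed):

import pymongo

async def drop_mike_index():
    # Drop by the name generated by create_index...
    await db.test.drop_index('mike_-1')

# ...or, equivalently, by the specifier passed to create_index:
#     await db.test.drop_index([('mike', pymongo.DESCENDING)])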

coroutine drop_indexes()

Drops all indexes on this collection.

Can be used on non-existent collections or collections with no indexes. Raises OperationFailure on an error.

Note

The write_concern of this collection is automatically applied to this operation when using MongoDB >= 3.4.

coroutine ensure_index(key_or_list, cache_for=300, **kwargs)

DEPRECATED - Ensures that an index exists on this collection.

find(*args, **kwargs)

Create a MotorCursor. Same parameters as for PyMongo’s find().

Note that find does not take a callback parameter, nor does it return a Future, because find merely creates a MotorCursor without performing any operations on the server. MotorCursor methods such as to_list() or count() perform actual operations.
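
For example, a sketch that builds a cursor and then runs the query on the server with to_list() (the filter, sort, and limit are illustrative):

async def recent_docs():
    cursor = collection.find({'i': {'$gte': 0}}).sort('i', -1).limit(10)
    # The query is only sent to the server here.
    docs = await cursor.to_list(length=10)
    for doc in docs:
        print(doc)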

coroutine find_and_modify(query={}, update=None, upsert=False, sort=None, full_response=False, manipulate=False, **kwargs)

Update and return an object.

DEPRECATED - Use find_one_and_delete(), find_one_and_replace(), or find_one_and_update() instead.

coroutine find_one(filter=None, *args, **kwargs)

Get a single document from the database.

All arguments to find() are also valid arguments for find_one(), although any limit argument will be ignored. Returns a single document, or None if no matching document is found.

The find_one() method obeys the read_preference of this Collection.

Parameters:
  • filter (optional): a dictionary specifying the query to be performed OR any other type to be used as the value for a query for "_id".

  • *args (optional): any additional positional arguments are the same as the arguments to find().

  • **kwargs (optional): any additional keyword arguments are the same as the arguments to find().

  • max_time_ms (optional): a value for max_time_ms may be specified as part of **kwargs, e.g.

    >>> find_one(max_time_ms=100)
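
A brief asyncio sketch of a typical call (the filter value and max_time_ms are illustrative):

async def get_doc():
    doc = await collection.find_one({'_id': 'userid'}, max_time_ms=100)
    if doc is None:
        print('no matching document')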
    
coroutine find_one_and_delete(filter, projection=None, sort=None, **kwargs)

Finds a single document and deletes it, returning the document.

>>> db.test.count({'x': 1})
2
>>> db.test.find_one_and_delete({'x': 1})
{u'x': 1, u'_id': ObjectId('54f4e12bfba5220aa4d6dee8')}
>>> db.test.count({'x': 1})
1

If multiple documents match filter, a sort can be applied.

>>> for doc in db.test.find({'x': 1}):
...     print(doc)
...
{u'x': 1, u'_id': 0}
{u'x': 1, u'_id': 1}
{u'x': 1, u'_id': 2}
>>> db.test.find_one_and_delete(
...     {'x': 1}, sort=[('_id', pymongo.DESCENDING)])
{u'x': 1, u'_id': 2}

The projection option can be used to limit the fields returned.

>>> db.test.find_one_and_delete({'x': 1}, projection={'_id': False})
{u'x': 1}
Parameters:
  • filter: A query that matches the document to delete.
  • projection (optional): a list of field names that should be returned in the result document or a mapping specifying the fields to include or exclude. If projection is a list "_id" will always be returned. Use a mapping to exclude fields from the result (e.g. projection={'_id': False}).
  • sort (optional): a list of (key, direction) pairs specifying the sort order for the query. If multiple documents match the query, they are sorted and the first is deleted.
  • **kwargs (optional): additional command arguments can be passed as keyword arguments (for example maxTimeMS can be used with recent server versions).

Warning

Starting in PyMongo 3.2, this command uses the WriteConcern of this Collection when connected to MongoDB >= 3.2. Note that using an elevated write concern with this command may be slower compared to using the default write concern.

coroutine find_one_and_replace(filter, replacement, projection=None, sort=None, upsert=False, return_document=False, **kwargs)

Finds a single document and replaces it, returning either the original or the replaced document.

The find_one_and_replace() method differs from find_one_and_update() by replacing the document matched by filter, rather than modifying the existing document.

>>> for doc in db.test.find({}):
...     print(doc)
...
{u'x': 1, u'_id': 0}
{u'x': 1, u'_id': 1}
{u'x': 1, u'_id': 2}
>>> db.test.find_one_and_replace({'x': 1}, {'y': 1})
{u'x': 1, u'_id': 0}
>>> for doc in db.test.find({}):
...     print(doc)
...
{u'y': 1, u'_id': 0}
{u'x': 1, u'_id': 1}
{u'x': 1, u'_id': 2}
Parameters:
  • filter: A query that matches the document to replace.
  • replacement: The replacement document.
  • projection (optional): A list of field names that should be returned in the result document or a mapping specifying the fields to include or exclude. If projection is a list "_id" will always be returned. Use a mapping to exclude fields from the result (e.g. projection={'_id': False}).
  • sort (optional): a list of (key, direction) pairs specifying the sort order for the query. If multiple documents match the query, they are sorted and the first is replaced.
  • upsert (optional): When True, inserts a new document if no document matches the query. Defaults to False.
  • return_document: If ReturnDocument.BEFORE (the default), returns the original document before it was replaced, or None if no document matches. If ReturnDocument.AFTER, returns the replaced or inserted document.
  • **kwargs (optional): additional command arguments can be passed as keyword arguments (for example maxTimeMS can be used with recent server versions).

Warning

Starting in PyMongo 3.2, this command uses the WriteConcern of this Collection when connected to MongoDB >= 3.2. Note that using an elevated write concern with this command may be slower compared to using the default write concern.

coroutine find_one_and_update(filter, update, projection=None, sort=None, upsert=False, return_document=False, **kwargs)

Finds a single document and updates it, returning either the original or the updated document.

>>> db.test.find_one_and_update(
...    {'_id': 665}, {'$inc': {'count': 1}, '$set': {'done': True}})
{u'_id': 665, u'done': False, u'count': 25}

By default find_one_and_update() returns the original version of the document before the update was applied. To return the updated version of the document instead, use the return_document option.

>>> from pymongo import ReturnDocument
>>> db.example.find_one_and_update(
...     {'_id': 'userid'},
...     {'$inc': {'seq': 1}},
...     return_document=ReturnDocument.AFTER)
{u'_id': u'userid', u'seq': 1}

You can limit the fields returned with the projection option.

>>> db.example.find_one_and_update(
...     {'_id': 'userid'},
...     {'$inc': {'seq': 1}},
...     projection={'seq': True, '_id': False},
...     return_document=ReturnDocument.AFTER)
{u'seq': 2}

The upsert option can be used to create the document if it doesn’t already exist.

>>> db.example.delete_many({}).deleted_count
1
>>> db.example.find_one_and_update(
...     {'_id': 'userid'},
...     {'$inc': {'seq': 1}},
...     projection={'seq': True, '_id': False},
...     upsert=True,
...     return_document=ReturnDocument.AFTER)
{u'seq': 1}

If multiple documents match filter, a sort can be applied.

>>> for doc in db.test.find({'done': True}):
...     print(doc)
...
{u'_id': 665, u'done': True, u'result': {u'count': 26}}
{u'_id': 701, u'done': True, u'result': {u'count': 17}}
>>> db.test.find_one_and_update(
...     {'done': True},
...     {'$set': {'final': True}},
...     sort=[('_id', pymongo.DESCENDING)])
{u'_id': 701, u'done': True, u'result': {u'count': 17}}
Parameters:
  • filter: A query that matches the document to update.
  • update: The update operations to apply.
  • projection (optional): A list of field names that should be returned in the result document or a mapping specifying the fields to include or exclude. If projection is a list "_id" will always be returned. Use a dict to exclude fields from the result (e.g. projection={'_id': False}).
  • sort (optional): a list of (key, direction) pairs specifying the sort order for the query. If multiple documents match the query, they are sorted and the first is updated.
  • upsert (optional): When True, inserts a new document if no document matches the query. Defaults to False.
  • return_document: If ReturnDocument.BEFORE (the default), returns the original document before it was updated, or None if no document matches. If ReturnDocument.AFTER, returns the updated or inserted document.
  • **kwargs (optional): additional command arguments can be passed as keyword arguments (for example maxTimeMS can be used with recent server versions).

Warning

Starting in PyMongo 3.2, this command uses the WriteConcern of this Collection when connected to MongoDB >= 3.2. Note that using an elevated write concern with this command may be slower compared to using the default write concern.

coroutine group(key, condition, initial, reduce, finalize=None, **kwargs)

Perform a query similar to an SQL group by operation.

Returns an array of grouped items.

The key parameter can be:

  • None to use the entire document as a key.
  • A list of keys (each a basestring (str in python 3)) to group by.
  • A basestring (str in python 3), or Code instance containing a JavaScript function to be applied to each document, returning the key to group by.

The group() method obeys the read_preference of this Collection.

Parameters:
  • key: fields to group by (see above description)
  • condition: specification of rows to be considered (as a find() query specification)
  • initial: initial value of the aggregation counter object
  • reduce: aggregation function as a JavaScript string
  • finalize: function to be called on each object in output list.
  • **kwargs (optional): additional arguments to the group command may be passed as keyword arguments to this helper method
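
As a sketch, grouping documents by a field x and counting each group (the field names and JavaScript reducer are illustrative):

async def group_by_x():
    results = await db.test.group(
        key=['x'],
        condition={},
        initial={'count': 0},
        reduce='function(doc, prev) { prev.count += 1; }')
    print(results)
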
coroutine index_information()

Get information on this collection’s indexes.

Returns a dictionary where the keys are index names (as returned by create_index()) and the values are dictionaries containing information about each index. The dictionary is guaranteed to contain at least a single key, "key" which is a list of (key, direction) pairs specifying the index (as passed to create_index()). It will also contain any other metadata about the indexes, except for the "ns" and "name" keys, which are cleaned. Example output might look like this:

>>> db.test.ensure_index("x", unique=True)
u'x_1'
>>> db.test.index_information()
{u'_id_': {u'key': [(u'_id', 1)]},
 u'x_1': {u'unique': True, u'key': [(u'x', 1)]}}
initialize_ordered_bulk_op(bypass_document_validation=False)

Initialize an ordered batch of write operations.

Operations will be performed on the server serially, in the order provided. If an error occurs all remaining operations are aborted.

Parameters:
  • bypass_document_validation: (optional) If True, allows the write to opt-out of document level validation. Default is False.

Returns a MotorBulkOperationBuilder instance.

See Ordered Bulk Write Operations for examples.

Changed in version 1.0: Added bypass_document_validation support

New in version 0.2.
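
A minimal sketch of the builder under asyncio, assuming it mirrors PyMongo's bulk API (insert(), find().update_one()) with an awaitable execute(); only execute() performs I/O:

async def ordered_bulk():
    bulk = db.test.initialize_ordered_bulk_op()
    bulk.insert({'a': 1})
    bulk.find({'a': 1}).update_one({'$set': {'b': 2}})
    # The queued operations are sent to the server here.
    result = await bulk.execute()
    print(result)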

initialize_unordered_bulk_op(bypass_document_validation=False)

Initialize an unordered batch of write operations.

Operations will be performed on the server in arbitrary order, possibly in parallel. All operations will be attempted.

Parameters:
  • bypass_document_validation: (optional) If True, allows the write to opt-out of document level validation. Default is False.

Returns a MotorBulkOperationBuilder instance.

See Unordered Bulk Write Operations for examples.

Changed in version 1.0: Added bypass_document_validation support

New in version 0.2.

coroutine insert(doc_or_docs, manipulate=True, check_keys=True, continue_on_error=False, **kwargs)

Insert a document(s) into this collection.

DEPRECATED - Use insert_one() or insert_many() instead.

coroutine insert_many(documents, ordered=True, bypass_document_validation=False)

Insert an iterable of documents.

>>> db.test.count()
0
>>> result = db.test.insert_many([{'x': i} for i in range(2)])
>>> result.inserted_ids
[ObjectId('54f113fffba522406c9cc20e'), ObjectId('54f113fffba522406c9cc20f')]
>>> db.test.count()
2
Parameters:
  • documents: An iterable of documents to insert.
  • ordered (optional): If True (the default) documents will be inserted on the server serially, in the order provided. If an error occurs all remaining inserts are aborted. If False, documents will be inserted on the server in arbitrary order, possibly in parallel, and all document inserts will be attempted.
  • bypass_document_validation: (optional) If True, allows the write to opt-out of document level validation. Default is False.
Returns:

An instance of InsertManyResult.

Note

bypass_document_validation requires server version >= 3.2

coroutine insert_one(document, bypass_document_validation=False)

Insert a single document.

>>> db.test.count({'x': 1})
0
>>> result = db.test.insert_one({'x': 1})
>>> result.inserted_id
ObjectId('54f112defba522406c9cc208')
>>> db.test.find_one({'x': 1})
{u'x': 1, u'_id': ObjectId('54f112defba522406c9cc208')}
Parameters:
  • document: The document to insert. Must be a mutable mapping type. If the document does not have an _id field one will be added automatically.
  • bypass_document_validation: (optional) If True, allows the write to opt-out of document level validation. Default is False.
Returns:

An instance of InsertOneResult.

Note

bypass_document_validation requires server version >= 3.2

coroutine list_indexes()

Get a cursor over the index documents for this collection.

>>> for index in db.test.list_indexes():
...     print(index)
...
SON([(u'v', 1), (u'key', SON([(u'_id', 1)])),
     (u'name', u'_id_'), (u'ns', u'test.test')])
Returns: An instance of CommandCursor.
coroutine map_reduce(map, reduce, out, full_response=False, **kwargs)

Perform a map/reduce operation on this collection.

If full_response is False (default) returns a MotorCollection instance containing the results of the operation. Otherwise, returns the full response from the server to the map reduce command.

Parameters:
  • map: map function (as a JavaScript string)

  • reduce: reduce function (as a JavaScript string)

  • out: output collection name or out object (dict). See the map reduce command documentation for available options. Note: out options are order sensitive. SON can be used to specify multiple options. e.g. SON([('replace', <collection name>), ('db', <database name>)])

  • full_response (optional): if True, return full response to this command - otherwise just return the result collection

  • callback (optional): function taking (result, error), executed when operation completes.

  • **kwargs (optional): additional arguments to the map reduce command may be passed as keyword arguments to this helper method, e.g.:

    result = await db.test.map_reduce(map, reduce, "myresults", limit=2)
    

If a callback is passed, returns None, else returns a Future.

Note

The map_reduce() method does not obey the read_preference of this MotorCollection. To run mapReduce on a secondary use the inline_map_reduce() method instead.

See general MongoDB documentation: mapreduce

coroutine options()

Get the options set on this collection.

Returns a dictionary of options and their values - see create_collection() for more information on the possible options. Returns an empty dictionary if the collection has not been created yet.
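
For example, a brief sketch (the capped-collection output shown in the comment is illustrative):

async def show_options():
    opts = await db.test.options()
    print(opts)  # e.g. {'capped': True, 'size': 4096} for a capped collection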

parallel_scan(num_cursors, **kwargs)

Scan this entire collection in parallel.

Returns a list of up to num_cursors cursors that can be iterated concurrently. As long as the collection is not modified during scanning, each document appears once in one of the cursors’ result sets.

For example, to process each document in a collection using some function process_document():

async def process_cursor(cursor):
    while (await cursor.fetch_next):
        process_document(cursor.next_object())

# Get up to 4 cursors.
cursors = await collection.parallel_scan(4)
await asyncio.gather(*[process_cursor(cursor) for cursor in cursors])

# All documents have now been processed.

If process_document() is a coroutine, do await process_document(document).

With a replica set, pass read_preference of SECONDARY_PREFERRED to scan a secondary.

Parameters:
  • num_cursors: the number of cursors to return

Note

Requires server version >= 2.5.5.

coroutine reindex()

Rebuilds all indexes on this collection.

Warning

reindex blocks all other operations (indexes are built in the foreground) and will be slow for large collections.

Note

The write_concern of this collection is automatically applied to this operation when using MongoDB >= 3.4.

coroutine remove(spec_or_id=None, multi=True, **kwargs)

Remove a document(s) from this collection.

DEPRECATED - Use delete_one() or delete_many() instead.

coroutine rename(new_name, **kwargs)

Rename this collection.

If operating in auth mode, client must be authorized as an admin to perform this operation. Raises TypeError if new_name is not an instance of basestring (str in python 3). Raises InvalidName if new_name is not a valid collection name.

Parameters:
  • new_name: new name for this collection
  • **kwargs (optional): additional arguments to the rename command may be passed as keyword arguments to this helper method (i.e. dropTarget=True)

Note

The write_concern of this collection is automatically applied to this operation when using MongoDB >= 3.4.
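
A brief sketch (the target name and dropTarget flag are illustrative):

async def rename_collection():
    # dropTarget=True removes an existing collection with the new name first.
    await db.test.rename('test_archive', dropTarget=True)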

coroutine replace_one(filter, replacement, upsert=False, bypass_document_validation=False, collation=None)

Replace a single document matching the filter.

>>> for doc in db.test.find({}):
...     print(doc)
...
{u'x': 1, u'_id': ObjectId('54f4c5befba5220aa4d6dee7')}
>>> result = db.test.replace_one({'x': 1}, {'y': 1})
>>> result.matched_count
1
>>> result.modified_count
1
>>> for doc in db.test.find({}):
...     print(doc)
...
{u'y': 1, u'_id': ObjectId('54f4c5befba5220aa4d6dee7')}

The upsert option can be used to insert a new document if a matching document does not exist.

>>> result = db.test.replace_one({'x': 1}, {'x': 1}, True)
>>> result.matched_count
0
>>> result.modified_count
0
>>> result.upserted_id
ObjectId('54f11e5c8891e756a6e1abd4')
>>> db.test.find_one({'x': 1})
{u'x': 1, u'_id': ObjectId('54f11e5c8891e756a6e1abd4')}
Parameters:
  • filter: A query that matches the document to replace.
  • replacement: The new document.
  • upsert (optional): If True, perform an insert if no documents match the filter.
  • bypass_document_validation: (optional) If True, allows the write to opt-out of document level validation. Default is False.
  • collation (optional): An instance of Collation. This option is only supported on MongoDB 3.4 and above.
Returns:

An instance of UpdateResult.

Note

bypass_document_validation requires server version >= 3.2

coroutine save(to_save, manipulate=True, check_keys=True, **kwargs)

Save a document in this collection.

DEPRECATED - Use insert_one() or replace_one() instead.

coroutine update(spec, document, upsert=False, manipulate=False, multi=False, check_keys=True, **kwargs)

Update a document(s) in this collection.

Raises TypeError if either spec or document is not an instance of dict or upsert is not an instance of bool.

Write concern options can be passed as keyword arguments, overriding any global defaults. Valid options include w=<int/string>, wtimeout=<int>, j=<bool>, or fsync=<bool>. See the parameter list below for a detailed explanation of these options.

There are many useful update modifiers which can be used when performing updates. For example, here we use the "$set" modifier to modify a field in a matching document:

>>> async def do_update():
...     result = await collection.update({'_id': 10},
...                                      {'$set': {'x': 1}})
Parameters:
  • spec: a dict or SON instance specifying elements which must be present for a document to be updated
  • document: a dict or SON instance specifying the document to be used for the update or (in the case of an upsert) insert - see docs on MongoDB update modifiers
  • upsert (optional): perform an upsert if True
  • manipulate (optional): manipulate the document before updating? If True all instances of SONManipulator added to this Database will be applied to the document before performing the update.
  • check_keys (optional): check if keys in document start with '$' or contain '.', raising InvalidName. Only applies to document replacement, not modification through $ operators.
  • safe (optional): DEPRECATED - Use w instead.
  • multi (optional): update all documents that match spec, rather than just the first matching document. The default value for multi is currently False, but this might eventually change to True. It is recommended that you specify this argument explicitly for all update operations in order to prepare your code for that change.
  • w (optional): (integer or string) If this is a replica set, write operations will block until they have been replicated to the specified number or tagged set of servers. w=<int> always includes the replica set primary (e.g. w=3 means write to the primary and wait until replicated to two secondaries). Passing w=0 disables write acknowledgement and all other write concern options.
  • wtimeout (optional): (integer) Used in conjunction with w. Specify a value in milliseconds to control how long to wait for write propagation to complete. If replication does not complete in the given timeframe, a timeout exception is raised.
  • j (optional): If True block until write operations have been committed to the journal. Ignored if the server is running without journaling.
  • fsync (optional): If True force the database to fsync all files before returning. When used with j the server awaits the next group commit before returning.
Returns:
  • A document (dict) describing the effect of the update.

See general MongoDB documentation: update

coroutine update_many(filter, update, upsert=False, bypass_document_validation=False, collation=None)

Update one or more documents that match the filter.

>>> for doc in db.test.find():
...     print(doc)
...
{u'x': 1, u'_id': 0}
{u'x': 1, u'_id': 1}
{u'x': 1, u'_id': 2}
>>> result = db.test.update_many({'x': 1}, {'$inc': {'x': 3}})
>>> result.matched_count
3
>>> result.modified_count
3
>>> for doc in db.test.find():
...     print(doc)
...
{u'x': 4, u'_id': 0}
{u'x': 4, u'_id': 1}
{u'x': 4, u'_id': 2}
Parameters:
  • filter: A query that matches the documents to update.
  • update: The modifications to apply.
  • upsert (optional): If True, perform an insert if no documents match the filter.
  • bypass_document_validation (optional): If True, allows the write to opt-out of document level validation. Default is False.
  • collation (optional): An instance of Collation. This option is only supported on MongoDB 3.4 and above.
Returns:

An instance of UpdateResult.

Note

bypass_document_validation requires server version >= 3.2

coroutine update_one(filter, update, upsert=False, bypass_document_validation=False, collation=None)

Update a single document matching the filter.

>>> for doc in db.test.find():
...     print(doc)
...
{u'x': 1, u'_id': 0}
{u'x': 1, u'_id': 1}
{u'x': 1, u'_id': 2}
>>> result = db.test.update_one({'x': 1}, {'$inc': {'x': 3}})
>>> result.matched_count
1
>>> result.modified_count
1
>>> for doc in db.test.find():
...     print(doc)
...
{u'x': 4, u'_id': 0}
{u'x': 1, u'_id': 1}
{u'x': 1, u'_id': 2}
Parameters:
  • filter: A query that matches the document to update.
  • update: The modifications to apply.
  • upsert (optional): If True, perform an insert if no documents match the filter.
  • bypass_document_validation: (optional) If True, allows the write to opt-out of document level validation. Default is False.
  • collation (optional): An instance of Collation. This option is only supported on MongoDB 3.4 and above.
Returns:

An instance of UpdateResult.

Note

bypass_document_validation requires server version >= 3.2

with_options(codec_options=None, read_preference=None, write_concern=None, read_concern=None)

Get a clone of this collection changing the specified settings.

>>> coll1.read_preference
Primary()
>>> from pymongo import ReadPreference
>>> coll2 = coll1.with_options(read_preference=ReadPreference.SECONDARY)
>>> coll1.read_preference
Primary()
>>> coll2.read_preference
Secondary(tag_sets=None)
Parameters:
  • codec_options (optional): An instance of CodecOptions. If None (the default) the codec_options of this Collection is used.
  • read_preference (optional): The read preference to use. If None (the default) the read_preference of this Collection is used. See read_preferences for options.
  • write_concern (optional): An instance of WriteConcern. If None (the default) the write_concern of this Collection is used.
  • read_concern (optional): An instance of ReadConcern. If None (the default) the read_concern of this Collection is used.
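
with_options() itself performs no I/O, so it is not awaited. For example, a sketch that clones the collection with a different write concern (w='majority' is illustrative):

from pymongo import WriteConcern

coll2 = coll1.with_options(write_concern=WriteConcern(w='majority'))
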
codec_options

Read only access to the CodecOptions of this instance.

full_name

The full name of this Collection.

The full name is of the form database_name.collection_name.

name

The name of this Collection.

read_concern

Read only access to the read concern of this instance.

New in version 3.2.

read_preference

Read only access to the read preference of this instance.

Changed in version 3.0: The read_preference attribute is now read only.

write_concern

Read only access to the WriteConcern of this instance.

Changed in version 3.0: The write_concern attribute is now read only.