AsyncIOMotorCursor

class motor.motor_asyncio.AsyncIOMotorCursor(cursor, collection)

Don’t construct a cursor yourself; acquire one from methods like MotorCollection.find() or MotorCollection.aggregate().

Note

There is no need to manually close cursors; they are closed by the server after being fully iterated with to_list(), each(), or fetch_next, or automatically closed by the client when the MotorCursor is cleaned up by the garbage collector.

add_option(mask)

Set arbitrary query flags using a bitmask.

To set the tailable flag: cursor.add_option(2)
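For example, a minimal sketch (the value 2 is the tailable bit; tailable cursors are only meaningful on capped collections, and the collection name here is illustrative):

# Sketch: request a tailable cursor by setting the tailable bit.
cursor = capped_collection.find()
cursor.add_option(2)
async for doc in cursor:
    print(doc)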

batch_size(batch_size)

Limits the number of documents returned in one batch. Each batch requires a round trip to the server. It can be adjusted to optimize performance and limit data transfer.

Note

batch_size cannot override MongoDB’s internal limits on the amount of data it will return to the client in a single batch (i.e., if you set batch_size to 1,000,000,000, MongoDB will currently only return 4-16 MB of results per batch).

Raises TypeError if batch_size is not an integer. Raises ValueError if batch_size is less than 0. Raises InvalidOperation if this Cursor has already been used. The last batch_size applied to this cursor takes precedence.
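For example, a short sketch (the collection and the per-document handler are illustrative):

# Sketch: fetch 100 documents per getMore round trip.
cursor = collection.find({'status': 'active'}).batch_size(100)
async for doc in cursor:
    handle(doc)  # hypothetical per-document handler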

Parameters:
  • batch_size: The size of each batch of results requested.
clone()

Get a clone of this cursor.

coroutine close()

Explicitly kill this cursor on the server. Call it like this (in Tornado):

yield cursor.close()
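In native asyncio code the same coroutine is simply awaited:

await cursor.close()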
collation(collation)

Adds a Collation to this query.

This option is only supported on MongoDB 3.4 and above.

Raises TypeError if collation is not an instance of Collation or a dict. Raises InvalidOperation if this Cursor has already been used. Only the last collation applied to this cursor has any effect.
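For example, a sketch of a case-insensitive query (Collation comes from pymongo.collation; strength=2 requests case-insensitive, secondary-strength comparison):

from pymongo.collation import Collation

# Sketch: match 'smith', 'Smith', and 'SMITH' alike.
cursor = collection.find({'name': 'smith'}).collation(
    Collation(locale='en_US', strength=2))
docs = await cursor.to_list(length=100)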

Parameters:
  • collation: An instance of Collation.
comment(comment)

Adds a ‘comment’ to the cursor.

http://docs.mongodb.org/manual/reference/operator/comment/
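A short sketch (the comment string is illustrative):

# Sketch: tag the query so it is easy to find in the server logs and profiler output.
cursor = collection.find({'status': 'active'}).comment('dashboard-active-users')
docs = await cursor.to_list(length=100)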

Parameters:
  • comment: A string to attach to the query to help interpret and trace the operation in the server logs and in profile data.
coroutine count(with_limit_and_skip=False)

DEPRECATED - Get the size of the result set for this query.

The count() method is deprecated and not supported in a transaction. Please use count_documents() instead.

Returns the number of documents in the result set for this query. By default it does not take limit() and skip() into account; set with_limit_and_skip to True if that is the desired behavior. Raises OperationFailure on a database error.

When used with MongoDB >= 2.6, count() uses any hint() applied to the query. In the following example the hint is passed to the count command:

collection.find({'field': 'value'}).hint('field_1').count()

The count() method obeys the read_preference of the Collection instance on which find() was called.

Parameters:
  • with_limit_and_skip (optional): take any limit() or skip() that has been applied to this cursor into account when getting the count

Note

The with_limit_and_skip parameter requires server version >= 1.1.4.
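As a sketch of the supported replacement, count_documents() is a coroutine on the collection and takes the filter directly rather than being chained on a cursor:

# Sketch: count matching documents without a cursor.
n = await collection.count_documents({'field': 'value'})

# skip and limit are passed as options instead of cursor methods.
n = await collection.count_documents({'field': 'value'}, skip=10, limit=5)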

coroutine distinct(key)

Get a list of distinct values for key among all documents in the result set of this query.

Raises TypeError if key is not an instance of basestring (str in Python 3).

The distinct() method obeys the read_preference of the Collection instance on which find() was called.
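A short sketch (field names are illustrative):

# Sketch: distinct values of 'city' among matching documents.
cities = await collection.find({'country': 'US'}).distinct('city')
print(cities)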

Parameters:
  • key: name of key for which we want to get the distinct values

See also

pymongo.collection.Collection.distinct()

each(callback)

Iterates over all the documents for this cursor.

each() returns immediately, and callback is executed asynchronously for each document. callback is passed (None, None) when iteration is complete.

Cancel iteration early by returning False from the callback. (Only False cancels iteration: returning None or 0 does not.)

>>> def inserted(result, error):
...     if error:
...         raise error
...     cursor = collection.find().sort([('_id', 1)])
...     cursor.each(callback=each)
...
>>> def each(result, error):
...     if error:
...         raise error
...     elif result:
...         sys.stdout.write(str(result['_id']) + ', ')
...     else:
...         # Iteration complete
...         IOLoop.current().stop()
...         print('done')
...
>>> collection.insert_many(
...     [{'_id': i} for i in range(5)], callback=inserted)
>>> IOLoop.current().start()
0, 1, 2, 3, 4, done

Note

Unlike other Motor methods, each requires a callback and does not return a Future, so it cannot be used in a coroutine. async for, to_list(), and fetch_next are much easier to use.

Parameters:
  • callback: function taking (document, error)
coroutine explain()

Returns an explain plan record for this cursor.
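A short sketch:

# Sketch: fetch the explain output for a query.
plan = await collection.find({'field': 'value'}).explain()
print(plan.get('queryPlanner', plan))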

Note

Starting with MongoDB 3.2 explain() uses the default verbosity mode of the explain command, allPlansExecution. To use a different verbosity use command() to run the explain command directly.

See also

The MongoDB documentation on

explain

hint(index)

Adds a ‘hint’, telling Mongo the proper index to use for the query.

Judicious use of hints can greatly improve query performance. When doing a query on multiple fields (at least one of which is indexed) pass the indexed field as a hint to the query. Raises OperationFailure if the provided hint requires an index that does not exist on this collection, and raises InvalidOperation if this cursor has already been used.

index should be an index as passed to create_index() (e.g. [('field', ASCENDING)]) or the name of the index. If index is None any existing hint for this query is cleared. The last hint applied to this cursor takes precedence over all others.
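For example, a sketch assuming an ascending index on 'field' exists:

from pymongo import ASCENDING

# Sketch: force the query to use the single-field index on 'field'.
cursor = collection.find({'field': 'value'}).hint([('field', ASCENDING)])
docs = await cursor.to_list(length=100)

# The index name works too:
cursor = collection.find({'field': 'value'}).hint('field_1')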

Parameters:
  • index: index to hint on (as an index specifier)
limit(limit)

Limits the number of results to be returned by this cursor.

Raises TypeError if limit is not an integer. Raises InvalidOperation if this Cursor has already been used. The last limit applied to this cursor takes precedence. A limit of 0 is equivalent to no limit.

Parameters:
  • limit: the number of results to return

See also

The MongoDB documentation on

limit

max(spec)

Adds the max operator, which specifies an upper bound for a specific index.

When using max, hint() should also be configured to ensure the query uses the expected index; starting in MongoDB 4.2, hint() will be required.
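A sketch combining max(), min(), and the hint() that newer servers require (assuming an ascending index on 'price'):

from pymongo import ASCENDING

# Sketch: restrict the index scan to 10 <= price < 100 on the price index.
cursor = (collection.find()
          .min([('price', 10)])    # inclusive lower bound
          .max([('price', 100)])   # exclusive upper bound
          .hint([('price', ASCENDING)]))
docs = await cursor.to_list(length=100)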

Parameters:
  • spec: a list of (field, limit) pairs specifying the exclusive upper bound for all keys of a specific index, in order.
max_await_time_ms(max_await_time_ms)

Specifies a time limit for a getMore operation on a TAILABLE_AWAIT cursor. For all other types of cursor max_await_time_ms is ignored.

Raises TypeError if max_await_time_ms is not an integer or None. Raises InvalidOperation if this Cursor has already been used.
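A sketch with a tailable-await cursor on a capped collection (CursorType comes from pymongo.cursor; the collection name is illustrative):

from pymongo.cursor import CursorType

# Sketch: wait at most one second per getMore before the server returns an empty batch.
cursor = capped_collection.find(cursor_type=CursorType.TAILABLE_AWAIT)
cursor.max_await_time_ms(1000)
async for doc in cursor:
    print(doc)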

Note

max_await_time_ms requires server version >= 3.2

Parameters:
  • max_await_time_ms: the time limit after which the operation is aborted
max_scan(max_scan)

DEPRECATED - Limit the number of documents to scan when performing the query.

Raises InvalidOperation if this cursor has already been used. Only the last max_scan() applied to this cursor has any effect.

Parameters:
  • max_scan: the maximum number of documents to scan
max_time_ms(max_time_ms)

Specifies a time limit for a query operation. If the specified time is exceeded, the operation will be aborted and ExecutionTimeout is raised. If max_time_ms is None no limit is applied.

Raises TypeError if max_time_ms is not an integer or None. Raises InvalidOperation if this Cursor has already been used.
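A short sketch:

from pymongo.errors import ExecutionTimeout

# Sketch: abort the query server-side if it runs longer than 500 ms.
try:
    cursor = collection.find({'field': 'value'}).max_time_ms(500)
    docs = await cursor.to_list(length=100)
except ExecutionTimeout:
    print('query exceeded its time limit')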

Parameters:
  • max_time_ms: the time limit after which the operation is aborted
min(spec)

Adds the min operator, which specifies a lower bound for a specific index.

When using min, hint() should also be configured to ensure the query uses the expected index; starting in MongoDB 4.2, hint() will be required. See the sketch under max() above.

Parameters:
  • spec: a list of (field, limit) pairs specifying the inclusive lower bound for all keys of a specific index, in order.
next_object()

Get a document from the most recently fetched batch, or None. See fetch_next.

remove_option(mask)

Unset arbitrary query flags using a bitmask.

To unset the tailable flag: cursor.remove_option(2)

rewind()

Rewind this cursor to its unevaluated state.
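A short sketch; after rewinding, the same cursor re-executes the query from the beginning:

cursor = collection.find().sort([('_id', 1)])
first_pass = await cursor.to_list(length=10)
cursor.rewind()
second_pass = await cursor.to_list(length=10)  # re-runs the query from the start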

skip(skip)

Skips the first skip results of this cursor.

Raises TypeError if skip is not an integer. Raises ValueError if skip is less than 0. Raises InvalidOperation if this Cursor has already been used. The last skip applied to this cursor takes precedence.
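A sketch of skip() and limit() used together for simple paging (page and page_size are illustrative names):

# Sketch: fetch page 3 with 20 documents per page.
page, page_size = 3, 20
cursor = (collection.find()
          .sort([('_id', 1)])
          .skip((page - 1) * page_size)
          .limit(page_size))
docs = await cursor.to_list(length=page_size)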

Parameters:
  • skip: the number of results to skip
sort(key_or_list, direction=None)

Sorts this cursor’s results.

Pass a field name and a direction, either ASCENDING or DESCENDING:

>>> @gen.coroutine
... def f():
...     cursor = collection.find().sort('_id', pymongo.DESCENDING)
...     docs = yield cursor.to_list(None)
...     print([d['_id'] for d in docs])
...
>>> IOLoop.current().run_sync(f)
[4, 3, 2, 1, 0]

To sort by multiple fields, pass a list of (key, direction) pairs:

>>> @gen.coroutine
... def f():
...     cursor = collection.find().sort([
...         ('field1', pymongo.ASCENDING),
...         ('field2', pymongo.DESCENDING)])
...
...     docs = yield cursor.to_list(None)
...     print([(d['field1'], d['field2']) for d in docs])
...
>>> IOLoop.current().run_sync(f)
[(0, 4), (0, 2), (0, 0), (1, 3), (1, 1)]

Beginning with MongoDB version 2.6, text search results can be sorted by relevance:

>>> @gen.coroutine
... def f():
...     cursor = collection.find({
...         '$text': {'$search': 'some words'}},
...         {'score': {'$meta': 'textScore'}})
...
...     # Sort by 'score' field.
...     cursor.sort([('score', {'$meta': 'textScore'})])
...     docs = yield cursor.to_list(None)
...     for doc in docs:
...         print('%.1f %s' % (doc['score'], doc['field']))
...
>>> IOLoop.current().run_sync(f)
1.5 words about some words
1.0 words

Raises InvalidOperation if this cursor has already been used. Only the last sort() applied to this cursor has any effect.

Parameters:
  • key_or_list: a single key or a list of (key, direction) pairs specifying the keys to sort on
  • direction (optional): only used if key_or_list is a single key; if not given, ASCENDING is assumed
coroutine to_list(length, callback=None)

Get a list of documents.

>>> from motor.motor_tornado import MotorClient
>>> collection = MotorClient().test.test_collection
>>>
>>> @gen.coroutine
... def f():
...     yield collection.insert_many([{'_id': i} for i in range(4)])
...     cursor = collection.find().sort([('_id', 1)])
...     docs = yield cursor.to_list(length=2)
...     while docs:
...         print(docs)
...         docs = yield cursor.to_list(length=2)
...
...     print('done')
...
>>> ioloop.IOLoop.current().run_sync(f)
[{'_id': 0}, {'_id': 1}]
[{'_id': 2}, {'_id': 3}]
done
Parameters:
  • length: maximum number of documents to return for this call, or None
  • callback (optional): function taking (documents, error)

If a callback is passed, returns None, else returns a Future.

Changed in version 0.2: callback must be passed as a keyword argument, like to_list(10, callback=callback), and the length parameter is no longer optional.

where(code)

Adds a $where clause to this query.

The code argument must be an instance of basestring (str in Python 3) or Code containing a JavaScript expression. This expression will be evaluated for each document scanned. Only those documents for which the expression evaluates to true will be returned as results. The keyword this refers to the object currently being scanned.

Raises TypeError if code is not an instance of basestring (str in Python 3). Raises InvalidOperation if this Cursor has already been used. Only the last call to where() applied to a Cursor has any effect.
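A short sketch (field names are illustrative):

# Sketch: keep only documents whose 'credits' exceed their 'debits'.
cursor = collection.find().where('this.credits > this.debits')
docs = await cursor.to_list(length=100)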

Parameters:
  • code: JavaScript expression to use as a filter
address

The (host, port) of the server used, or None.

Changed in version 3.0: Renamed from “conn_id”.

alive

Does this cursor have the potential to return more data?

This is mostly useful with tailable cursors since they will stop iterating even though they may return more results in the future.

With regular cursors, simply iterate with an async for loop instead of checking alive:

async for doc in collection.find():
    print(doc)

Note

Even if alive is True, next() can raise StopIteration. alive can also be True while iterating a cursor from a failed server. In this case alive will return False after next() fails to retrieve the next batch of results from the server.

cursor_id

Returns the id of the cursor.

Useful if you need to manage cursor ids and want to handle killing cursors manually using kill_cursors().

New in version 2.2.

fetch_next

A Future used with gen.coroutine to asynchronously retrieve the next document in the result set, fetching a batch of documents from the server if necessary. Resolves to False if there are no more documents, otherwise next_object() is guaranteed to return a document.

>>> @gen.coroutine
... def f():
...     yield collection.insert_many([{'_id': i} for i in range(5)])
...     cursor = collection.find().sort([('_id', 1)])
...     while (yield cursor.fetch_next):
...         doc = cursor.next_object()
...         sys.stdout.write(str(doc['_id']) + ', ')
...     print('done')
...
>>> IOLoop.current().run_sync(f)
0, 1, 2, 3, 4, done

While it appears that fetch_next retrieves each document from the server individually, the cursor actually fetches documents efficiently in large batches.

In Python 3.5 and newer, cursors can be iterated elegantly and very efficiently in native coroutines with async for:

>>> async def f():
...     async for doc in collection.find():
...         sys.stdout.write(str(doc['_id']) + ', ')
...     print('done')
...
>>> IOLoop.current().run_sync(f)
0, 1, 2, 3, 4, done
session

The cursor’s ClientSession, or None.

New in version 3.6.