Motor GridFS Classes

Store blobs of data in GridFS.

class motor.motor_tornado.MotorGridFSBucket(database, bucket_name='fs', chunk_size_bytes=261120, write_concern=None, read_preference=None, collection=None)

Create a handle to a GridFS bucket.

Raises ConfigurationError if write_concern is not acknowledged.

This class conforms to the GridFS API Spec for MongoDB drivers.

Parameters:
  • database: database to use.

  • bucket_name (optional): The name of the bucket. Defaults to ‘fs’.

  • chunk_size_bytes (optional): The chunk size in bytes. Defaults to 255KB.

  • write_concern (optional): The WriteConcern to use. If None (the default) db.write_concern is used.

  • read_preference (optional): The read preference to use. If None (the default) db.read_preference is used.

  • collection (optional): Deprecated, an alias for bucket_name that exists solely to provide backwards compatibility.

Changed in version 3.0: Removed support for the disable_md5 parameter (to match the GridFSBucket class in PyMongo).

Changed in version 2.1: Added support for the bucket_name, chunk_size_bytes, write_concern, and read_preference parameters. Deprecated the collection parameter which is now an alias to bucket_name (to match the GridFSBucket class in PyMongo).

New in version 1.0.

See also

The MongoDB documentation on gridfs.
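
For example, to create a bucket handle with a non-default name and chunk size (a sketch; the "photos" name and 4 MB chunk size are illustrative values):

async def create_bucket():
    my_db = AsyncIOMotorClient().test
    fs = AsyncIOMotorGridFSBucket(my_db,
                                  bucket_name="photos",
                                  chunk_size_bytes=4 * 1024 * 1024)
    file_id = await fs.upload_from_stream("test_file",
                                          b"data I want to store!")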

coroutine delete(file_id: Any, session: ClientSession | None = None) None

Delete a file’s metadata and data chunks from a GridFS bucket:

async def delete():
    my_db = AsyncIOMotorClient().test
    fs = AsyncIOMotorGridFSBucket(my_db)
    # Get _id of file to delete
    file_id = await fs.upload_from_stream("test_file",
                                          b"data I want to store!")
    await fs.delete(file_id)

Raises NoFile if no file with file_id exists.

Parameters:
  • file_id: The _id of the file to be deleted.

  • session (optional): a ClientSession, created with start_session().

coroutine download_to_stream(file_id: Any, destination: Any, session: ClientSession | None = None) None

Downloads the contents of the stored file specified by file_id and writes the contents to destination:

async def download():
    my_db = AsyncIOMotorClient().test
    fs = AsyncIOMotorGridFSBucket(my_db)
    # Get _id of file to read
    file_id = await fs.upload_from_stream("test_file",
                                          b"data I want to store!")
    # Get file to write to
    file = open('myfile','wb+')
    await fs.download_to_stream(file_id, file)
    file.seek(0)
    contents = file.read()

Raises NoFile if no file with file_id exists.

Parameters:
  • file_id: The _id of the file to be downloaded.

  • destination: a file-like object implementing write().

  • session (optional): a ClientSession, created with start_session().

coroutine download_to_stream_by_name(filename: str, destination: Any, revision: int = -1, session: ClientSession | None = None) None

Write the contents of filename (with optional revision) to destination.

For example:

async def download_by_name():
    my_db = AsyncIOMotorClient().test
    fs = AsyncIOMotorGridFSBucket(my_db)
    # Get file to write to
    file = open('myfile','wb')
    await fs.download_to_stream_by_name("test_file", file)

Raises NoFile if no such version of that file exists.

Raises ValueError if filename is not a string.

Parameters:
  • filename: The name of the file to read from.

  • destination: A file-like object that implements write().

  • revision (optional): Which revision (documents with the same filename and different uploadDate) of the file to retrieve. Defaults to -1 (the most recent revision).

  • session (optional): a ClientSession, created with start_session().

Note:

Revision numbers are defined as follows:

  • 0 = the original stored file

  • 1 = the first revision

  • 2 = the second revision

  • etc…

  • -2 = the second most recent revision

  • -1 = the most recent revision
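
For instance, once "test_file" has been uploaded more than once as in the example above, revision 0 retrieves the original upload (a sketch):

async def download_original():
    my_db = AsyncIOMotorClient().test
    fs = AsyncIOMotorGridFSBucket(my_db)
    with open('myfile', 'wb') as file:
        # revision=0 selects the original upload; the default -1
        # selects the most recent.
        await fs.download_to_stream_by_name("test_file", file,
                                            revision=0)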

find(*args, **kwargs)

Find and return the files collection documents that match filter.

Returns a cursor that iterates across files matching arbitrary queries on the files collection. Can be combined with other modifiers for additional control.

For example:

cursor = bucket.find({"filename": "lisa.txt"}, no_cursor_timeout=True)
async for grid_out in cursor:
    data = await grid_out.read()

This iterates through all versions of “lisa.txt” stored in GridFS. Note that setting no_cursor_timeout to True may be important to prevent the cursor from timing out during long multi-file processing work.

As another example, the call:

most_recent_three = fs.find().sort("uploadDate", -1).limit(3)

would return a cursor to the three most recently uploaded files in GridFS.

Follows a similar interface to find() in MotorCollection.

Parameters:
  • filter: Search query.

  • batch_size (optional): The number of documents to return per batch.

  • limit (optional): The maximum number of documents to return.

  • no_cursor_timeout (optional): The server normally times out idle cursors after an inactivity period (10 minutes) to prevent excess memory use. Set this option to True to prevent that.

  • skip (optional): The number of documents to skip before returning.

  • sort (optional): The order by which to sort results. Defaults to None.

  • session (optional): a ClientSession, created with start_session().

If a ClientSession is passed to find(), all returned MotorGridOut instances are associated with that session.

Changed in version 1.2: Added session parameter.

coroutine open_download_stream(file_id: Any, session: ClientSession | None = None) GridOut

Opens a stream to read the contents of the stored file specified by file_id:

async def download_stream():
    my_db = AsyncIOMotorClient().test
    fs = AsyncIOMotorGridFSBucket(my_db)
    # get _id of file to read.
    file_id = await fs.upload_from_stream("test_file",
                                          b"data I want to store!")
    grid_out = await fs.open_download_stream(file_id)
    contents = await grid_out.read()

Raises NoFile if no file with file_id exists.

Parameters:
  • file_id: The _id of the file to be downloaded.

  • session (optional): a ClientSession, created with start_session().

Returns an AsyncIOMotorGridOut.

coroutine open_download_stream_by_name(filename: str, revision: int = -1, session: ClientSession | None = None) GridOut

Opens a stream to read the contents of filename and optional revision:

async def download_by_name():
    my_db = AsyncIOMotorClient().test
    fs = AsyncIOMotorGridFSBucket(my_db)
    # Upload a file under a known name.
    await fs.upload_from_stream("test_file",
                                b"data I want to store!")
    grid_out = await fs.open_download_stream_by_name("test_file")
    contents = await grid_out.read()

Raises NoFile if no such version of that file exists.

Raises ValueError if filename is not a string.

Parameters:
  • filename: The name of the file to read from.

  • revision (optional): Which revision (documents with the same filename and different uploadDate) of the file to retrieve. Defaults to -1 (the most recent revision).

  • session (optional): a ClientSession, created with start_session().

Returns an AsyncIOMotorGridOut.

Note:

Revision numbers are defined as follows:

  • 0 = the original stored file

  • 1 = the first revision

  • 2 = the second revision

  • etc…

  • -2 = the second most recent revision

  • -1 = the most recent revision

open_upload_stream(filename: str, chunk_size_bytes: int | None = None, metadata: Mapping[str, Any] | None = None, session: ClientSession | None = None) GridIn

Opens a stream for writing.

Specify the filename, and add any additional information in the metadata field of the file document or modify the chunk size:

async def upload():
    my_db = AsyncIOMotorClient().test
    fs = AsyncIOMotorGridFSBucket(my_db)
    grid_in = fs.open_upload_stream(
        "test_file", metadata={"contentType": "text/plain"})

    await grid_in.write(b"data I want to store!")
    await grid_in.close()  # uploaded on close

Returns an instance of AsyncIOMotorGridIn.

Raises NoFile if no such version of that file exists. Raises ValueError if filename is not a string.

In a native coroutine, the “async with” statement calls close() automatically:

async def upload():
    my_db = AsyncIOMotorClient().test
    fs = AsyncIOMotorGridFSBucket(my_db)
    async with await fs.open_upload_stream(
        "test_file", metadata={"contentType": "text/plain"}) as gridin:
        await gridin.write(b'First part\n')
        await gridin.write(b'Second part')

Parameters:
  • filename: The name of the file to upload.

  • chunk_size_bytes (optional): The number of bytes per chunk of this file. Defaults to the chunk_size_bytes in AsyncIOMotorGridFSBucket.

  • metadata (optional): User data for the ‘metadata’ field of the files collection document. If not provided the metadata field will be omitted from the files collection document.

  • session (optional): a ClientSession, created with start_session().

open_upload_stream_with_id(file_id: Any, filename: str, chunk_size_bytes: int | None = None, metadata: Mapping[str, Any] | None = None, session: ClientSession | None = None) GridIn

Opens a stream for writing.

Specify the file_id and filename, and add any additional information in the metadata field of the file document, or modify the chunk size:

async def upload():
    my_db = AsyncIOMotorClient().test
    fs = AsyncIOMotorGridFSBucket(my_db)
    grid_in = fs.open_upload_stream_with_id(
        ObjectId(), "test_file",
        metadata={"contentType": "text/plain"})

    await grid_in.write(b"data I want to store!")
    await grid_in.close()  # uploaded on close

Returns an instance of AsyncIOMotorGridIn.

Raises NoFile if no such version of that file exists. Raises ValueError if filename is not a string.

Parameters:
  • file_id: The id to use for this file. The id must not have already been used for another file.

  • filename: The name of the file to upload.

  • chunk_size_bytes (optional): The number of bytes per chunk of this file. Defaults to the chunk_size_bytes in AsyncIOMotorGridFSBucket.

  • metadata (optional): User data for the ‘metadata’ field of the files collection document. If not provided the metadata field will be omitted from the files collection document.

  • session (optional): a ClientSession, created with start_session().

coroutine rename(file_id: Any, new_filename: str, session: ClientSession | None = None) None

Renames the stored file with the specified file_id.

For example:

async def rename():
    my_db = AsyncIOMotorClient().test
    fs = AsyncIOMotorGridFSBucket(my_db)
    # get _id of file to read.
    file_id = await fs.upload_from_stream("test_file",
                                          b"data I want to store!")

    await fs.rename(file_id, "new_test_name")

Raises NoFile if no file with file_id exists.

Parameters:
  • file_id: The _id of the file to be renamed.

  • new_filename: The new name of the file.

  • session (optional): a ClientSession, created with start_session().

coroutine upload_from_stream(filename: str, source: Any, chunk_size_bytes: int | None = None, metadata: Mapping[str, Any] | None = None, session: ClientSession | None = None) ObjectId

Uploads a user file to a GridFS bucket.

Reads the contents of the user file from source and uploads it to the file filename. Source can be a string or file-like object. For example:

async def upload_from_stream():
    my_db = AsyncIOMotorClient().test
    fs = AsyncIOMotorGridFSBucket(my_db)
    file_id = await fs.upload_from_stream(
        "test_file",
        b"data I want to store!",
        metadata={"contentType": "text/plain"})

Raises NoFile if no such version of that file exists. Raises ValueError if filename is not a string.

Parameters:
  • filename: The name of the file to upload.

  • source: The source stream of the content to be uploaded. Must be a file-like object that implements read() or a string.

  • chunk_size_bytes (optional): The number of bytes per chunk of this file. Defaults to the chunk_size_bytes of AsyncIOMotorGridFSBucket.

  • metadata (optional): User data for the ‘metadata’ field of the files collection document. If not provided the metadata field will be omitted from the files collection document.

  • session (optional): a ClientSession, created with start_session().

Returns the _id of the uploaded file.

coroutine upload_from_stream_with_id(file_id: Any, filename: str, source: Any, chunk_size_bytes: int | None = None, metadata: Mapping[str, Any] | None = None, session: ClientSession | None = None) None

Uploads a user file to a GridFS bucket with a custom file id.

Reads the contents of the user file from source and uploads it to the file filename. Source can be a string or file-like object. For example:

async def upload_from_stream_with_id():
    my_db = AsyncIOMotorClient().test
    fs = AsyncIOMotorGridFSBucket(my_db)
    file_id = await fs.upload_from_stream_with_id(
        ObjectId(),
        "test_file",
        b"data I want to store!",
        metadata={"contentType": "text/plain"})

Raises NoFile if no such version of that file exists. Raises ValueError if filename is not a string.

Parameters:
  • file_id: The id to use for this file. The id must not have already been used for another file.

  • filename: The name of the file to upload.

  • source: The source stream of the content to be uploaded. Must be a file-like object that implements read() or a string.

  • chunk_size_bytes (optional): The number of bytes per chunk of this file. Defaults to the chunk_size_bytes of AsyncIOMotorGridFSBucket.

  • metadata (optional): User data for the ‘metadata’ field of the files collection document. If not provided the metadata field will be omitted from the files collection document.

  • session (optional): a ClientSession, created with start_session().

class motor.motor_tornado.MotorGridIn(root_collection, delegate=None, session=None, **kwargs)

Class to write data to GridFS. Application developers should not generally need to instantiate this class - see open_upload_stream().

Any of the file level options specified in the GridFS Spec may be passed as keyword arguments. Any additional keyword arguments will be set as additional fields on the file document. Valid keyword arguments include:

  • "_id": unique ID for this file (default: ObjectId) - this "_id" must not have already been used for another file

  • "filename": human name for the file

  • "contentType" or "content_type": valid mime-type for the file

  • "chunkSize" or "chunk_size": size of each of the chunks, in bytes (default: 256 kb)

  • "encoding": encoding used for this file. In Python 2, any unicode that is written to the file will be converted to a str. In Python 3, any str that is written to the file will be converted to bytes.

Parameters:
  • root_collection: root collection to write to

  • session (optional): a ClientSession to use for all commands

  • **kwargs (optional): file level options (see above)

Changed in version 3.0: Removed support for the disable_md5 parameter (to match the GridIn class in PyMongo).

Changed in version 0.2: open method removed, no longer needed.

coroutine abort() None

Remove all chunks/files that may have been uploaded and close.

coroutine close() None

Flush the file and close it.

A closed file cannot be written any more. Calling close() more than once is allowed.

coroutine set(name: str, value: Any) None

Set an arbitrary metadata attribute on the file. Stores value on the server as a key-value pair within the file document once the file is closed. If the file is already closed, calling set() will immediately update the file document on the server.

Metadata set on the file appears as attributes on a MotorGridOut object created from the file.
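
For example (a sketch building on the open_upload_stream() example above; the "category" attribute is illustrative):

async def set_attribute():
    my_db = AsyncIOMotorClient().test
    fs = AsyncIOMotorGridFSBucket(my_db)
    grid_in = fs.open_upload_stream("test_file")
    await grid_in.write(b"data I want to store!")
    # "category" becomes a key in the files collection document.
    await grid_in.set("category", "example")
    await grid_in.close()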

Parameters:
  • name: Name of the attribute, will be stored as a key in the file document on the server

  • value: Value of the attribute

coroutine write(data: Any) None

Write data to the file. There is no return value.

data can be either a string of bytes or a file-like object (implementing read()). If the file has an encoding attribute, data can also be a str instance, which will be encoded as encoding before being written.

Due to buffering, the data may not actually be written to the database until the close() method is called. Raises ValueError if this file is already closed. Raises TypeError if data is not an instance of bytes, a file-like object, or an instance of str. Unicode data is only allowed if the file has an encoding attribute.

Parameters:
  • data: string of bytes or file-like object to be written to the file

coroutine writelines(sequence: Iterable[Any]) None

Write a sequence of strings to the file.

Does not add separators.
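
A sketch, reusing a grid_in handle like the one from open_upload_stream():

# Items are written as-is; include newlines yourself if needed.
await grid_in.writelines([b'first line\n', b'second line\n'])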

property chunk_size

Chunk size for this file.

This attribute is read-only.

property closed

Is this file closed?

property content_type

DEPRECATED, will be removed in PyMongo 5.0. Mime-type for this file.

property filename

Name of this file.

property length

Length (in bytes) of this file.

This attribute is read-only and can only be read after close() has been called.

property name

Alias for filename.

property read

A method on the wrapped PyMongo object that does no I/O and can be called synchronously.

property readable

A method on the wrapped PyMongo object that does no I/O and can be called synchronously.

property seekable

A method on the wrapped PyMongo object that does no I/O and can be called synchronously.

property upload_date

Date that this file was uploaded.

This attribute is read-only and can only be read after close() has been called.

property writeable

A method on the wrapped PyMongo object that does no I/O and can be called synchronously.

class motor.motor_tornado.MotorGridOut(root_collection, file_id=None, file_document=None, delegate=None, session=None)

Class to read data out of GridFS. Application developers should not generally need to instantiate this class directly - see open_download_stream().

coroutine open()

Retrieve this file’s attributes from the server.

Returns a Future.

Changed in version 2.0: No longer accepts a callback argument.

Changed in version 0.2: MotorGridOut now opens itself on demand, calling open explicitly is rarely needed.

coroutine read(size: int = -1) bytes

Read at most size bytes from the file (less if there isn’t enough data).

The bytes are returned as an instance of bytes. If size is negative or omitted, all data is read.

Parameters:
  • size (optional): the number of bytes to read

coroutine readchunk() bytes

Reads a chunk at a time. If the current position is within a chunk the remainder of the chunk is returned.

coroutine readline(size: int = -1) bytes

Read one line or up to size bytes from the file.
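
A sketch that prints a stored file line by line, assuming grid_out was obtained from open_download_stream():

async def read_lines(grid_out):
    # readline() returns b'' at end of file, which ends the loop.
    line = await grid_out.readline()
    while line:
        print(line)
        line = await grid_out.readline()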

Parameters:
  • size (optional): the maximum number of bytes to read

async stream_to_handler(request_handler)

Write the contents of this file to a tornado.web.RequestHandler. This method calls flush() on the RequestHandler, so ensure all headers have already been set. For a more complete example see the implementation of GridFSHandler.

class FileHandler(tornado.web.RequestHandler):
    async def get(self, filename):
        db = self.settings["db"]
        fs = motor.MotorGridFSBucket(db)
        try:
            gridout = await fs.open_download_stream_by_name(filename)
        except gridfs.NoFile:
            raise tornado.web.HTTPError(404)

        self.set_header("Content-Type", gridout.content_type)
        self.set_header("Content-Length", gridout.length)
        await gridout.stream_to_handler(self)
        self.finish()

See also

Tornado RequestHandler

property aliases

DEPRECATED, will be removed in PyMongo 5.0. List of aliases for this file.

This attribute is read-only.

property chunk_size

Chunk size for this file.

This attribute is read-only.

property close

Make GridOut more generically file-like.

property content_type

DEPRECATED, will be removed in PyMongo 5.0. Mime-type for this file.

This attribute is read-only.

property filename

Name of this file.

This attribute is read-only.

property length

Length (in bytes) of this file.

This attribute is read-only.

property metadata

Metadata attached to this file.

This attribute is read-only.

property name

Alias for filename.

This attribute is read-only.

property readable

A method on the wrapped PyMongo object that does no I/O and can be called synchronously.

property seek

Set the current position of this file.
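
Because seek() does no I/O, it is called without await. A sketch:

async def reread(grid_out):
    first = await grid_out.read()
    grid_out.seek(0)  # synchronous; returns the new position, 0
    assert await grid_out.read() == first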

Parameters:
  • pos: the position (or offset if using relative positioning) to seek to

  • whence (optional): where to seek from. os.SEEK_SET (0) for absolute file positioning, os.SEEK_CUR (1) to seek relative to the current position, os.SEEK_END (2) to seek relative to the file’s end.

Changed in version 4.1: The method now returns the new position in the file, to conform to the behavior of io.IOBase.seek().

property seekable

A method on the wrapped PyMongo object that does no I/O and can be called synchronously.

property tell

Return the current position of this file.

property upload_date

Date that this file was first uploaded.

This attribute is read-only.

property write

A method on the wrapped PyMongo object that does no I/O and can be called synchronously.

class motor.motor_tornado.MotorGridOutCursor(cursor, collection)

Don’t construct a cursor yourself, but acquire one from methods like MotorCollection.find() or MotorCollection.aggregate().

Note

There is no need to manually close cursors; they are closed by the server after being fully iterated with to_list(), each(), or async for, or automatically closed by the client when the MotorCursor is cleaned up by the garbage collector.

allow_disk_use(allow_disk_use: bool) Cursor[_DocumentType]

Specifies whether MongoDB can use temporary disk files while processing a blocking sort operation.

Raises TypeError if allow_disk_use is not a boolean.

Note

allow_disk_use requires server version >= 4.4
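
For example (a sketch; the sort on the length field is illustrative):

cursor = bucket.find().sort("length", -1).allow_disk_use(True)
async for grid_out in cursor:
    print(grid_out.filename)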

Parameters:
  • allow_disk_use: if True, MongoDB may use temporary disk files to store data exceeding the system memory limit while processing a blocking sort operation.

clone()

Get a clone of this cursor.

async close()

Explicitly kill this cursor on the server.

Call like:

await cursor.close()

collation(collation: _CollationIn | None) Cursor[_DocumentType]

Adds a Collation to this query.

Raises TypeError if collation is not an instance of Collation or a dict. Raises InvalidOperation if this Cursor has already been used. Only the last collation applied to this cursor has any effect.
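
A sketch, assuming a cursor from find(); the "en_US" locale is illustrative:

from pymongo.collation import Collation

# Case-insensitive match on filename (strength=2 ignores case).
cursor = bucket.find({"filename": "lisa.txt"}).collation(
    Collation(locale="en_US", strength=2))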

Parameters:
  • collation: An instance of Collation.
comment(comment: Any) Cursor[_DocumentType]

Adds a ‘comment’ to the cursor.

http://mongodb.com/docs/manual/reference/operator/comment/

Parameters:
  • comment: A string to attach to the query to help interpret and trace the operation in the server logs and in profile data.

coroutine distinct(key: str) list

Get a list of distinct values for key among all documents in the result set of this query.

Raises TypeError if key is not an instance of str.

The distinct() method obeys the read_preference of the Collection instance on which find() was called.
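
For example (a sketch; the query is illustrative):

# Distinct filenames among stored files larger than 1 KB.
names = await bucket.find({"length": {"$gt": 1024}}).distinct("filename")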

Parameters:
  • key: name of key for which we want to get the distinct values

each(callback)

Iterates over all the documents for this cursor.

each() returns immediately, and callback is executed asynchronously for each document. callback is passed (None, None) when iteration is complete.

Cancel iteration early by returning False from the callback. (Only False cancels iteration: returning None or 0 does not.)

>>> def each(result, error):
...     if error:
...         raise error
...     elif result:
...         sys.stdout.write(str(result["_id"]) + ", ")
...     else:
...         # Iteration complete
...         IOLoop.current().stop()
...         print("done")
...
>>> cursor = collection.find().sort([("_id", 1)])
>>> cursor.each(callback=each)
>>> IOLoop.current().start()
0, 1, 2, 3, 4, done

Note

Unlike other Motor methods, each requires a callback and does not return a Future, so it cannot be used in a coroutine. async for and to_list() are much easier to use.

Parameters:
  • callback: function taking (document, error)

coroutine explain() _DocumentType

Returns an explain plan record for this cursor.

Note

This method uses the default verbosity mode of the explain command, allPlansExecution. To use a different verbosity use command() to run the explain command directly.

See also

The MongoDB documentation on explain.

hint(index: str | Sequence[str | Tuple[str, int | str | Mapping[str, Any]]] | Mapping[str, Any] | None) Cursor[_DocumentType]

Adds a ‘hint’, telling Mongo the proper index to use for the query.

Judicious use of hints can greatly improve query performance. When doing a query on multiple fields (at least one of which is indexed) pass the indexed field as a hint to the query. Raises OperationFailure if the provided hint requires an index that does not exist on this collection, and raises InvalidOperation if this cursor has already been used.

index should be an index as passed to create_index() (e.g. [('field', ASCENDING)]) or the name of the index. If index is None any existing hint for this query is cleared. The last hint applied to this cursor takes precedence over all others.
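
A sketch that hints the index GridFS itself creates on the files collection:

from pymongo import ASCENDING

cursor = bucket.find({"filename": "lisa.txt"}).hint(
    [("filename", ASCENDING), ("uploadDate", ASCENDING)])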

Parameters:
  • index: index to hint on (as an index specifier)

limit(limit: int) Cursor[_DocumentType]

Limits the number of results to be returned by this cursor.

Raises TypeError if limit is not an integer. Raises InvalidOperation if this Cursor has already been used. The last limit applied to this cursor takes precedence. A limit of 0 is equivalent to no limit.

Parameters:
  • limit: the number of results to return

See also

The MongoDB documentation on limit.

max(spec: Sequence[str | Tuple[str, int | str | Mapping[str, Any]]] | Mapping[str, Any]) Cursor[_DocumentType]

Adds max operator that specifies upper bound for specific index.

When using max, hint() should also be configured to ensure the query uses the expected index; starting in MongoDB 4.2, hint() will be required.

Parameters:
  • spec: a list of field, limit pairs specifying the exclusive upper bound for all keys of a specific index in order.

max_await_time_ms(max_await_time_ms: int | None) Cursor[_DocumentType]

Specifies a time limit for a getMore operation on a TAILABLE_AWAIT cursor. For all other types of cursor max_await_time_ms is ignored.

Raises TypeError if max_await_time_ms is not an integer or None. Raises InvalidOperation if this Cursor has already been used.

Note

max_await_time_ms requires server version >= 3.2

Parameters:
  • max_await_time_ms: the time limit after which the operation is aborted

max_scan(max_scan: int | None) Cursor[_DocumentType]

DEPRECATED - Limit the number of documents to scan when performing the query.

Raises InvalidOperation if this cursor has already been used. Only the last max_scan() applied to this cursor has any effect.

Parameters:
  • max_scan: the maximum number of documents to scan

max_time_ms(max_time_ms: int | None) Cursor[_DocumentType]

Specifies a time limit for a query operation. If the specified time is exceeded, the operation will be aborted and ExecutionTimeout is raised. If max_time_ms is None no limit is applied.

Raises TypeError if max_time_ms is not an integer or None. Raises InvalidOperation if this Cursor has already been used.

Parameters:
  • max_time_ms: the time limit after which the operation is aborted

min(spec: Sequence[str | Tuple[str, int | str | Mapping[str, Any]]] | Mapping[str, Any]) Cursor[_DocumentType]

Adds min operator that specifies lower bound for specific index.

When using min, hint() should also be configured to ensure the query uses the expected index; starting in MongoDB 4.2, hint() will be required.
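
A sketch combining min(), max(), and hint(), assuming an index on the length field exists:

from pymongo import ASCENDING

# Files at least 1 KB but smaller than 1 MB, via the assumed index.
cursor = (bucket.find()
          .hint([("length", ASCENDING)])
          .min([("length", 1024)])
          .max([("length", 1024 * 1024)]))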

Parameters:
  • spec: a list of field, limit pairs specifying the inclusive lower bound for all keys of a specific index in order.

async next()

Advance the cursor.

New in version 2.2.

next_object()

DEPRECATED - Get next GridOut object from cursor.

rewind()

Rewind this cursor to its unevaluated state.

skip(skip: int) Cursor[_DocumentType]

Skips the first skip results of this cursor.

Raises TypeError if skip is not an integer. Raises ValueError if skip is less than 0. Raises InvalidOperation if this Cursor has already been used. The last skip applied to this cursor takes precedence.
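
skip() combines with sort() and limit() for simple paging, as in this sketch:

# Second page of ten files, newest first.
page = bucket.find().sort("uploadDate", -1).skip(10).limit(10)
async for grid_out in page:
    print(grid_out.filename)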

Parameters:
  • skip: the number of results to skip

sort(key_or_list: str | Sequence[str | Tuple[str, int | str | Mapping[str, Any]]] | Mapping[str, Any], direction: int | str | None = None) Cursor[_DocumentType]

Sorts this cursor’s results.

Pass a field name and a direction, either ASCENDING or DESCENDING:

>>> async def f():
...     cursor = collection.find().sort("_id", pymongo.DESCENDING)
...     docs = await cursor.to_list(None)
...     print([d["_id"] for d in docs])
...
>>> IOLoop.current().run_sync(f)
[4, 3, 2, 1, 0]

To sort by multiple fields, pass a list of (key, direction) pairs:

>>> async def f():
...     cursor = collection.find().sort(
...         [("field1", pymongo.ASCENDING), ("field2", pymongo.DESCENDING)]
...     )
...     docs = await cursor.to_list(None)
...     print([(d["field1"], d["field2"]) for d in docs])
...
>>> IOLoop.current().run_sync(f)
[(0, 4), (0, 2), (0, 0), (1, 3), (1, 1)]

Text search results can be sorted by relevance:

>>> async def f():
...     cursor = collection.find(
...         {"$text": {"$search": "some words"}}, {"score": {"$meta": "textScore"}}
...     )
...     # Sort by 'score' field.
...     cursor.sort([("score", {"$meta": "textScore"})])
...     async for doc in cursor:
...         print("%.1f %s" % (doc["score"], doc["field"]))
...
>>> IOLoop.current().run_sync(f)
1.5 words about some words
1.0 words

Raises InvalidOperation if this cursor has already been used. Only the last sort() applied to this cursor has any effect.

Parameters:
  • key_or_list: a single key or a list of (key, direction) pairs specifying the keys to sort on

  • direction (optional): only used if key_or_list is a single key, if not given ASCENDING is assumed

coroutine to_list(length)

Get a list of documents.

>>> from motor.motor_tornado import MotorClient
>>> collection = MotorClient().test.test_collection
>>>
>>> async def f():
...     cursor = collection.find().sort([("_id", 1)])
...     docs = await cursor.to_list(length=2)
...     while docs:
...         print(docs)
...         docs = await cursor.to_list(length=2)
...     print("done")
...
>>> ioloop.IOLoop.current().run_sync(f)
[{'_id': 0}, {'_id': 1}]
[{'_id': 2}, {'_id': 3}]
done

Parameters:
  • length: maximum number of documents to return for this call, or None

Returns a Future.

Changed in version 2.0: No longer accepts a callback argument.

Changed in version 0.2: callback must be passed as a keyword argument, like to_list(10, callback=callback), and the length parameter is no longer optional.

where(code: str | Code) Cursor[_DocumentType]

Adds a $where clause to this query.

The code argument must be an instance of str or Code containing a JavaScript expression. This expression will be evaluated for each document scanned. Only those documents for which the expression evaluates to true will be returned as results. The keyword this refers to the object currently being scanned. For example:

# Find all documents where field "a" is less than "b" plus "c".
async for doc in db.test.find().where('this.a < (this.b + this.c)'):
    print(doc)

Raises TypeError if code is not an instance of str. Raises InvalidOperation if this MotorCursor has already been used. Only the last call to where() applied to a MotorCursor has any effect.

Note

MongoDB 4.4 drops support for Code with scope variables. Consider using $expr instead.

Parameters:
  • code: JavaScript expression to use as a filter

property address

The (host, port) of the server used, or None.

Changed in version 3.0: Renamed from “conn_id”.

property alive

Does this cursor have the potential to return more data?

This is mostly useful with tailable cursors since they will stop iterating even though they may return more results in the future.

With regular cursors, simply use an async for loop instead of alive:

async for doc in collection.find():
    print(doc)

Note

Even if alive is True, next() can raise StopIteration. alive can also be True while iterating a cursor from a failed server. In this case alive will return False after next() fails to retrieve the next batch of results from the server.

property cursor_id

Returns the id of the cursor.

New in version 2.2.

property fetch_next

DEPRECATED - A Future used with gen.coroutine to asynchronously retrieve the next document in the result set, fetching a batch of documents from the server if necessary. Resolves to False if there are no more documents, otherwise next_object() is guaranteed to return a document:

Attention

The fetch_next property is deprecated and will be removed in Motor 3.0. Use async for to iterate elegantly and efficiently over MotorCursor objects instead:

>>> async def f():
...     await collection.drop()
...     await collection.insert_many([{"_id": i} for i in range(5)])
...     async for doc in collection.find():
...         sys.stdout.write(str(doc["_id"]) + ", ")
...     print("done")
...
>>> IOLoop.current().run_sync(f)
0, 1, 2, 3, 4, done

While it appears that fetch_next retrieves each document from the server individually, the cursor actually fetches documents efficiently in large batches. Example usage:

>>> async def f():
...     await collection.drop()
...     await collection.insert_many([{"_id": i} for i in range(5)])
...     cursor = collection.find().sort([("_id", 1)])
...     while await cursor.fetch_next:
...         doc = cursor.next_object()
...         sys.stdout.write(str(doc["_id"]) + ", ")
...     print("done")
...
>>> IOLoop.current().run_sync(f)
0, 1, 2, 3, 4, done

Changed in version 2.2: Deprecated.

property session

The cursor’s ClientSession, or None.

New in version 3.6.