
MONGOC_REFERENCE(3) libmongoc MONGOC_REFERENCE(3)

NAME

mongoc_reference - Index

LIBMONGOC

A Cross Platform MongoDB Client Library for C

Introduction

The MongoDB C Driver, also known as "libmongoc", is a library for using MongoDB from C applications, and for writing MongoDB drivers in higher-level languages.

It depends on libbson to generate and parse BSON documents, the native data format of MongoDB.

Installing the MongoDB C Driver (libmongoc) and BSON library (libbson)

The following guide will step you through the process of downloading, building, and installing the current release of the MongoDB C Driver (libmongoc) and BSON library (libbson).

Supported Platforms

The MongoDB C Driver is continuously tested on a variety of platforms, compilers, and architectures, including:

  • Archlinux
  • Debian 9.2, 10.0
  • macOS 10.14
  • Microsoft Windows Server 2008, 2016
  • RHEL 6.2, 7.0, 7.1, 8.2
  • Ubuntu 16.04, 18.04
  • Clang 3.4, 3.5, 3.7, 3.8, 6.0
  • GCC 4.8, 4.9, 5.4, 6.3, 8.2, 8.3
  • MinGW-W64
  • Visual Studio 2013, 2015, 2017
  • x86, x86_64, ARM (aarch64), Power8 (ppc64le), zSeries (s390x)

Install libmongoc with a Package Manager

Several Linux distributions provide packages for libmongoc and its dependencies. One advantage of installing libmongoc with a package manager is that its dependencies (including libbson) will be installed automatically. If you choose to install libmongoc from distribution packages, use the package manager to confirm the version being installed is sufficient for your needs.

The libmongoc package is available on recent versions of Debian and Ubuntu.

$ apt-get install libmongoc-1.0-0


On Fedora, a mongo-c-driver package is available in the default repositories and can be installed with:

$ dnf install mongo-c-driver


On recent Red Hat systems, such as CentOS and RHEL 7, a mongo-c-driver package is available in the EPEL repository. To check which version is available, see https://packages.fedoraproject.org/pkgs/mongo-c-driver/mongo-c-driver/. The package can be installed with:

$ yum install mongo-c-driver


On macOS systems with Homebrew, the mongo-c-driver package can be installed with:

$ brew install mongo-c-driver


Install libbson with a Package Manager

The libbson package is available on recent versions of Debian and Ubuntu. If you have installed libmongoc, then libbson will have already been installed as a dependency. It is also possible to install libbson without libmongoc.

$ apt-get install libbson-1.0-0


On Fedora, a libbson package is available in the default repositories and can be installed with:

$ dnf install libbson


On recent Red Hat systems, such as CentOS and RHEL 7, a libbson package is available in the EPEL repository. To check which version is available, see https://apps.fedoraproject.org/packages/libbson. The package can be installed with:

$ yum install libbson


Build environment

Build environment on Unix

Prerequisites for libmongoc

OpenSSL is required for authentication or for TLS connections to MongoDB. Kerberos or LDAP support requires Cyrus SASL.

To install all optional dependencies on RedHat / Fedora:

$ sudo yum install cmake openssl-devel cyrus-sasl-devel


On Debian / Ubuntu:

$ sudo apt-get install cmake libssl-dev libsasl2-dev


On FreeBSD:

$ su -c 'pkg install cmake openssl cyrus-sasl'


Prerequisites for libbson

The only prerequisite for building libbson is cmake. The command lines above can be adjusted to install only cmake.

Build environment on macOS

Install the Xcode Command Line Tools:

$ xcode-select --install


The cmake utility is also required. First install Homebrew according to its instructions, then:

$ brew install cmake


Build environment on Windows with Visual Studio

Building on Windows requires Windows Vista or newer and Visual Studio 2013 or newer. Additionally, cmake is required to generate Visual Studio project files. Installation of these components on Windows is beyond the scope of this document.

Build environment on Windows with MinGW-W64 and MSYS2

Install MSYS2 from msys2.github.io. Choose the x86_64 version, not i686.

Open the MinGW shell with c:\msys64\mingw64.exe (not the msys2_shell). Install dependencies:

$ pacman --noconfirm -Syu
$ pacman --noconfirm -S mingw-w64-x86_64-gcc mingw-w64-x86_64-cmake
$ pacman --noconfirm -S mingw-w64-x86_64-extra-cmake-modules make tar
$ pacman --noconfirm -S mingw64/mingw-w64-x86_64-cyrus-sasl


Configuring the build

Before building libmongoc and/or libbson, it is necessary to configure, or prepare, the build. The steps to prepare the build depend on how you obtained the source code and the build platform.

Preparing a build from a release tarball

The most recent release of libmongoc and libbson, both of which are included in mongo-c-driver, can be downloaded from the mongo-c-driver releases page on GitHub (https://github.com/mongodb/mongo-c-driver/releases). The instructions in this document utilize cmake's out-of-source build feature to keep build artifacts separate from source files. While the $ prompt is used throughout, the instructions below will work on Linux, macOS, and Windows (assuming that CMake is in the user's shell path in all cases). See the subsequent sections for additional platform-specific instructions.

The following snippet will download and extract the driver, and configure it:

$ wget https://github.com/mongodb/mongo-c-driver/releases/download/1.23.0/mongo-c-driver-1.23.0.tar.gz
$ tar xzf mongo-c-driver-1.23.0.tar.gz
$ cd mongo-c-driver-1.23.0
$ mkdir cmake-build
$ cd cmake-build
$ cmake -DENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF ..


The -DENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF option is recommended; see Initialization and cleanup. Other useful cmake options are -DCMAKE_BUILD_TYPE=Release for a release-optimized build and -DCMAKE_BUILD_TYPE=Debug for a debug build. For a list of all configure options, run cmake -L .. from the build directory.

If cmake completed successfully, you will see a considerable amount of output describing your build configuration. The final line of output should look something like this:

-- Build files have been written to: /home/user/mongo-c-driver-1.23.0/cmake-build


If cmake concludes with anything different, then it is likely an error occurred.

mongo-c-driver contains a copy of libbson, in case your system does not already have libbson installed. The configuration will detect if libbson is not installed and use the bundled libbson.

Additionally, it is possible to build only libbson by setting the -DENABLE_MONGOC=OFF option:

$ cmake -DENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF -DENABLE_MONGOC=OFF ..


A build configuration description similar to the one above will be displayed, though with fewer entries. Once the configuration is complete, the selected components can be built and installed with the commands shown in Executing a build below.

Preparing a build from a git repository clone

Clone the repository and prepare the build on the current branch or a particular release tag:

$ git clone https://github.com/mongodb/mongo-c-driver.git
$ cd mongo-c-driver
$ git checkout 1.23.0  # To build a particular release
$ python build/calc_release_version.py > VERSION_CURRENT
$ mkdir cmake-build
$ cd cmake-build
$ cmake -DENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF ..


Preparing a build on Windows with Visual Studio

On the Windows platform with Visual Studio, it may be necessary to specify the CMake generator to use. This is especially important if multiple versions of Visual Studio are installed on the system or if alternate build tools (e.g., MinGW, MSYS2, Cygwin, etc.) are present on the system. Specifying the generator will ensure that the build configuration is known with certainty, rather than relying on the toolchain that CMake happens to find.

Start by generating Visual Studio project files. The following assumes you are compiling for 64-bit Windows using Visual Studio 2015 Express, which can be freely downloaded from Microsoft. The sample commands utilize cmake's out-of-source build feature to keep build artifacts separate from source files.

$ cd mongo-c-driver-1.23.0
$ mkdir cmake-build
$ cd cmake-build
$ cmake -G "Visual Studio 14 2015 Win64" \
    "-DCMAKE_INSTALL_PREFIX=C:\mongo-c-driver" \
    "-DCMAKE_PREFIX_PATH=C:\mongo-c-driver" \
    ..


(Run cmake -LH .. for a list of other options.)

To see a complete list of the CMake generators available on your specific system, use a command like this:

$ cmake --help


Executing a build

Building on Unix, macOS, and Windows (MinGW-W64 and MSYS2)

$ cmake --build .
$ sudo cmake --build . --target install


(Note that the sudo command may not be applicable or available depending on the configuration of your system.)

In the above commands, the first relies on the default target which builds all configured components. For fine grained control over what gets built, the following command can be used (for Ninja and Makefile-based build systems) to list all available targets:

$ cmake --build . --target help


Building on Windows with Visual Studio

Once the project files are generated, the project can be opened directly in Visual Studio or compiled from the command line.

Build using the CMake build tool mode:

$ cmake --build . --config RelWithDebInfo


Visual Studio's default build type is Debug, but we recommend a release build with debug info for production use. Now that libmongoc and libbson are compiled, install them. Components will be installed to the path specified by CMAKE_INSTALL_PREFIX.

$ cmake --build . --config RelWithDebInfo --target install


You should now see libmongoc and libbson installed in C:\mongo-c-driver.

For Visual Studio 2019 (16.4 and newer), this command can be used to list all available targets:

$ cmake --build . -- /targets


Alternately, you can examine the files matching the glob *.vcxproj in the cmake-build directory.

To use the driver libraries in your program, see Using libmongoc in a Microsoft Visual Studio project.

Generating the documentation

Install Sphinx, then:

$ cmake -DENABLE_MAN_PAGES=ON -DENABLE_HTML_DOCS=ON ..
$ cmake --build . --target mongoc-doc


To build only the libbson documentation:

$ cmake -DENABLE_MAN_PAGES=ON -DENABLE_HTML_DOCS=ON ..
$ cmake --build . --target bson-doc


The -DENABLE_MAN_PAGES=ON and -DENABLE_HTML_DOCS=ON can also be added as options to a normal build from a release tarball or from git so that the documentation is built at the same time as other components.

Uninstalling the installed components

There are two ways to uninstall the components that have been installed. The first is to invoke the uninstall program directly. On Linux/Unix:

$ sudo /usr/local/share/mongo-c-driver/uninstall.sh


On Windows:

$ C:\mongo-c-driver\share\mongo-c-driver\uninstall.bat


The second way to uninstall is from within the build directory, assuming that it is in the exact same state as when the install command was invoked:

$ sudo cmake --build . --target uninstall


The second approach simply invokes the uninstall program referenced in the first approach.

Dealing with Build Failures

If your attempt to build the C driver fails, please see the README for instructions on requesting assistance.

Additional Options for Integrators

In the event that you are building the BSON library and/or the C driver to embed with other components and you wish to avoid the potential for collision with components installed from a standard build or from a distribution package manager, you can make use of the BSON_OUTPUT_BASENAME and MONGOC_OUTPUT_BASENAME options to cmake.

$ cmake -DBSON_OUTPUT_BASENAME=custom_bson -DMONGOC_OUTPUT_BASENAME=custom_mongoc ..


The above command would produce libraries named libcustom_bson.so and libcustom_mongoc.so (or with the extension appropriate for the build platform). Those libraries could be placed in a standard system directory or in an alternate location and could be linked to by specifying something like -lcustom_mongoc -lcustom_bson on the linker command line (possibly adjusting the specific flags to those required by your linker).

Tutorial

This guide offers a brief introduction to the MongoDB C Driver.

For more information on the C API, please refer to the API Reference.

Contents

Tutorial
  • Installing
  • Starting MongoDB
  • Include and link libmongoc in your C program
  • Use libmongoc in a Microsoft Visual Studio Project
  • Making a Connection
  • Creating BSON Documents
  • Basic CRUD Operations
  • Executing Commands
  • Threading
  • Next Steps


Installing

For detailed instructions on installing the MongoDB C Driver on a particular platform, please see the installation guide.

Starting MongoDB

To run the examples in this tutorial, MongoDB must be installed and running on localhost on the default port, 27017. To check if it is up and running, connect to it with the MongoDB shell.

$ mongo --host localhost --port 27017
MongoDB shell version: 3.0.6
connecting to: localhost:27017/test
>


Include and link libmongoc in your C program

All of libmongoc's functions and types are available in one header file. Simply include mongoc/mongoc.h:

#include <mongoc/mongoc.h>


CMake

The libmongoc installation includes a CMake config-file package, so you can use CMake's find_package command to import libmongoc's CMake target and link to libmongoc (as a shared library):

CMakeLists.txt

# Specify the minimum version you require.
find_package (mongoc-1.0 1.7 REQUIRED)
# The "hello_mongoc.c" sample program is shared among four tests.
add_executable (hello_mongoc ../../hello_mongoc.c)
target_link_libraries (hello_mongoc PRIVATE mongo::mongoc_shared)


To use libmongoc as a static library instead, link to the mongo::mongoc_static CMake target:

# Specify the minimum version you require.
find_package (mongoc-1.0 1.7 REQUIRED)
# The "hello_mongoc.c" sample program is shared among four tests.
add_executable (hello_mongoc ../../hello_mongoc.c)
target_link_libraries (hello_mongoc PRIVATE mongo::mongoc_static)


pkg-config

If you're not using CMake, use pkg-config on the command line to set header and library paths:

gcc -o hello_mongoc hello_mongoc.c $(pkg-config --libs --cflags libmongoc-1.0)


Or to statically link to libmongoc:

gcc -o hello_mongoc hello_mongoc.c $(pkg-config --libs --cflags libmongoc-static-1.0)


Specifying header and include paths manually

If you aren't using CMake or pkg-config, paths and libraries can be managed manually.

$ gcc -o hello_mongoc hello_mongoc.c \
    -I/usr/local/include/libbson-1.0 -I/usr/local/include/libmongoc-1.0 \
    -lmongoc-1.0 -lbson-1.0
$ ./hello_mongoc
{ "ok" : 1.000000 }


For Windows users, the code can be compiled and run with the following commands. (This assumes that the MongoDB C Driver has been installed to C:\mongo-c-driver; change the include directory as needed.)

C:\> cl.exe /IC:\mongo-c-driver\include\libbson-1.0 /IC:\mongo-c-driver\include\libmongoc-1.0 hello_mongoc.c
C:\> hello_mongoc
{ "ok" : 1.000000 }


Use libmongoc in a Microsoft Visual Studio Project

See the libmongoc and Visual Studio guide.

Making a Connection

Access MongoDB with a mongoc_client_t. It transparently connects to standalone servers, replica sets and sharded clusters on demand. To perform operations on a database or collection, create a mongoc_database_t or mongoc_collection_t struct from the mongoc_client_t.

At the start of an application, call mongoc_init() before any other libmongoc functions. At the end, call the appropriate destroy function for each collection, database, or client handle, in reverse order from how they were constructed. Call mongoc_cleanup() before exiting.

The example below establishes a connection to a standalone server on localhost, registers the client application as "connect-example," and performs a simple command.

More information about database operations can be found in the CRUD Operations and Executing Commands sections. Examples of connecting to replica sets and sharded clusters can be found on the Advanced Connections page.

hello_mongoc.c

#include <mongoc/mongoc.h>
int
main (int argc, char *argv[])
{

const char *uri_string = "mongodb://localhost:27017";
mongoc_uri_t *uri;
mongoc_client_t *client;
mongoc_database_t *database;
mongoc_collection_t *collection;
bson_t *command, reply, *insert;
bson_error_t error;
char *str;
bool retval;
/*
* Required to initialize libmongoc's internals
*/
mongoc_init ();
/*
* Optionally get MongoDB URI from command line
*/
if (argc > 1) {
uri_string = argv[1];
}
/*
* Safely create a MongoDB URI object from the given string
*/
uri = mongoc_uri_new_with_error (uri_string, &error);
if (!uri) {
fprintf (stderr,
"failed to parse URI: %s\n"
"error message: %s\n",
uri_string,
error.message);
return EXIT_FAILURE;
}
/*
* Create a new client instance
*/
client = mongoc_client_new_from_uri (uri);
if (!client) {
return EXIT_FAILURE;
}
/*
* Register the application name so we can track it in the profile logs
* on the server. This can also be done from the URI (see other examples).
*/
mongoc_client_set_appname (client, "connect-example");
/*
* Get a handle on the database "db_name" and collection "coll_name"
*/
database = mongoc_client_get_database (client, "db_name");
collection = mongoc_client_get_collection (client, "db_name", "coll_name");
/*
* Do work. This example pings the database, prints the result as JSON and
* performs an insert
*/
command = BCON_NEW ("ping", BCON_INT32 (1));
retval = mongoc_client_command_simple (
client, "admin", command, NULL, &reply, &error);
if (!retval) {
fprintf (stderr, "%s\n", error.message);
return EXIT_FAILURE;
}
str = bson_as_json (&reply, NULL);
printf ("%s\n", str);
insert = BCON_NEW ("hello", BCON_UTF8 ("world"));
if (!mongoc_collection_insert_one (collection, insert, NULL, NULL, &error)) {
fprintf (stderr, "%s\n", error.message);
}
bson_destroy (insert);
bson_destroy (&reply);
bson_destroy (command);
bson_free (str);
/*
* Release our handles and clean up libmongoc
*/
mongoc_collection_destroy (collection);
mongoc_database_destroy (database);
mongoc_uri_destroy (uri);
mongoc_client_destroy (client);
mongoc_cleanup ();
return EXIT_SUCCESS; }


Creating BSON Documents

Documents are stored in MongoDB's data format, BSON. The C driver uses libbson to create BSON documents. There are several ways to construct them: appending key-value pairs, using BCON, or parsing JSON.

Appending BSON

A BSON document, represented as a bson_t in code, can be constructed one field at a time using libbson's append functions.

For example, to create a document like this:

{
   born : ISODate("1906-12-09"),
   died : ISODate("1992-01-01"),
   name : {
      first : "Grace",
      last : "Hopper"
   },
   languages : [ "MATH-MATIC", "FLOW-MATIC", "COBOL" ],
   degrees: [ { degree: "BA", school: "Vassar" }, { degree: "PhD", school: "Yale" } ]
}


Use the following code:

#include <bson/bson.h>
int
main (int argc, char *argv[])
{
struct tm born = { 0 };
struct tm died = { 0 };
const char *lang_names[] = {"MATH-MATIC", "FLOW-MATIC", "COBOL"};
const char *schools[] = {"Vassar", "Yale"};
const char *degrees[] = {"BA", "PhD"};
uint32_t i;
char buf[16];
const char *key;
size_t keylen;
bson_t *document;
bson_t child;
bson_t child2;
char *str;
document = bson_new ();
/*
* Append { "born" : ISODate("1906-12-09") } to the document.
* Passing -1 for the length argument tells libbson to calculate the string length.
*/
born.tm_year = 6; /* years are 1900-based */
born.tm_mon = 11; /* months are 0-based */
born.tm_mday = 9;
bson_append_date_time (document, "born", -1, mktime (&born) * 1000);
/*
* Append { "died" : ISODate("1992-01-01") } to the document.
*/
died.tm_year = 92;
died.tm_mon = 0;
died.tm_mday = 1;
/*
* For convenience, this macro passes length -1 by default.
*/
BSON_APPEND_DATE_TIME (document, "died", mktime (&died) * 1000);
/*
* Append a subdocument.
*/
BSON_APPEND_DOCUMENT_BEGIN (document, "name", &child);
BSON_APPEND_UTF8 (&child, "first", "Grace");
BSON_APPEND_UTF8 (&child, "last", "Hopper");
bson_append_document_end (document, &child);
/*
* Append array of strings. Generate keys "0", "1", "2".
*/
BSON_APPEND_ARRAY_BEGIN (document, "languages", &child);
for (i = 0; i < sizeof lang_names / sizeof (char *); ++i) {
keylen = bson_uint32_to_string (i, &key, buf, sizeof buf);
bson_append_utf8 (&child, key, (int) keylen, lang_names[i], -1);
}
bson_append_array_end (document, &child);
/*
* Array of subdocuments:
* degrees: [ { degree: "BA", school: "Vassar" }, ... ]
*/
BSON_APPEND_ARRAY_BEGIN (document, "degrees", &child);
for (i = 0; i < sizeof degrees / sizeof (char *); ++i) {
keylen = bson_uint32_to_string (i, &key, buf, sizeof buf);
bson_append_document_begin (&child, key, (int) keylen, &child2);
BSON_APPEND_UTF8 (&child2, "degree", degrees[i]);
BSON_APPEND_UTF8 (&child2, "school", schools[i]);
bson_append_document_end (&child, &child2);
}
bson_append_array_end (document, &child);
/*
* Print the document as a JSON string.
*/
str = bson_as_canonical_extended_json (document, NULL);
printf ("%s\n", str);
bson_free (str);
/*
* Clean up allocated bson documents.
*/
bson_destroy (document);
return 0; }


See the libbson documentation for all of the types that can be appended to a bson_t.
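
As a brief sketch of a few of the other append helpers (an illustrative fragment, not a complete program; the field names are placeholders):

bson_t *doc = bson_new ();
bson_oid_t oid;

bson_oid_init (&oid, NULL);
BSON_APPEND_OID (doc, "_id", &oid);
BSON_APPEND_INT32 (doc, "count", 1);
BSON_APPEND_BOOL (doc, "active", true);
BSON_APPEND_DOUBLE (doc, "score", 9.5);
bson_destroy (doc);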

Using BCON

BSON C Object Notation, BCON for short, is an alternative way of constructing BSON documents in a manner closer to the intended format. It has less type-safety than BSON's append functions but results in less code.

#include <bson/bson.h>
int
main (int argc, char *argv[])
{
struct tm born = { 0 };
struct tm died = { 0 };
bson_t *document;
char *str;
born.tm_year = 6;
born.tm_mon = 11;
born.tm_mday = 9;
died.tm_year = 92;
died.tm_mon = 0;
died.tm_mday = 1;
document = BCON_NEW (
"born", BCON_DATE_TIME (mktime (&born) * 1000),
"died", BCON_DATE_TIME (mktime (&died) * 1000),
"name", "{",
"first", BCON_UTF8 ("Grace"),
"last", BCON_UTF8 ("Hopper"),
"}",
"languages", "[",
BCON_UTF8 ("MATH-MATIC"),
BCON_UTF8 ("FLOW-MATIC"),
BCON_UTF8 ("COBOL"),
"]",
"degrees", "[",
"{", "degree", BCON_UTF8 ("BA"), "school", BCON_UTF8 ("Vassar"), "}",
"{", "degree", BCON_UTF8 ("PhD"), "school", BCON_UTF8 ("Yale"), "}",
"]");
/*
* Print the document as a JSON string.
*/
str = bson_as_canonical_extended_json (document, NULL);
printf ("%s\n", str);
bson_free (str);
/*
* Clean up allocated bson documents.
*/
bson_destroy (document);
return 0; }


Notice that BCON can create arrays, subdocuments and arbitrary fields.

Creating BSON from JSON

For single documents, BSON can be created from JSON strings via bson_new_from_json.

#include <bson/bson.h>
int
main (int argc, char *argv[])
{
bson_error_t error;
bson_t *bson;
char *string;
const char *json = "{\"name\": {\"first\":\"Grace\", \"last\":\"Hopper\"}}";
bson = bson_new_from_json ((const uint8_t *)json, -1, &error);
if (!bson) {
fprintf (stderr, "%s\n", error.message);
return EXIT_FAILURE;
}
string = bson_as_canonical_extended_json (bson, NULL);
printf ("%s\n", string);
bson_free (string);
return 0; }


To initialize BSON from a sequence of JSON documents, use bson_json_reader_t.
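
For illustration, here is a minimal sketch using bson_json_reader_new_from_file; the file name documents.json is a placeholder:

#include <bson/bson.h>
#include <stdio.h>

int
main (void)
{
   bson_json_reader_t *reader;
   bson_error_t error;
   bson_t doc = BSON_INITIALIZER;
   int ret;
   char *str;

   /* "documents.json" is a hypothetical file containing a sequence of JSON documents. */
   reader = bson_json_reader_new_from_file ("documents.json", &error);
   if (!reader) {
      fprintf (stderr, "%s\n", error.message);
      return 1;
   }

   /* bson_json_reader_read returns 1 per document read, 0 at end of input, -1 on error. */
   while ((ret = bson_json_reader_read (reader, &doc, &error)) == 1) {
      str = bson_as_canonical_extended_json (&doc, NULL);
      printf ("%s\n", str);
      bson_free (str);
      bson_reinit (&doc);
   }
   if (ret == -1) {
      fprintf (stderr, "%s\n", error.message);
   }

   bson_json_reader_destroy (reader);
   bson_destroy (&doc);
   return 0;
}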

Basic CRUD Operations

This section demonstrates the basics of using the C Driver to interact with MongoDB.

Inserting a Document

To insert documents into a collection, first obtain a handle to a mongoc_collection_t via a mongoc_client_t. Then, use mongoc_collection_insert_one() to add BSON documents to the collection. This example inserts into the database "mydb" and collection "mycoll".

When finished, ensure that allocated structures are freed by using their respective destroy functions.

#include <bson/bson.h>
#include <mongoc/mongoc.h>
#include <stdio.h>
int
main (int argc, char *argv[])
{
mongoc_client_t *client;
mongoc_collection_t *collection;
bson_error_t error;
bson_oid_t oid;
bson_t *doc;
mongoc_init ();
client = mongoc_client_new ("mongodb://localhost:27017/?appname=insert-example");
collection = mongoc_client_get_collection (client, "mydb", "mycoll");
doc = bson_new ();
bson_oid_init (&oid, NULL);
BSON_APPEND_OID (doc, "_id", &oid);
BSON_APPEND_UTF8 (doc, "hello", "world");
if (!mongoc_collection_insert_one (
collection, doc, NULL, NULL, &error)) {
fprintf (stderr, "%s\n", error.message);
}
bson_destroy (doc);
mongoc_collection_destroy (collection);
mongoc_client_destroy (client);
mongoc_cleanup ();
return 0; }


Compile the code and run it:

$ gcc -o insert insert.c $(pkg-config --cflags --libs libmongoc-1.0)
$ ./insert


On Windows:

C:\> cl.exe /IC:\mongo-c-driver\include\libbson-1.0 /IC:\mongo-c-driver\include\libmongoc-1.0 insert.c
C:\> insert


To verify that the insert succeeded, connect with the MongoDB shell.

$ mongo
MongoDB shell version: 3.0.6
connecting to: test
> use mydb
switched to db mydb
> db.mycoll.find()
{ "_id" : ObjectId("55ef43766cb5f36a3bae6ee4"), "hello" : "world" }
>


Finding a Document

To query a MongoDB collection with the C driver, use the function mongoc_collection_find_with_opts(). This returns a cursor to the matching documents. The following examples iterate through the result cursors and print the matches to stdout as JSON strings.

Use a document as a query specifier; for example,

{ "color" : "red" }


will match any document with a field named "color" with value "red". An empty document {} can be used to match all documents.

This first example uses an empty query specifier to find all documents in the database "mydb" and collection "mycoll".

#include <bson/bson.h>
#include <mongoc/mongoc.h>
#include <stdio.h>
int
main (int argc, char *argv[])
{

mongoc_client_t *client;
mongoc_collection_t *collection;
mongoc_cursor_t *cursor;
const bson_t *doc;
bson_t *query;
char *str;
mongoc_init ();
client =
mongoc_client_new ("mongodb://localhost:27017/?appname=find-example");
collection = mongoc_client_get_collection (client, "mydb", "mycoll");
query = bson_new ();
cursor = mongoc_collection_find_with_opts (collection, query, NULL, NULL);
while (mongoc_cursor_next (cursor, &doc)) {
str = bson_as_canonical_extended_json (doc, NULL);
printf ("%s\n", str);
bson_free (str);
}
bson_destroy (query);
mongoc_cursor_destroy (cursor);
mongoc_collection_destroy (collection);
mongoc_client_destroy (client);
mongoc_cleanup ();
return 0; }


Compile the code and run it:

$ gcc -o find find.c $(pkg-config --cflags --libs libmongoc-1.0)
$ ./find
{ "_id" : { "$oid" : "55ef43766cb5f36a3bae6ee4" }, "hello" : "world" }


On Windows:

C:\> cl.exe /IC:\mongo-c-driver\include\libbson-1.0 /IC:\mongo-c-driver\include\libmongoc-1.0 find.c
C:\> find
{ "_id" : { "$oid" : "55ef43766cb5f36a3bae6ee4" }, "hello" : "world" }


To look for a specific document, add a specifier to query. This example adds a call to BSON_APPEND_UTF8() to look for all documents matching {"hello" : "world"}.

#include <bson/bson.h>
#include <mongoc/mongoc.h>
#include <stdio.h>
int
main (int argc, char *argv[])
{

mongoc_client_t *client;
mongoc_collection_t *collection;
mongoc_cursor_t *cursor;
const bson_t *doc;
bson_t *query;
char *str;
mongoc_init ();
client = mongoc_client_new (
"mongodb://localhost:27017/?appname=find-specific-example");
collection = mongoc_client_get_collection (client, "mydb", "mycoll");
query = bson_new ();
BSON_APPEND_UTF8 (query, "hello", "world");
cursor = mongoc_collection_find_with_opts (collection, query, NULL, NULL);
while (mongoc_cursor_next (cursor, &doc)) {
str = bson_as_canonical_extended_json (doc, NULL);
printf ("%s\n", str);
bson_free (str);
}
bson_destroy (query);
mongoc_cursor_destroy (cursor);
mongoc_collection_destroy (collection);
mongoc_client_destroy (client);
mongoc_cleanup ();
return 0; }


$ gcc -o find-specific find-specific.c $(pkg-config --cflags --libs libmongoc-1.0)
$ ./find-specific
{ "_id" : { "$oid" : "55ef43766cb5f36a3bae6ee4" }, "hello" : "world" }


C:\> cl.exe /IC:\mongo-c-driver\include\libbson-1.0 /IC:\mongo-c-driver\include\libmongoc-1.0 find-specific.c
C:\> find-specific
{ "_id" : { "$oid" : "55ef43766cb5f36a3bae6ee4" }, "hello" : "world" }


Updating a Document

This code snippet gives an example of using mongoc_collection_update_one() to update the fields of a document.

Using the "mydb" database, the following example inserts an example document into the "mycoll" collection. Then, using its _id field, the document is updated with different values and a new field.

#include <bson/bson.h>
#include <mongoc/mongoc.h>
#include <stdio.h>
int
main (int argc, char *argv[])
{

mongoc_collection_t *collection;
mongoc_client_t *client;
bson_error_t error;
bson_oid_t oid;
bson_t *doc = NULL;
bson_t *update = NULL;
bson_t *query = NULL;
mongoc_init ();
client =
mongoc_client_new ("mongodb://localhost:27017/?appname=update-example");
collection = mongoc_client_get_collection (client, "mydb", "mycoll");
bson_oid_init (&oid, NULL);
doc = BCON_NEW ("_id", BCON_OID (&oid), "key", BCON_UTF8 ("old_value"));
if (!mongoc_collection_insert_one (collection, doc, NULL, NULL, &error)) {
fprintf (stderr, "%s\n", error.message);
goto fail;
}
query = BCON_NEW ("_id", BCON_OID (&oid));
update = BCON_NEW ("$set",
"{",
"key",
BCON_UTF8 ("new_value"),
"updated",
BCON_BOOL (true),
"}");
if (!mongoc_collection_update_one (
collection, query, update, NULL, NULL, &error)) {
fprintf (stderr, "%s\n", error.message);
goto fail;
}
fail:
if (doc)
bson_destroy (doc);
if (query)
bson_destroy (query);
if (update)
bson_destroy (update);
mongoc_collection_destroy (collection);
mongoc_client_destroy (client);
mongoc_cleanup ();
return 0; }


Compile the code and run it:

$ gcc -o update update.c $(pkg-config --cflags --libs libmongoc-1.0)
$ ./update


On Windows:

C:\> cl.exe /IC:\mongo-c-driver\include\libbson-1.0 /IC:\mongo-c-driver\include\libmongoc-1.0 update.c
C:\> update
{ "_id" : { "$oid" : "55ef43766cb5f36a3bae6ee4" }, "hello" : "world" }


To verify that the update succeeded, connect with the MongoDB shell.

$ mongo
MongoDB shell version: 3.0.6
connecting to: test
> use mydb
switched to db mydb
> db.mycoll.find({"updated" : true})
{ "_id" : ObjectId("55ef549236fe322f9490e17b"), "updated" : true, "key" : "new_value" }
>


Deleting a Document

This example illustrates the use of mongoc_collection_delete_one() to delete a document.

The following code inserts a sample document into the database "mydb" and collection "mycoll". Then, it deletes that document by matching on its _id.

#include <bson/bson.h>
#include <mongoc/mongoc.h>
#include <stdio.h>
int
main (int argc, char *argv[])
{

mongoc_client_t *client;
mongoc_collection_t *collection;
bson_error_t error;
bson_oid_t oid;
bson_t *doc;
mongoc_init ();
client =
mongoc_client_new ("mongodb://localhost:27017/?appname=delete-example");
collection = mongoc_client_get_collection (client, "test", "test");
doc = bson_new ();
bson_oid_init (&oid, NULL);
BSON_APPEND_OID (doc, "_id", &oid);
BSON_APPEND_UTF8 (doc, "hello", "world");
if (!mongoc_collection_insert_one (collection, doc, NULL, NULL, &error)) {
fprintf (stderr, "Insert failed: %s\n", error.message);
}
bson_destroy (doc);
doc = bson_new ();
BSON_APPEND_OID (doc, "_id", &oid);
if (!mongoc_collection_delete_one (
collection, doc, NULL, NULL, &error)) {
fprintf (stderr, "Delete failed: %s\n", error.message);
}
bson_destroy (doc);
mongoc_collection_destroy (collection);
mongoc_client_destroy (client);
mongoc_cleanup ();
return 0; }


Compile the code and run it:

$ gcc -o delete delete.c $(pkg-config --cflags --libs libmongoc-1.0)
$ ./delete


On Windows:

C:\> cl.exe /IC:\mongo-c-driver\include\libbson-1.0 /IC:\mongo-c-driver\include\libmongoc-1.0 delete.c
C:\> delete


Use the MongoDB shell to prove that the documents have been removed successfully.

$ mongo
MongoDB shell version: 3.0.6
connecting to: test
> use mydb
switched to db mydb
> db.mycoll.count({"hello" : "world"})
0
>


Counting Documents

Counting the number of documents in a MongoDB collection is similar to performing a find operation. This example counts the number of documents matching {"hello" : "world"} in the database "mydb" and collection "mycoll".

#include <bson/bson.h>
#include <mongoc/mongoc.h>
#include <stdio.h>
int
main (int argc, char *argv[])
{

mongoc_client_t *client;
mongoc_collection_t *collection;
bson_error_t error;
bson_t *doc;
int64_t count;
mongoc_init ();
client =
mongoc_client_new ("mongodb://localhost:27017/?appname=count-example");
collection = mongoc_client_get_collection (client, "mydb", "mycoll");
doc = bson_new_from_json (
(const uint8_t *) "{\"hello\" : \"world\"}", -1, &error);
count = mongoc_collection_count (
collection, MONGOC_QUERY_NONE, doc, 0, 0, NULL, &error);
if (count < 0) {
fprintf (stderr, "%s\n", error.message);
} else {
printf ("%" PRId64 "\n", count);
}
bson_destroy (doc);
mongoc_collection_destroy (collection);
mongoc_client_destroy (client);
mongoc_cleanup ();
return 0; }


Compile the code and run it:

$ gcc -o count count.c $(pkg-config --cflags --libs libmongoc-1.0)
$ ./count
1


On Windows:

C:\> cl.exe /IC:\mongo-c-driver\include\libbson-1.0 /IC:\mongo-c-driver\include\libmongoc-1.0 count.c
C:\> count
1


Executing Commands

The driver provides helper functions for executing MongoDB commands on client, database and collection structures. These functions return cursors; the _simple variants return booleans indicating success or failure.

This example executes the collStats command against the collection "mycoll" in database "mydb".

#include <bson/bson.h>
#include <mongoc/mongoc.h>
#include <stdio.h>
int
main (int argc, char *argv[])
{

mongoc_client_t *client;
mongoc_collection_t *collection;
bson_error_t error;
bson_t *command;
bson_t reply;
char *str;
mongoc_init ();
client = mongoc_client_new (
"mongodb://localhost:27017/?appname=executing-example");
collection = mongoc_client_get_collection (client, "mydb", "mycoll");
command = BCON_NEW ("collStats", BCON_UTF8 ("mycoll"));
if (mongoc_collection_command_simple (
collection, command, NULL, &reply, &error)) {
str = bson_as_canonical_extended_json (&reply, NULL);
printf ("%s\n", str);
bson_free (str);
} else {
fprintf (stderr, "Failed to run command: %s\n", error.message);
}
bson_destroy (command);
bson_destroy (&reply);
mongoc_collection_destroy (collection);
mongoc_client_destroy (client);
mongoc_cleanup ();
return 0; }


Compile the code and run it:

$ gcc -o executing executing.c $(pkg-config --cflags --libs libmongoc-1.0)
$ ./executing
{ "ns" : "mydb.mycoll", "count" : 1, "size" : 48, "avgObjSize" : 48, "numExtents" : 1, "storageSize" : 8192,
"lastExtentSize" : 8192.000000, "paddingFactor" : 1.000000, "userFlags" : 1, "capped" : false, "nindexes" : 1,
"indexDetails" : {  }, "totalIndexSize" : 8176, "indexSizes" : { "_id_" : 8176 }, "ok" : 1.000000 }


On Windows:

C:\> cl.exe /IC:\mongo-c-driver\include\libbson-1.0 /IC:\mongo-c-driver\include\libmongoc-1.0 executing.c
C:\> executing
{ "ns" : "mydb.mycoll", "count" : 1, "size" : 48, "avgObjSize" : 48, "numExtents" : 1, "storageSize" : 8192,
"lastExtentSize" : 8192.000000, "paddingFactor" : 1.000000, "userFlags" : 1, "capped" : false, "nindexes" : 1,
"indexDetails" : {  }, "totalIndexSize" : 8176, "indexSizes" : { "_id_" : 8176 }, "ok" : 1.000000 }


Threading

The MongoDB C Driver is thread-unaware in the vast majority of its operations. This means it is up to the programmer to guarantee thread-safety.

However, mongoc_client_pool_t is thread-safe and is used to fetch a mongoc_client_t in a thread-safe manner. After retrieving a client from the pool, the client structure should be considered owned by the calling thread. When the thread is finished, the client should be placed back into the pool.

example-pool.c

/* gcc example-pool.c -o example-pool $(pkg-config --cflags --libs libmongoc-1.0) */
/* ./example-pool [CONNECTION_STRING] */

#include <mongoc/mongoc.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t mutex;
static bool in_shutdown = false;

static void *
worker (void *data)
{
mongoc_client_pool_t *pool = data;
mongoc_client_t *client;
bson_t ping = BSON_INITIALIZER;
bson_error_t error;
bool r;
BSON_APPEND_INT32 (&ping, "ping", 1);
while (true) {
client = mongoc_client_pool_pop (pool);
/* Do something with client. If you are writing an HTTP server, you
* probably only want to hold onto the client for the portion of the
* request performing database queries.
*/
r = mongoc_client_command_simple (
client, "admin", &ping, NULL, NULL, &error);
if (!r) {
fprintf (stderr, "%s\n", error.message);
}
mongoc_client_pool_push (pool, client);
pthread_mutex_lock (&mutex);
if (in_shutdown || !r) {
pthread_mutex_unlock (&mutex);
break;
}
pthread_mutex_unlock (&mutex);
}
bson_destroy (&ping);
return NULL;
}

int
main (int argc, char *argv[])
{
const char *uri_string = "mongodb://127.0.0.1/?appname=pool-example";
mongoc_uri_t *uri;
bson_error_t error;
mongoc_client_pool_t *pool;
pthread_t threads[10];
unsigned i;
void *ret;
pthread_mutex_init (&mutex, NULL);
mongoc_init ();
if (argc > 1) {
uri_string = argv[1];
}
uri = mongoc_uri_new_with_error (uri_string, &error);
if (!uri) {
fprintf (stderr,
"failed to parse URI: %s\n"
"error message: %s\n",
uri_string,
error.message);
return EXIT_FAILURE;
}
pool = mongoc_client_pool_new (uri);
mongoc_client_pool_set_error_api (pool, 2);
for (i = 0; i < 10; i++) {
pthread_create (&threads[i], NULL, worker, pool);
}
sleep (10);
pthread_mutex_lock (&mutex);
in_shutdown = true;
pthread_mutex_unlock (&mutex);
for (i = 0; i < 10; i++) {
pthread_join (threads[i], &ret);
}
mongoc_client_pool_destroy (pool);
mongoc_uri_destroy (uri);
mongoc_cleanup ();
return EXIT_SUCCESS; }


Next Steps

To find information on advanced topics, browse the rest of the C driver guide or the official MongoDB documentation.

For help with common issues, consult the Troubleshooting page. To report a bug or request a new feature, follow these instructions.

Authentication

This guide covers the use of authentication options with the MongoDB C Driver. Ensure that the MongoDB server is also properly configured for authentication before making a connection. For more information, see the MongoDB security documentation.

The MongoDB C driver supports several authentication mechanisms through the use of MongoDB connection URIs.

By default, if a username and password are provided as part of the connection string (and an optional authentication database), they are used to connect via the default authentication mechanism of the server.

To select a specific authentication mechanism other than the default, see the list of supported mechanisms below.

mongoc_client_t *client = mongoc_client_new ("mongodb://user:password@localhost/?authSource=mydb");


Currently supported values for the authMechanism connection string option are:

  • SCRAM-SHA-1
  • MONGODB-CR (deprecated)
  • GSSAPI
  • PLAIN
  • X509
  • MONGODB-AWS

Basic Authentication (SCRAM-SHA-256)

MongoDB 4.0 introduces support for authenticating using the SCRAM protocol with the more secure SHA-256 hash described in RFC 7677. Using this authentication mechanism means that the password is never actually sent over the wire when authenticating, but rather a computed proof that the client password is the same as the password the server knows. In MongoDB 4.0, the C driver can determine the correct default authentication mechanism for users with stored SCRAM-SHA-1 and SCRAM-SHA-256 credentials:

mongoc_client_t *client =  mongoc_client_new ("mongodb://user:password@localhost/?authSource=mydb");
/* the correct authMechanism is negotiated between the driver and server. */


Alternatively, SCRAM-SHA-256 can be explicitly specified as an authMechanism.
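
For example (a minimal sketch; the username, password, and database are placeholders):

mongoc_client_t *client = mongoc_client_new ("mongodb://user:password@localhost/?authMechanism=SCRAM-SHA-256&authSource=mydb");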


Passwords for SCRAM-SHA-256 undergo the preprocessing step known as SASLPrep specified in RFC 4013. SASLPrep will only be performed for passwords containing non-ASCII characters. SASLPrep requires libicu. If libicu is not available, attempting to authenticate over SCRAM-SHA-256 with non-ASCII passwords will result in error.

Usernames never undergo SASLPrep.

By default, when building the C driver libicu is linked if available. This can be changed with the ENABLE_ICU cmake option. To specify an installation path of libicu, specify ICU_ROOT as a cmake option. See the FindICU documentation for more information.

Basic Authentication (SCRAM-SHA-1)

The default authentication mechanism before MongoDB 4.0 is SCRAM-SHA-1 (RFC 5802). Using this authentication mechanism means that the password is never actually sent over the wire when authenticating, but rather a computed proof that the client password is the same as the password the server knows.

mongoc_client_t *client = mongoc_client_new ("mongodb://user:password@localhost/?authMechanism=SCRAM-SHA-1&authSource=mydb");


NOTE:

SCRAM-SHA-1 authenticates against the admin database by default. If the user is created in another database, then specifying the authSource is required.


Legacy Authentication (MONGODB-CR)

The MONGODB-CR authMechanism is deprecated and will no longer function in MongoDB 4.0. Instead, specify no authMechanism and the driver will use an authentication mechanism compatible with your server.

GSSAPI (Kerberos) Authentication

NOTE:

On UNIX-like environments, Kerberos support requires compiling the driver against cyrus-sasl.

On Windows, Kerberos support requires compiling the driver against Windows Native SSPI or cyrus-sasl. The default configuration of the driver will use Windows Native SSPI.

To modify the default configuration, use the cmake option ENABLE_SASL.



GSSAPI (Kerberos) authentication is available in the Enterprise Edition of MongoDB. To authenticate using GSSAPI, the MongoDB C driver must be installed with SASL support.

On UNIX-like environments, run the kinit command before using the following authentication methods:

$ kinit mongodbuser@EXAMPLE.COM
mongodbuser@EXAMPLE.COM's Password:
$ klist
Credentials cache: FILE:/tmp/krb5cc_1000

  Principal: mongodbuser@EXAMPLE.COM

  Issued                Expires               Principal
  Feb  9 13:48:51 2013  Feb  9 23:48:51 2013  krbtgt/EXAMPLE.COM@EXAMPLE.COM


Now authenticate using the MongoDB URI. GSSAPI authenticates against the $external virtual database, so a database does not need to be specified in the URI. Note that the Kerberos principal must be URL-encoded:

mongoc_client_t *client;
client = mongoc_client_new ("mongodb://mongodbuser%40EXAMPLE.COM@mongo-server.example.com/?authMechanism=GSSAPI");


NOTE:

GSSAPI authenticates against the $external database, so specifying the authSource database is not required.


The driver supports these GSSAPI properties:

  • CANONICALIZE_HOST_NAME: This might be required with Cyrus-SASL when the hosts report different hostnames than what is used in the Kerberos database. The default is "false".
  • SERVICE_NAME: Use a different service name than the default, "mongodb".

Set properties in the URL:

mongoc_client_t *client;
client = mongoc_client_new (
   "mongodb://mongodbuser%40EXAMPLE.COM@mongo-server.example.com/?authMechanism=GSSAPI&"
   "authMechanismProperties=SERVICE_NAME:other,CANONICALIZE_HOST_NAME:true");


If you encounter errors such as Invalid net address, check if the application is behind a NAT (Network Address Translation) firewall. If so, create a ticket that uses forwardable and addressless Kerberos tickets. This can be done by passing -f -A to kinit.

$ kinit -f -A mongodbuser@EXAMPLE.COM


SASL Plain Authentication

NOTE:

The MongoDB C Driver must be compiled with SASL support in order to use SASL PLAIN authentication.


MongoDB Enterprise Edition supports the SASL PLAIN authentication mechanism, initially intended for delegating authentication to an LDAP server. Using the SASL PLAIN mechanism is very similar to the challenge response mechanism with usernames and passwords. This authentication mechanism uses the $external virtual database for LDAP support:

NOTE:

SASL PLAIN is a clear-text authentication mechanism. It is strongly recommended to connect to MongoDB using TLS with certificate validation when using the PLAIN mechanism.


mongoc_client_t *client;
client = mongoc_client_new ("mongodb://user:password@example.com/?authMechanism=PLAIN");


PLAIN authenticates against the $external database, so specifying the authSource database is not required.

X.509 Certificate Authentication

NOTE:

The MongoDB C Driver must be compiled with TLS support for X.509 authentication support. Once this is done, start a server with the following options:

$ mongod --tlsMode requireTLS --tlsCertificateKeyFile server.pem --tlsCAFile ca.pem




The MONGODB-X509 mechanism authenticates a username derived from the distinguished subject name of the X.509 certificate presented by the driver during TLS negotiation. This authentication method requires the use of TLS connections with certificate validation.

mongoc_client_t *client;
mongoc_ssl_opt_t ssl_opts = { 0 };
ssl_opts.pem_file = "mycert.pem";
ssl_opts.pem_pwd = "mycertpassword";
ssl_opts.ca_file = "myca.pem";
ssl_opts.ca_dir = "trust_dir";
ssl_opts.weak_cert_validation = false;
client = mongoc_client_new ("mongodb://x509_derived_username@localhost/?authMechanism=MONGODB-X509");
mongoc_client_set_ssl_opts (client, &ssl_opts);


MONGODB-X509 authenticates against the $external database, so specifying the authSource database is not required. For more information on the x509_derived_username, see the MongoDB server x.509 tutorial.

NOTE:

The MongoDB C Driver will attempt to determine the x509 derived username when none is provided, and as of MongoDB 3.4 providing the username is not required at all.
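
For example, relying on the derived username (a minimal sketch; the certificate paths are placeholders):

mongoc_client_t *client;
mongoc_ssl_opt_t ssl_opts = { 0 };
ssl_opts.pem_file = "mycert.pem";
ssl_opts.ca_file = "myca.pem";
client = mongoc_client_new ("mongodb://localhost/?authMechanism=MONGODB-X509&tls=true");
mongoc_client_set_ssl_opts (client, &ssl_opts);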


Authentication via AWS IAM

The MONGODB-AWS mechanism authenticates to MongoDB servers with credentials provided by AWS Identity and Access Management (IAM).

To authenticate, create a user with an associated Amazon Resource Name (ARN) on the $external database, and specify the MONGODB-AWS authMechanism in the URI.

mongoc_uri_t *uri = mongoc_uri_new ("mongodb://localhost/?authMechanism=MONGODB-AWS");


MONGODB-AWS always authenticates against the $external database, so specifying the authSource database is not required.

Credentials include the access key id, secret access key, and optional session token. They may be obtained in any of the following ways.

AWS credentials via URI

Credentials may be passed directly in the URI as username/password.

mongoc_uri_t *uri = mongoc_uri_new ("mongodb://<access key id>:<secret access key>localhost/?authMechanism=MONGODB-AWS");


This may include a session token passed with authMechanismProperties.

mongoc_uri_t *uri = mongoc_uri_new ("mongodb://<access key id>:<secret access key>localhost/?authMechanism=MONGODB-AWS&authMechanismProperties=AWS_SESSION_TOKEN:<token>");


AWS credentials via environment

If credentials are not passed through the URI, libmongoc will check for the following environment variables.

  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
  • AWS_SESSION_TOKEN (optional)
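
If these variables are set in the environment, no credentials need to appear in the URI; a minimal sketch:

/* Assumes AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (and optionally
 * AWS_SESSION_TOKEN) are set in the environment before the process starts. */
mongoc_uri_t *uri = mongoc_uri_new ("mongodb://localhost/?authMechanism=MONGODB-AWS");
mongoc_client_t *client = mongoc_client_new_from_uri (uri);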

AWS Credentials via ECS

If credentials are not passed in the URI or with environment variables, libmongoc will check if the environment variable AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is set, and if so, attempt to retrieve temporary credentials from the ECS task metadata by querying a link local address.

AWS Credentials via EC2

If credentials are not passed in the URI or with environment variables, and the environment variable AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is not set, libmongoc will attempt to retrieve temporary credentials from the EC2 machine metadata by querying link local addresses.

Basic Troubleshooting

Troubleshooting Checklist

The following is a short list of things to check when you have a problem.

  • Did you call mongoc_init() in main()? If not, you will likely see a segfault.
  • Have you leaked any clients or cursors as can be found with mongoc-stat <PID>?
  • Have packets been delivered to the server? See egress bytes from mongoc-stat <PID>.
  • Does valgrind show any leaks? Ensure you call mongoc_cleanup() at the end of your process to cleanup lingering allocations from the MongoDB C driver.
  • If compiling your own copy of MongoDB C Driver, consider using the cmake option -DENABLE_TRACING=ON to enable function tracing and hex dumps of network packets to STDERR and STDOUT.

Performance Counters

The MongoDB C driver comes with an optional unique feature to help developers and sysadmins troubleshoot problems in production. Performance counters are available for each process using the driver. If available, the counters can be accessed outside of the application process via a shared memory segment. This means that you can graph statistics about your application process easily from tools like Munin or Nagios. Your author often uses watch --interval=0.5 -d mongoc-stat $PID to monitor an application.

Performance counters are only available on Linux platforms and macOS arm64 platforms supporting shared memory segments. On supported platforms they are enabled by default. Applications can be built without the counters by specifying the cmake option -DENABLE_SHM_COUNTERS=OFF. Additionally, if performance counters are already compiled, they can be disabled at runtime by specifying the environment variable MONGOC_DISABLE_SHM.

Performance counters keep track of the following:

  • Active and Disposed Cursors
  • Active and Disposed Clients, Client Pools, and Socket Streams.
  • Number of operations sent and received, by type.
  • Bytes transferred and received.
  • Authentication successes and failures.
  • Number of wire protocol errors.

To access counters for a given process, simply provide the process id to the mongoc-stat program installed with the MongoDB C Driver.

$ mongoc-stat 22203

Operations : Egress Total : The number of sent operations. : 13247
Operations : Ingress Total : The number of received operations. : 13246
Operations : Egress Queries : The number of sent Query operations. : 13247
Operations : Ingress Queries : The number of received Query operations. : 0
Operations : Egress GetMore : The number of sent GetMore operations. : 0
Operations : Ingress GetMore : The number of received GetMore operations. : 0
Operations : Egress Insert : The number of sent Insert operations. : 0
Operations : Ingress Insert : The number of received Insert operations. : 0
Operations : Egress Delete : The number of sent Delete operations. : 0
Operations : Ingress Delete : The number of received Delete operations. : 0
Operations : Egress Update : The number of sent Update operations. : 0
Operations : Ingress Update : The number of received Update operations. : 0
Operations : Egress KillCursors : The number of sent KillCursors operations. : 0
Operations : Ingress KillCursors : The number of received KillCursors operations. : 0
Operations : Egress Msg : The number of sent Msg operations. : 0
Operations : Ingress Msg : The number of received Msg operations. : 0
Operations : Egress Reply : The number of sent Reply operations. : 0
Operations : Ingress Reply : The number of received Reply operations. : 13246
Cursors : Active : The number of active cursors. : 1
Cursors : Disposed : The number of disposed cursors. : 13246
Clients : Active : The number of active clients. : 1
Clients : Disposed : The number of disposed clients. : 0
Streams : Active : The number of active streams. : 1
Streams : Disposed : The number of disposed streams. : 0
Streams : Egress Bytes : The number of bytes sent. : 794931
Streams : Ingress Bytes : The number of bytes received. : 589694
Streams : N Socket Timeouts : The number of socket timeouts. : 0
Client Pools : Active : The number of active client pools. : 1
Client Pools : Disposed : The number of disposed client pools. : 0
Protocol : Ingress Errors : The number of protocol errors on ingress. : 0
Auth : Failures : The number of failed authentication requests. : 0
Auth : Success : The number of successful authentication requests. : 0


Submitting a Bug Report

Think you've found a bug? Want to see a new feature in the MongoDB C driver? Please open a case in our issue management tool, JIRA:

  • Create an account and login.
  • Navigate to the CDRIVER project.
  • Click Create Issue - Please provide as much information as possible about the issue type and how to reproduce it.

Bug reports in JIRA for all driver projects (i.e. CDRIVER, CSHARP, JAVA) and the Core Server (i.e. SERVER) project are public.

Guides

Configuring TLS

Configuration with URI options

Enable TLS by including tls=true in the URI.

mongoc_uri_t *uri = mongoc_uri_new ("mongodb://localhost:27017/");
mongoc_uri_set_option_as_bool (uri, MONGOC_URI_TLS, true);
mongoc_client_t *client = mongoc_client_new_from_uri (uri);


The following URI options may be used to further configure TLS:

  • MONGOC_URI_TLS (tls): {true|false}, indicating if TLS must be used.
  • MONGOC_URI_TLSCERTIFICATEKEYFILE (tlscertificatekeyfile): Path to a PEM formatted private key, with its public certificate concatenated at the end.
  • MONGOC_URI_TLSCERTIFICATEKEYFILEPASSWORD (tlscertificatekeypassword): The password, if any, used to unlock the encrypted private key.
  • MONGOC_URI_TLSCAFILE (tlscafile): One Certificate Authority, or a bundle of them, that should be considered trusted.
  • MONGOC_URI_TLSALLOWINVALIDCERTIFICATES (tlsallowinvalidcertificates): Accept and ignore certificate verification errors (e.g. untrusted issuer, expired, etc.).
  • MONGOC_URI_TLSALLOWINVALIDHOSTNAMES (tlsallowinvalidhostnames): Ignore hostname verification of the certificate (e.g. a man-in-the-middle attack using a valid certificate issued for another hostname).
  • MONGOC_URI_TLSINSECURE (tlsinsecure): {true|false}, indicating if insecure TLS options should be used. Currently this implies MONGOC_URI_TLSALLOWINVALIDCERTIFICATES and MONGOC_URI_TLSALLOWINVALIDHOSTNAMES.
  • MONGOC_URI_TLSDISABLECERTIFICATEREVOCATIONCHECK (tlsdisablecertificaterevocationcheck): {true|false}, indicating if revocation checking (CRL / OCSP) should be disabled.
  • MONGOC_URI_TLSDISABLEOCSPENDPOINTCHECK (tlsdisableocspendpointcheck): {true|false}, indicating if OCSP responder endpoints should not be requested when an OCSP response is not stapled.

Configuration with mongoc_ssl_opt_t

Alternatively, the mongoc_ssl_opt_t struct may be used to configure TLS with mongoc_client_set_ssl_opts() or mongoc_client_pool_set_ssl_opts(). Most of the configurable options can be set using the Connection String URI.

mongoc_ssl_opt_t key       URI key
pem_file                   tlsClientCertificateKeyFile
pem_pwd                    tlsClientCertificateKeyPassword
ca_file                    tlsCAFile
weak_cert_validation       tlsAllowInvalidCertificates
allow_invalid_hostname     tlsAllowInvalidHostnames

The only exclusions are crl_file and ca_dir. Those may only be set with mongoc_ssl_opt_t.
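
For example, to configure those two options along with a CA file (a minimal sketch; the paths are placeholders):

mongoc_ssl_opt_t ssl_opts = { 0 };
ssl_opts.ca_file = "/path/to/ca.pem";
ssl_opts.ca_dir = "/path/to/ca-directory";      /* only settable via mongoc_ssl_opt_t */
ssl_opts.crl_file = "/path/to/revocations.pem"; /* only settable via mongoc_ssl_opt_t */
mongoc_client_t *client = mongoc_client_new ("mongodb://localhost:27017/?tls=true");
mongoc_client_set_ssl_opts (client, &ssl_opts);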

Client Authentication

When MongoDB is started with TLS enabled, it will by default require the client to provide a client certificate issued by a certificate authority specified by --tlsCAFile, or an authority trusted by the native certificate store in use on the server.

To provide the client certificate, set the tlsCertificateKeyFile in the URI to a PEM armored certificate file.

mongoc_uri_t *uri = mongoc_uri_new ("mongodb://localhost:27017/");
mongoc_uri_set_option_as_bool (uri, MONGOC_URI_TLS, true);
mongoc_uri_set_option_as_utf8 (uri, MONGOC_URI_TLSCERTIFICATEKEYFILE, "/path/to/client-certificate.pem");
mongoc_client_t *client = mongoc_client_new_from_uri (uri);


Server Certificate Verification

The MongoDB C Driver automatically verifies the validity of the server certificate: that it was issued by a configured Certificate Authority, that the hostname matches, and that it has not expired.

To override this behavior, it is possible to disable hostname validation, OCSP endpoint revocation checking, revocation checking entirely, and to allow invalid certificates.

This behavior is controlled using the tlsAllowInvalidHostnames, tlsDisableOCSPEndpointCheck, tlsDisableCertificateRevocationCheck, and tlsAllowInvalidCertificates options respectively. By default, all are set to false.

It is not recommended to change these defaults as it exposes the client to Man In The Middle attacks (when tlsAllowInvalidHostnames is set), invalid certificates (when tlsAllowInvalidCertificates is set), or potentially revoked certificates (when tlsDisableOCSPEndpointCheck or tlsDisableCertificateRevocationCheck are set).
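
If one of these checks must be relaxed, for example against a self-signed test deployment, the corresponding option can be set on the URI; a minimal sketch, not recommended for production:

mongoc_uri_t *uri = mongoc_uri_new ("mongodb://localhost:27017/");
mongoc_uri_set_option_as_bool (uri, MONGOC_URI_TLS, true);
/* Testing only: accept certificates whose hostname does not match. */
mongoc_uri_set_option_as_bool (uri, MONGOC_URI_TLSALLOWINVALIDHOSTNAMES, true);
mongoc_client_t *client = mongoc_client_new_from_uri (uri);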

Supported Libraries

By default, libmongoc will attempt to find a supported TLS library and enable TLS support. This is controlled by the cmake flag ENABLE_SSL, which is set to AUTO by default. Valid values are:

  • AUTO the default behavior. Link to the system's native TLS library, or attempt to find OpenSSL.
  • DARWIN link to Secure Transport, the native TLS library on macOS.
  • WINDOWS link to Secure Channel, the native TLS library on Windows.
  • OPENSSL link to OpenSSL (libssl). An optional install path may be specified with OPENSSL_ROOT.
  • LIBRESSL link to LibreSSL's libtls. (LibreSSL's compatible libssl may be linked to by setting OPENSSL).
  • OFF disable TLS support.

OpenSSL

The MongoDB C Driver uses OpenSSL, if available, on Linux and Unix platforms (besides macOS). Industry best practices and some regulations require the use of TLS 1.1 or newer, which requires at least OpenSSL 1.0.1. Check your OpenSSL version like so:

$ openssl version


Ensure your system's OpenSSL is a recent version (at least 1.0.1), or install a recent version in a non-system path and build against it with:

cmake -DOPENSSL_ROOT_DIR=/absolute/path/to/openssl


When compiled against OpenSSL, the driver will attempt to load the system default certificate store, as configured by the distribution. That can be overridden by setting the tlsCAFile URI option or with the fields ca_file and ca_dir in the mongoc_ssl_opt_t.
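
For example, a CA file can be set through the URI (a minimal sketch; the path is a placeholder):

mongoc_uri_t *uri = mongoc_uri_new ("mongodb://localhost:27017/?tls=true");
mongoc_uri_set_option_as_utf8 (uri, MONGOC_URI_TLSCAFILE, "/path/to/ca.pem");
mongoc_client_t *client = mongoc_client_new_from_uri (uri);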

The Online Certificate Status Protocol (OCSP) (see RFC 6960) is fully supported when using OpenSSL 1.0.1+ with the following notes:

When a crl_file is set with mongoc_ssl_opt_t, and the crl_file revokes the server's certificate, the certificate is considered revoked (even if the certificate has a valid stapled OCSP response)

LibreSSL / libtls

The MongoDB C Driver supports LibreSSL through the use of OpenSSL compatibility checks when configured to compile against openssl. It also supports the new libtls library when configured to build against libressl.

When compiled against LibreSSL, the crl_file option of a mongoc_ssl_opt_t is not supported, and will issue an error if used.

Setting tlsDisableOCSPEndpointCheck and tlsDisableCertificateRevocationCheck has no effect.

The Online Certificate Status Protocol (OCSP) (see RFC 6960) is partially supported with the following notes:

  • The Must-Staple extension (see RFC 7633) is ignored. Connection may continue if a Must-Staple certificate is presented with no stapled response (unless the client receives a revoked response from an OCSP responder).
  • Connection will continue if a Must-Staple certificate is presented without a stapled response and the OCSP responder is down.

Native TLS Support on Windows (Secure Channel)

The MongoDB C Driver supports the Windows native TLS library (Secure Channel, or SChannel), and its native crypto library (Cryptography API: Next Generation, or CNG).

When compiled against the Windows native libraries, the ca_dir option of a mongoc_ssl_opt_t is not supported, and will issue an error if used.

Encrypted PEM files (e.g., setting tlsCertificateKeyPassword) are also not supported, and will result in an error when attempting to load them.

When tlsCAFile is set, the driver will only allow server certificates issued by the authority (or authorities) provided. When no tlsCAFile is set, the driver will look up the Certificate Authority using the System Local Machine Root certificate store to confirm the provided certificate.

When crl_file is set with mongoc_ssl_opt_t, the driver will import the revocation list to the System Local Machine Root certificate store.

Setting tlsDisableOCSPEndpointCheck has no effect.

The Online Certificate Status Protocol (OCSP) (see RFC 6960) is partially supported with the following notes:

  • The Must-Staple extension (see RFC 7633) is ignored. Connection may continue if a Must-Staple certificate is presented with no stapled response (unless the client receives a revoked response from an OCSP responder).
  • When a crl_file is set with mongoc_ssl_opt_t, and the crl_file revokes the server's certificate, the OCSP response takes precedence. E.g. if the server presents a certificate with a valid stapled OCSP response, the certificate is considered valid even if the crl_file marks it as revoked.
  • Connection will continue if a Must-Staple certificate is presented without a stapled response and the OCSP responder is down.

Native TLS Support on macOS / Darwin (Secure Transport)

The MongoDB C Driver supports the Darwin (OS X, macOS, iOS, etc.) native TLS library (Secure Transport), and its native crypto library (Common Crypto, or CC).

When compiled against Secure Transport, the ca_dir and crl_file options of a mongoc_ssl_opt_t are not supported. An error is issued if either are used.

When tlsCAFile is set, the driver will only allow server certificates issued by the authority (or authorities) provided. When no tlsCAFile is set, the driver will use the Certificate Authorities in the currently unlocked keychains.

Setting tlsDisableOCSPEndpointCheck and tlsDisableCertificateRevocationCheck has no effect.

The Online Certificate Status Protocol (OCSP) (see RFC 6960) is partially supported with the following notes.

  • The Must-Staple extension (see RFC 7633) is ignored. Connection may continue if a Must-Staple certificate is presented with no stapled response (unless the client receives a revoked response from an OCSP responder).
  • Connection will continue if a Must-Staple certificate is presented without a stapled response and the OCSP responder is down.

Common Tasks

Drivers for some other languages provide helper functions to perform certain common tasks. In the C Driver we must explicitly build commands to send to the server.

Setup

First we'll write some code to insert sample data:

doc-common-insert.c

/* Don't try to compile this file on its own. It's meant to be #included
   by example code */

/* Insert some sample data */
bool
insert_data (mongoc_collection_t *collection)
{
mongoc_bulk_operation_t *bulk;
enum N { ndocs = 4 };
bson_t *docs[ndocs];
bson_error_t error;
int i = 0;
bool ret;
bulk = mongoc_collection_create_bulk_operation_with_opts (collection, NULL);
docs[0] = BCON_NEW ("x", BCON_DOUBLE (1.0), "tags", "[", "dog", "cat", "]");
docs[1] = BCON_NEW ("x", BCON_DOUBLE (2.0), "tags", "[", "cat", "]");
docs[2] = BCON_NEW (
"x", BCON_DOUBLE (2.0), "tags", "[", "mouse", "cat", "dog", "]");
docs[3] = BCON_NEW ("x", BCON_DOUBLE (3.0), "tags", "[", "]");
for (i = 0; i < ndocs; i++) {
mongoc_bulk_operation_insert (bulk, docs[i]);
bson_destroy (docs[i]);
docs[i] = NULL;
}
ret = mongoc_bulk_operation_execute (bulk, NULL, &error);
if (!ret) {
fprintf (stderr, "Error inserting data: %s\n", error.message);
}
mongoc_bulk_operation_destroy (bulk);
return ret;
}

/* A helper which we'll use a lot later on */
void
print_res (const bson_t *reply)
{
char *str;
BSON_ASSERT (reply);
str = bson_as_canonical_extended_json (reply, NULL);
printf ("%s\n", str);
bson_free (str); }


explain Command

This is how to use the explain command in MongoDB 3.2+:

explain.c

bool
explain (mongoc_collection_t *collection)
{

bson_t *command;
bson_t reply;
bson_error_t error;
bool res;
command = BCON_NEW ("explain",
"{",
"find",
BCON_UTF8 (COLLECTION_NAME),
"filter",
"{",
"x",
BCON_INT32 (1),
"}",
"}");
res = mongoc_collection_command_simple (
collection, command, NULL, &reply, &error);
if (!res) {
fprintf (stderr, "Error with explain: %s\n", error.message);
goto cleanup;
}
/* Do something with the reply */
print_res (&reply);

cleanup:
bson_destroy (&reply);
bson_destroy (command);
return res; }


Running the Examples

common-operations.c

/*

* Copyright 2016 MongoDB, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

#include <mongoc/mongoc.h>
#include <stdio.h>

const char *COLLECTION_NAME = "things";

#include "../doc-common-insert.c"
#include "explain.c"

int
main (int argc, char *argv[])
{
mongoc_database_t *database = NULL;
mongoc_client_t *client = NULL;
mongoc_collection_t *collection = NULL;
mongoc_uri_t *uri = NULL;
bson_error_t error;
char *host_and_port;
int res = 0;
if (argc < 2 || argc > 3) {
fprintf (stderr,
"usage: %s MONGOD-1-CONNECTION-STRING "
"[MONGOD-2-HOST-NAME:MONGOD-2-PORT]\n",
argv[0]);
fprintf (stderr,
"MONGOD-1-CONNECTION-STRING can be "
"of the following forms:\n");
fprintf (stderr, "localhost\t\t\t\tlocal machine\n");
fprintf (stderr, "localhost:27018\t\t\t\tlocal machine on port 27018\n");
fprintf (stderr,
"mongodb://user:pass@localhost:27017\t"
"local machine on port 27017, and authenticate with username "
"user and password pass\n");
return EXIT_FAILURE;
}
mongoc_init ();
if (strncmp (argv[1], "mongodb://", 10) == 0) {
host_and_port = bson_strdup (argv[1]);
} else {
host_and_port = bson_strdup_printf ("mongodb://%s", argv[1]);
}
uri = mongoc_uri_new_with_error (host_and_port, &error);
if (!uri) {
fprintf (stderr,
"failed to parse URI: %s\n"
"error message: %s\n",
host_and_port,
error.message);
res = EXIT_FAILURE;
goto cleanup;
}
client = mongoc_client_new_from_uri (uri);
if (!client) {
res = EXIT_FAILURE;
goto cleanup;
}
mongoc_client_set_error_api (client, 2);
database = mongoc_client_get_database (client, "test");
collection = mongoc_database_get_collection (database, COLLECTION_NAME);
printf ("Inserting data\n");
if (!insert_data (collection)) {
res = EXIT_FAILURE;
goto cleanup;
}
printf ("explain\n");
if (!explain (collection)) {
res = EXIT_FAILURE;
goto cleanup;
}

cleanup:
if (collection) {
mongoc_collection_destroy (collection);
}
if (database) {
mongoc_database_destroy (database);
}
if (client) {
mongoc_client_destroy (client);
}
if (uri) {
mongoc_uri_destroy (uri);
}
bson_free (host_and_port);
mongoc_cleanup ();
return res; }


First launch two separate instances of mongod (must be done from separate shells):

$ mongod


$ mkdir /tmp/db2
$ mongod --dbpath /tmp/db2 --port 27018 # second instance


Now compile and run the example program:

$ cd examples/common_operations/
$ gcc -Wall -o example common-operations.c $(pkg-config --cflags --libs libmongoc-1.0)
$ ./example localhost:27017 localhost:27018
Inserting data
explain
{

"executionStats" : {
"allPlansExecution" : [],
"executionStages" : {
"advanced" : 19,
"direction" : "forward" ,
"docsExamined" : 76,
"executionTimeMillisEstimate" : 0,
"filter" : {
"x" : {
"$eq" : 1
}
},
"invalidates" : 0,
"isEOF" : 1,
"nReturned" : 19,
"needTime" : 58,
"needYield" : 0,
"restoreState" : 0,
"saveState" : 0,
"stage" : "COLLSCAN" ,
"works" : 78
},
"executionSuccess" : true,
"executionTimeMillis" : 0,
"nReturned" : 19,
"totalDocsExamined" : 76,
"totalKeysExamined" : 0
},
"ok" : 1,
"queryPlanner" : {
"indexFilterSet" : false,
"namespace" : "test.things",
"parsedQuery" : {
"x" : {
"$eq" : 1
}
},
"plannerVersion" : 1,
"rejectedPlans" : [],
"winningPlan" : {
"direction" : "forward" ,
"filter" : {
"x" : {
"$eq" : 1
}
},
"stage" : "COLLSCAN"
}
},
"serverInfo" : {
"gitVersion" : "05552b562c7a0b3143a729aaa0838e558dc49b25" ,
"host" : "MacBook-Pro-57.local",
"port" : 27017,
"version" : "3.2.6"
} }


Advanced Connections

The following guide contains information specific to certain types of MongoDB configurations.

For an example of connecting to a simple standalone server, see the Tutorial. To establish a connection with authentication options enabled, see the Authentication page.

Connecting to a Replica Set

Connecting to a replica set is much like connecting to a standalone MongoDB server. Simply specify the replica set name using the ?replicaSet=myreplset URI option.

#include <bson/bson.h>
#include <mongoc/mongoc.h>
int
main (int argc, char *argv[])
{

mongoc_client_t *client;
mongoc_init ();
/* Create our MongoDB Client */
client = mongoc_client_new (
"mongodb://host01:27017,host02:27017,host03:27017/?replicaSet=myreplset");
/* Do some work */
/* TODO */
/* Clean up */
mongoc_client_destroy (client);
mongoc_cleanup ();
return 0; }


TIP:

Multiple hostnames can be specified in the MongoDB connection string URI, with a comma separating hosts in the seed list.

It is recommended to use a seed list of members of the replica set to allow the driver to connect to any node.



Connecting to a Sharded Cluster

To connect to a sharded cluster, specify the mongos nodes the client should connect to. The C Driver will automatically detect that it has connected to a mongos sharding server.

If more than one hostname is specified, a seed list will be created to attempt failover between the mongos instances.
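
For example (a sketch; myshard02 is a hypothetical second mongos router):

mongoc_client_t *client = mongoc_client_new (
   "mongodb://myshard01:27017,myshard02:27017/");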

WARNING:

Specifying the replicaSet parameter when connecting to a mongos sharding server is invalid.


#include <bson/bson.h>
#include <mongoc/mongoc.h>
int
main (int argc, char *argv[])
{

mongoc_client_t *client;
mongoc_init ();
/* Create our MongoDB Client */
client = mongoc_client_new ("mongodb://myshard01:27017/");
/* Do something with client ... */
/* Free the client */
mongoc_client_destroy (client);
mongoc_cleanup ();
return 0; }


Connecting to an IPv6 Address

The MongoDB C Driver will automatically resolve IPv6 addresses from host names. However, to specify an IPv6 address directly, wrap the address in [].

mongoc_uri_t *uri = mongoc_uri_new ("mongodb://[::1]:27017");


Connecting with IPv4 and IPv6

If connecting to a hostname that has both IPv4 and IPv6 DNS records, the behavior follows RFC-6555. A connection to the IPv6 address is attempted first. If IPv6 fails, then a connection is attempted to the IPv4 address. If the connection attempt to IPv6 does not complete within 250ms, then IPv4 is tried in parallel. Whichever connection succeeds first cancels the other attempt. The successful DNS result is cached for 10 minutes.

As a consequence, attempts to connect to a mongod only listening on IPv4 may be delayed if there are both A (IPv4) and AAAA (IPv6) DNS records associated with the host.

To avoid a delay, configure hostnames to match the MongoDB configuration. That is, only create an A record if the mongod is only listening on IPv4.

Connecting to a UNIX Domain Socket

On UNIX-like systems, the C Driver can connect directly to a MongoDB server using a UNIX domain socket. Pass the URL-encoded path to the socket, which must be suffixed with .sock. For example, to connect to a domain socket at /tmp/mongodb-27017.sock:

mongoc_uri_t *uri = mongoc_uri_new ("mongodb://%2Ftmp%2Fmongodb-27017.sock");


Include username and password like so:

mongoc_uri_t *uri = mongoc_uri_new ("mongodb://user:pass@%2Ftmp%2Fmongodb-27017.sock");


Connecting to a server over TLS

These are instructions for configuring TLS/SSL connections.

To run a server locally (on port 27017, for example):

$ mongod --port 27017 --tlsMode requireTLS --tlsCertificateKeyFile server.pem --tlsCAFile ca.pem


Add /?tls=true to the end of a client URI.

mongoc_client_t *client = NULL;
client = mongoc_client_new ("mongodb://localhost:27017/?tls=true");


MongoDB requires client certificates by default, unless the --tlsAllowConnectionsWithoutCertificates option is provided. The C Driver can be configured to present a client certificate using the tlsCertificateKeyFile URI option, which may be referenced through the constant MONGOC_URI_TLSCERTIFICATEKEYFILE.

mongoc_client_t *client = NULL;
mongoc_uri_t *uri = mongoc_uri_new ("mongodb://localhost:27017/?tls=true");
mongoc_uri_set_option_as_utf8 (uri, MONGOC_URI_TLSCERTIFICATEKEYFILE, "client.pem");
client = mongoc_client_new_from_uri (uri);


The client certificate provided by tlsCertificateKeyFile must be issued by one of the server trusted Certificate Authorities listed in --tlsCAFile, or issued by a CA in the native certificate store on the server when omitted.

See Configuring TLS for more information on the various TLS related options.

Compressing data to and from MongoDB

MongoDB added Snappy compression support in 3.4, zlib compression in 3.6, and zstd compression in 4.2. To enable compression support, the client must be configured with which compressors to use:

mongoc_client_t *client = NULL;
client = mongoc_client_new ("mongodb://localhost:27017/?compressors=snappy,zlib,zstd");


The compressors option specifies the priority order of compressors the client wants to use. Messages are compressed if the client and server share any compressors in common.

Note that the compressor used by the server might not be the same compressor as the client used. For example, if the client uses the connection string compressors=zlib,snappy the client will use zlib compression to send data (if possible), but the server might still reply using snappy, depending on how the server was configured.
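
The compressors can also be set programmatically on a mongoc_uri_t before creating the client (a minimal sketch):

mongoc_uri_t *uri = mongoc_uri_new ("mongodb://localhost:27017/");
mongoc_uri_set_option_as_utf8 (uri, MONGOC_URI_COMPRESSORS, "zlib,snappy");
mongoc_client_t *client = mongoc_client_new_from_uri (uri);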

The driver must be built with zlib, snappy, and/or zstd support to enable compression; any unknown (or not compiled-in) compressor value is ignored. Note: building with zstd requires cmake 3.12 or higher.

Additional Connection Options

The full list of connection options can be found in the mongoc_uri_t docs.

Certain socket/connection related options are not configurable:

Option Description Value
SO_KEEPALIVE TCP Keep Alive Enabled
TCP_KEEPIDLE How long a connection needs to remain idle before TCP starts sending keepalive probes 120 seconds
TCP_KEEPINTVL The time in seconds between TCP probes 10 seconds
TCP_KEEPCNT How many probes to send, without acknowledgement, before dropping the connection 9 probes
TCP_NODELAY Send packets as soon as possible or buffer small packets (Nagle algorithm) Enabled (no buffering)

Connection Pooling

The MongoDB C driver has two connection modes: single-threaded and pooled. Single-threaded mode is optimized for embedding the driver within languages like PHP. Multi-threaded programs should use pooled mode: this mode minimizes the total connection count, and in pooled mode background threads monitor the MongoDB server topology, so the program need not block to scan it.

Single Mode

In single mode, your program creates a mongoc_client_t directly:

mongoc_client_t *client = mongoc_client_new (
   "mongodb://hostA,hostB/?replicaSet=my_rs");


The client connects on demand when your program first uses it for a MongoDB operation. Using a non-blocking socket per server, it begins a check on each server concurrently, and uses the asynchronous poll or select function to receive events from the sockets, until all have responded or timed out. Put another way, in single-threaded mode the C Driver fans out to begin all checks concurrently, then fans in once all checks have completed or timed out. Once the scan completes, the client executes your program's operation and returns.

In single mode, the client re-scans the server topology roughly once per minute. If more than a minute has elapsed since the previous scan, the next operation on the client will block while the client completes its scan. This interval is configurable with heartbeatFrequencyMS in the connection string. (See mongoc_uri_t.)

A single client opens one connection per server in your topology: these connections are used both for scanning the topology and performing normal operations.
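
For example, a longer scan interval can be set in the connection string (a sketch reusing the hostA/hostB placeholders from above):

mongoc_client_t *client = mongoc_client_new (
   "mongodb://hostA,hostB/?replicaSet=my_rs&heartbeatFrequencyMS=30000");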

Pooled Mode

To activate pooled mode, create a mongoc_client_pool_t:

mongoc_uri_t *uri = mongoc_uri_new (
   "mongodb://hostA,hostB/?replicaSet=my_rs");
mongoc_client_pool_t *pool = mongoc_client_pool_new (uri);


When your program first calls mongoc_client_pool_pop(), the pool launches monitoring threads in the background. Monitoring threads independently connect to all servers in the connection string. As monitoring threads receive hello responses from the servers, they update the shared view of the server topology. Additional monitoring threads and connections are created as new servers are discovered. Monitoring threads are terminated when servers are removed from the shared view of the server topology.

Each thread that executes MongoDB operations must check out a client from the pool:

mongoc_client_t *client = mongoc_client_pool_pop (pool);
/* use the client for operations ... */
mongoc_client_pool_push (pool, client);


The mongoc_client_t object is not thread-safe; only the mongoc_client_pool_t is.
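
A minimal sketch of a pooled worker, assuming POSIX threads are used to run it via pthread_create(); each thread checks out its own client and returns it when finished:

#include <mongoc/mongoc.h>
#include <pthread.h>
#include <stdio.h>

static void *
worker (void *data)
{
   mongoc_client_pool_t *pool = data;
   mongoc_client_t *client;
   bson_t ping = BSON_INITIALIZER;
   bson_error_t error;

   BSON_APPEND_INT32 (&ping, "ping", 1);
   /* each thread uses its own checked-out client */
   client = mongoc_client_pool_pop (pool);
   if (!mongoc_client_command_simple (
          client, "admin", &ping, NULL, NULL, &error)) {
      fprintf (stderr, "ping failed: %s\n", error.message);
   }
   mongoc_client_pool_push (pool, client);
   bson_destroy (&ping);
   return NULL;
}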

When the driver is in pooled mode, your program's operations are unblocked as soon as monitoring discovers a usable server. For example, if a thread in your program is waiting to execute an "insert" on the primary, it is unblocked as soon as the primary is discovered, rather than waiting for all secondaries to be checked as well.

The pool opens one connection per server for monitoring, and each client opens its own connection to each server it uses for application operations. Background monitoring threads re-scan servers independently roughly every 10 seconds. This interval is configurable with heartbeatFrequencyMS in the connection string. (See mongoc_uri_t.)

The connection string can also specify waitQueueTimeoutMS to limit the time that mongoc_client_pool_pop() will wait for a client from the pool. (See mongoc_uri_t.) If waitQueueTimeoutMS is specified, then it is necessary to confirm that a client was actually returned:

mongoc_uri_t *uri = mongoc_uri_new (
   "mongodb://hostA,hostB/?replicaSet=my_rs&waitQueueTimeoutMS=1000");
mongoc_client_pool_t *pool = mongoc_client_pool_new (uri);
mongoc_client_t *client = mongoc_client_pool_pop (pool);
if (client) {
   /* use the client for operations ... */
   mongoc_client_pool_push (pool, client);
} else {
   /* take appropriate action for a timeout */
}


See Connection Pool Options to configure pool size and behavior, and see mongoc_client_pool_t for an extended example of a multi-threaded program that uses the driver in pooled mode.

Cursors

Handling Cursor Failures

Cursors exist on a MongoDB server. However, the mongoc_cursor_t structure gives the local process a handle to the cursor. It is possible for errors to occur on the server while iterating a cursor on the client. Even a network partition may occur. This means that applications should be robust in handling cursor failures.

While iterating cursors, you should check to see if an error has occurred. See the following example for how to robustly check for errors.

static void
print_all_documents (mongoc_collection_t *collection)
{

mongoc_cursor_t *cursor;
const bson_t *doc;
bson_error_t error;
bson_t query = BSON_INITIALIZER;
char *str;
cursor = mongoc_collection_find_with_opts (collection, query, NULL, NULL);
while (mongoc_cursor_next (cursor, &doc)) {
str = bson_as_canonical_extended_json (doc, NULL);
printf ("%s\n", str);
bson_free (str);
}
if (mongoc_cursor_error (cursor, &error)) {
fprintf (stderr, "Failed to iterate all documents: %s\n", error.message);
}
mongoc_cursor_destroy (cursor); }


Destroying Server-Side Cursors

The MongoDB C driver will automatically destroy a server-side cursor when mongoc_cursor_destroy() is called. Failure to call this function when done with a cursor will leak memory client side as well as consume extra memory server side. If the cursor was configured to never timeout, it will become a memory leak on the server.

Tailable Cursors

Tailable cursors are cursors that remain open even after they've returned a final result. This way, if more documents are added to a collection (i.e., to the cursor's result set), then you can continue to call mongoc_cursor_next() to retrieve those additional results.

Here's a complete test case that demonstrates the use of tailable cursors.

NOTE:

Tailable cursors are for capped collections only.
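
For experimentation, a capped collection can be created with mongoc_database_create_collection() (a hedged sketch; database is assumed to be an existing mongoc_database_t, and the collection name and size are placeholders):

bson_error_t error;
bson_t *opts = BCON_NEW (
   "capped", BCON_BOOL (true), "size", BCON_INT64 (1048576));
mongoc_collection_t *capped =
   mongoc_database_create_collection (database, "capped_coll", opts, &error);
bson_destroy (opts);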


An example to tail the oplog from a replica set.

mongoc-tail.c

#include <bson/bson.h>
#include <mongoc/mongoc.h>
#include <stdio.h>
#include <stdlib.h>
#ifdef _WIN32
#define sleep(_n) Sleep ((_n) *1000)
#endif
static void
print_bson (const bson_t *b)
{

char *str;
str = bson_as_canonical_extended_json (b, NULL);
fprintf (stdout, "%s\n", str);
bson_free (str);
}

static mongoc_cursor_t *
query_collection (mongoc_collection_t *collection, uint32_t last_time)
{
mongoc_cursor_t *cursor;
bson_t query;
bson_t gt;
bson_t opts;
BSON_ASSERT (collection);
bson_init (&query);
BSON_APPEND_DOCUMENT_BEGIN (&query, "ts", &gt);
BSON_APPEND_TIMESTAMP (&gt, "$gt", last_time, 0);
bson_append_document_end (&query, &gt);
bson_init (&opts);
BSON_APPEND_BOOL (&opts, "tailable", true);
BSON_APPEND_BOOL (&opts, "awaitData", true);
cursor = mongoc_collection_find_with_opts (collection, &query, &opts, NULL);
bson_destroy (&query);
bson_destroy (&opts);
return cursor;
}

static void
tail_collection (mongoc_collection_t *collection)
{
mongoc_cursor_t *cursor;
uint32_t last_time;
const bson_t *doc;
bson_error_t error;
bson_iter_t iter;
BSON_ASSERT (collection);
last_time = (uint32_t) time (NULL);
while (true) {
cursor = query_collection (collection, last_time);
while (!mongoc_cursor_error (cursor, &error) &&
mongoc_cursor_more (cursor)) {
if (mongoc_cursor_next (cursor, &doc)) {
if (bson_iter_init_find (&iter, doc, "ts") &&
BSON_ITER_HOLDS_TIMESTAMP (&iter)) {
bson_iter_timestamp (&iter, &last_time, NULL);
}
print_bson (doc);
}
}
if (mongoc_cursor_error (cursor, &error)) {
if (error.domain == MONGOC_ERROR_SERVER) {
fprintf (stderr, "%s\n", error.message);
exit (1);
}
}
mongoc_cursor_destroy (cursor);
sleep (1);
}
}

int
main (int argc, char *argv[])
{
mongoc_collection_t *collection;
mongoc_client_t *client;
mongoc_uri_t *uri;
bson_error_t error;
if (argc != 2) {
fprintf (stderr, "usage: %s MONGO_URI\n", argv[0]);
return EXIT_FAILURE;
}
mongoc_init ();
uri = mongoc_uri_new_with_error (argv[1], &error);
if (!uri) {
fprintf (stderr,
"failed to parse URI: %s\n"
"error message: %s\n",
argv[1],
error.message);
return EXIT_FAILURE;
}
client = mongoc_client_new_from_uri (uri);
if (!client) {
return EXIT_FAILURE;
}
mongoc_client_set_error_api (client, 2);
collection = mongoc_client_get_collection (client, "local", "oplog.rs");
tail_collection (collection);
mongoc_collection_destroy (collection);
mongoc_uri_destroy (uri);
mongoc_client_destroy (client);
return EXIT_SUCCESS; }


Let's compile and run this example against a replica set to see updates as they are made.

$ gcc -Wall -o mongoc-tail mongoc-tail.c $(pkg-config --cflags --libs libmongoc-1.0)
$ ./mongoc-tail mongodb://example.com/?replicaSet=myReplSet
{

"h" : -8458503739429355503,
"ns" : "test.test",
"o" : {
"_id" : {
"$oid" : "5372ab0a25164be923d10d50"
}
},
"op" : "i",
"ts" : {
"$timestamp" : {
"i" : 1,
"t" : 1400023818
}
},
"v" : 2 }


The line of output is a sample from performing db.test.insert({}) from the mongo shell on the replica set.

SEE ALSO:

mongoc_cursor_set_max_await_time_ms().



Bulk Write Operations

This tutorial explains how to take advantage of MongoDB C driver bulk write operation features. Executing write operations in batches reduces the number of network round trips, increasing write throughput.

Bulk Insert

First we need to fetch a bulk operation handle from the mongoc_collection_t.

mongoc_bulk_operation_t *bulk =
   mongoc_collection_create_bulk_operation_with_opts (collection, NULL);


We can now start inserting documents to the bulk operation. These will be buffered until we execute the operation.

The bulk operation will coalesce insertions as a single batch for each consecutive call to mongoc_bulk_operation_insert(). This creates a pipelined effect when possible.

To execute the bulk operation and receive the result we call mongoc_bulk_operation_execute().

bulk1.c

#include <assert.h>
#include <mongoc/mongoc.h>
#include <stdio.h>
static void
bulk1 (mongoc_collection_t *collection)
{

mongoc_bulk_operation_t *bulk;
bson_error_t error;
bson_t *doc;
bson_t reply;
char *str;
bool ret;
int i;
bulk = mongoc_collection_create_bulk_operation_with_opts (collection, NULL);
for (i = 0; i < 10000; i++) {
doc = BCON_NEW ("i", BCON_INT32 (i));
mongoc_bulk_operation_insert (bulk, doc);
bson_destroy (doc);
}
ret = mongoc_bulk_operation_execute (bulk, &reply, &error);
str = bson_as_canonical_extended_json (&reply, NULL);
printf ("%s\n", str);
bson_free (str);
if (!ret) {
fprintf (stderr, "Error: %s\n", error.message);
}
bson_destroy (&reply);
mongoc_bulk_operation_destroy (bulk);
}

int
main (void)
{
mongoc_client_t *client;
mongoc_collection_t *collection;
const char *uri_string = "mongodb://localhost/?appname=bulk1-example";
mongoc_uri_t *uri;
bson_error_t error;
mongoc_init ();
uri = mongoc_uri_new_with_error (uri_string, &error);
if (!uri) {
fprintf (stderr,
"failed to parse URI: %s\n"
"error message: %s\n",
uri_string,
error.message);
return EXIT_FAILURE;
}
client = mongoc_client_new_from_uri (uri);
if (!client) {
return EXIT_FAILURE;
}
mongoc_client_set_error_api (client, 2);
collection = mongoc_client_get_collection (client, "test", "test");
bulk1 (collection);
mongoc_uri_destroy (uri);
mongoc_collection_destroy (collection);
mongoc_client_destroy (client);
mongoc_cleanup ();
return EXIT_SUCCESS; }


Example reply document:

{"nInserted"   : 10000,

"nMatched" : 0,
"nModified" : 0,
"nRemoved" : 0,
"nUpserted" : 0,
"writeErrors" : []
"writeConcernErrors" : [] }


Mixed Bulk Write Operations

The MongoDB C driver also supports executing mixed bulk write operations. A batch of insert, update, and remove operations can be executed together using the bulk write operations API.

Ordered Bulk Write Operations

Ordered bulk write operations are batched and sent to the server in the order provided for serial execution. The reply document describes the type and count of operations performed.

bulk2.c

#include <assert.h>
#include <mongoc/mongoc.h>
#include <stdio.h>
static void
bulk2 (mongoc_collection_t *collection)
{

mongoc_bulk_operation_t *bulk;
bson_error_t error;
bson_t *query;
bson_t *doc;
bson_t *opts;
bson_t reply;
char *str;
bool ret;
int i;
bulk = mongoc_collection_create_bulk_operation_with_opts (collection, NULL);
/* Remove everything */
query = bson_new ();
mongoc_bulk_operation_remove (bulk, query);
bson_destroy (query);
/* Add a few documents */
for (i = 1; i < 4; i++) {
doc = BCON_NEW ("_id", BCON_INT32 (i));
mongoc_bulk_operation_insert (bulk, doc);
bson_destroy (doc);
}
/* {_id: 1} => {$set: {foo: "bar"}} */
query = BCON_NEW ("_id", BCON_INT32 (1));
doc = BCON_NEW ("$set", "{", "foo", BCON_UTF8 ("bar"), "}");
mongoc_bulk_operation_update_many_with_opts (bulk, query, doc, NULL, &error);
bson_destroy (query);
bson_destroy (doc);
/* {_id: 4} => {'$inc': {'j': 1}} (upsert) */
opts = BCON_NEW ("upsert", BCON_BOOL (true));
query = BCON_NEW ("_id", BCON_INT32 (4));
doc = BCON_NEW ("$inc", "{", "j", BCON_INT32 (1), "}");
mongoc_bulk_operation_update_many_with_opts (bulk, query, doc, opts, &error);
bson_destroy (query);
bson_destroy (doc);
bson_destroy (opts);
/* replace {j:1} with {j:2} */
query = BCON_NEW ("j", BCON_INT32 (1));
doc = BCON_NEW ("j", BCON_INT32 (2));
mongoc_bulk_operation_replace_one_with_opts (bulk, query, doc, NULL, &error);
bson_destroy (query);
bson_destroy (doc);
ret = mongoc_bulk_operation_execute (bulk, &reply, &error);
str = bson_as_canonical_extended_json (&reply, NULL);
printf ("%s\n", str);
bson_free (str);
if (!ret) {
printf ("Error: %s\n", error.message);
}
bson_destroy (&reply);
mongoc_bulk_operation_destroy (bulk);
}

int
main (void)
{
mongoc_client_t *client;
mongoc_collection_t *collection;
const char *uri_string = "mongodb://localhost/?appname=bulk2-example";
mongoc_uri_t *uri;
bson_error_t error;
mongoc_init ();
uri = mongoc_uri_new_with_error (uri_string, &error);
if (!uri) {
fprintf (stderr,
"failed to parse URI: %s\n"
"error message: %s\n",
uri_string,
error.message);
return EXIT_FAILURE;
}
client = mongoc_client_new_from_uri (uri);
if (!client) {
return EXIT_FAILURE;
}
mongoc_client_set_error_api (client, 2);
collection = mongoc_client_get_collection (client, "test", "test");
bulk2 (collection);
mongoc_uri_destroy (uri);
mongoc_collection_destroy (collection);
mongoc_client_destroy (client);
mongoc_cleanup ();
return EXIT_SUCCESS; }


Example reply document:

{ "nInserted"   : 3,

"nMatched" : 2,
"nModified" : 2,
"nRemoved" : 10000,
"nUpserted" : 1,
"upserted" : [{"index" : 5, "_id" : 4}],
"writeErrors" : []
"writeConcernErrors" : [] }


The index field in the upserted array is the 0-based index of the upsert operation; in this example, the sixth operation of the overall bulk operation was an upsert, so its index is 5.
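
A hedged sketch of reading that array with libbson iterators, assuming reply is the document filled in by mongoc_bulk_operation_execute() as in the example above:

bson_iter_t iter;
bson_iter_t elem;
bson_iter_t field;

if (bson_iter_init_find (&iter, &reply, "upserted") &&
    BSON_ITER_HOLDS_ARRAY (&iter) && bson_iter_recurse (&iter, &elem)) {
   while (bson_iter_next (&elem)) {
      if (BSON_ITER_HOLDS_DOCUMENT (&elem) &&
          bson_iter_recurse (&elem, &field) &&
          bson_iter_find (&field, "index")) {
         /* print the 0-based index of each upserted operation */
         printf ("upsert at operation index %d\n", bson_iter_int32 (&field));
      }
   }
}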

Unordered Bulk Write Operations

Unordered bulk write operations are batched and sent to the server in arbitrary order where they may be executed in parallel. Any errors that occur are reported after all operations are attempted.

In the next example the first and third operations fail due to the unique constraint on _id. Since we are doing unordered execution the second and fourth operations succeed.

bulk3.c

#include <assert.h>
#include <mongoc/mongoc.h>
#include <stdio.h>
static void
bulk3 (mongoc_collection_t *collection)
{

bson_t opts = BSON_INITIALIZER;
mongoc_bulk_operation_t *bulk;
bson_error_t error;
bson_t *query;
bson_t *doc;
bson_t reply;
char *str;
bool ret;
/* false indicates unordered */
BSON_APPEND_BOOL (&opts, "ordered", false);
bulk = mongoc_collection_create_bulk_operation_with_opts (collection, &opts);
bson_destroy (&opts);
/* Add a document */
doc = BCON_NEW ("_id", BCON_INT32 (1));
mongoc_bulk_operation_insert (bulk, doc);
bson_destroy (doc);
/* remove {_id: 2} */
query = BCON_NEW ("_id", BCON_INT32 (2));
mongoc_bulk_operation_remove_one (bulk, query);
bson_destroy (query);
/* insert {_id: 3} */
doc = BCON_NEW ("_id", BCON_INT32 (3));
mongoc_bulk_operation_insert (bulk, doc);
bson_destroy (doc);
/* replace {_id:4} {'i': 1} */
query = BCON_NEW ("_id", BCON_INT32 (4));
doc = BCON_NEW ("i", BCON_INT32 (1));
mongoc_bulk_operation_replace_one (bulk, query, doc, false);
bson_destroy (query);
bson_destroy (doc);
ret = mongoc_bulk_operation_execute (bulk, &reply, &error);
str = bson_as_canonical_extended_json (&reply, NULL);
printf ("%s\n", str);
bson_free (str);
if (!ret) {
printf ("Error: %s\n", error.message);
}
bson_destroy (&reply);
mongoc_bulk_operation_destroy (bulk);
}

int
main (void)
{
mongoc_client_t *client;
mongoc_collection_t *collection;
const char *uri_string = "mongodb://localhost/?appname=bulk3-example";
mongoc_uri_t *uri;
bson_error_t error;
mongoc_init ();
uri = mongoc_uri_new_with_error (uri_string, &error);
if (!uri) {
fprintf (stderr,
"failed to parse URI: %s\n"
"error message: %s\n",
uri_string,
error.message);
return EXIT_FAILURE;
}
client = mongoc_client_new_from_uri (uri);
if (!client) {
return EXIT_FAILURE;
}
mongoc_client_set_error_api (client, 2);
collection = mongoc_client_get_collection (client, "test", "test");
bulk3 (collection);
mongoc_uri_destroy (uri);
mongoc_collection_destroy (collection);
mongoc_client_destroy (client);
mongoc_cleanup ();
return EXIT_SUCCESS; }


Example reply document:

{ "nInserted"    : 0,

"nMatched" : 1,
"nModified" : 1,
"nRemoved" : 1,
"nUpserted" : 0,
"writeErrors" : [
{ "index" : 0,
"code" : 11000,
"errmsg" : "E11000 duplicate key error index: test.test.$_id_ dup key: { : 1 }" },
{ "index" : 2,
"code" : 11000,
"errmsg" : "E11000 duplicate key error index: test.test.$_id_ dup key: { : 3 }" } ],
"writeConcernErrors" : [] } Error: E11000 duplicate key error index: test.test.$_id_ dup key: { : 1 }


The bson_error_t domain is MONGOC_ERROR_COMMAND and its code is 11000.
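
A minimal sketch of checking for that case after mongoc_bulk_operation_execute() returns false, using the ret and error variables from the example above:

if (!ret) {
   if (error.domain == MONGOC_ERROR_COMMAND && error.code == 11000) {
      fprintf (stderr, "duplicate key: %s\n", error.message);
   } else {
      fprintf (stderr, "bulk write failed: %s\n", error.message);
   }
}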

Bulk Operation Bypassing Document Validation

This feature is only available when using MongoDB 3.2 and later.

By default, bulk operations are validated against the schema, if any is defined. In certain cases, however, it may be necessary to bypass document validation.

bulk5.c

#include <assert.h>
#include <mongoc/mongoc.h>
#include <stdio.h>
static void
bulk5_fail (mongoc_collection_t *collection)
{

mongoc_bulk_operation_t *bulk;
bson_error_t error;
bson_t *doc;
bson_t reply;
char *str;
bool ret;
bulk = mongoc_collection_create_bulk_operation_with_opts (collection, NULL);
/* Two inserts */
doc = BCON_NEW ("_id", BCON_INT32 (31));
mongoc_bulk_operation_insert (bulk, doc);
bson_destroy (doc);
doc = BCON_NEW ("_id", BCON_INT32 (32));
mongoc_bulk_operation_insert (bulk, doc);
bson_destroy (doc);
/* The above documents do not comply to the schema validation rules
* we created previously, so this will result in an error */
ret = mongoc_bulk_operation_execute (bulk, &reply, &error);
str = bson_as_canonical_extended_json (&reply, NULL);
printf ("%s\n", str);
bson_free (str);
if (!ret) {
printf ("Error: %s\n", error.message);
}
bson_destroy (&reply);
mongoc_bulk_operation_destroy (bulk);
}

static void
bulk5_success (mongoc_collection_t *collection)
{
mongoc_bulk_operation_t *bulk;
bson_error_t error;
bson_t *doc;
bson_t reply;
char *str;
bool ret;
bulk = mongoc_collection_create_bulk_operation_with_opts (collection, NULL);
/* Allow this document to bypass document validation.
* NOTE: When authentication is enabled, the authenticated user must have
* either the "dbadmin" or "restore" roles to bypass document validation */
mongoc_bulk_operation_set_bypass_document_validation (bulk, true);
/* Two inserts */
doc = BCON_NEW ("_id", BCON_INT32 (31));
mongoc_bulk_operation_insert (bulk, doc);
bson_destroy (doc);
doc = BCON_NEW ("_id", BCON_INT32 (32));
mongoc_bulk_operation_insert (bulk, doc);
bson_destroy (doc);
ret = mongoc_bulk_operation_execute (bulk, &reply, &error);
str = bson_as_canonical_extended_json (&reply, NULL);
printf ("%s\n", str);
bson_free (str);
if (!ret) {
printf ("Error: %s\n", error.message);
}
bson_destroy (&reply);
mongoc_bulk_operation_destroy (bulk);
}

int
main (void)
{
bson_t *options;
bson_error_t error;
mongoc_client_t *client;
mongoc_collection_t *collection;
mongoc_database_t *database;
const char *uri_string = "mongodb://localhost/?appname=bulk5-example";
mongoc_uri_t *uri;
mongoc_init ();
uri = mongoc_uri_new_with_error (uri_string, &error);
if (!uri) {
fprintf (stderr,
"failed to parse URI: %s\n"
"error message: %s\n",
uri_string,
error.message);
return EXIT_FAILURE;
}
client = mongoc_client_new_from_uri (uri);
if (!client) {
return EXIT_FAILURE;
}
mongoc_client_set_error_api (client, 2);
database = mongoc_client_get_database (client, "testasdf");
/* Create schema validator */
options = BCON_NEW (
"validator", "{", "number", "{", "$gte", BCON_INT32 (5), "}", "}");
collection =
mongoc_database_create_collection (database, "collname", options, &error);
if (collection) {
bulk5_fail (collection);
bulk5_success (collection);
mongoc_collection_destroy (collection);
} else {
fprintf (stderr, "Couldn't create collection: '%s'\n", error.message);
}
bson_free (options);
mongoc_uri_destroy (uri);
mongoc_database_destroy (database);
mongoc_client_destroy (client);
mongoc_cleanup ();
return EXIT_SUCCESS; }


Running the above example will result in:

{ "nInserted" : 0,

"nMatched" : 0,
"nModified" : 0,
"nRemoved" : 0,
"nUpserted" : 0,
"writeErrors" : [
{ "index" : 0,
"code" : 121,
"errmsg" : "Document failed validation" } ] } Error: Document failed validation { "nInserted" : 2,
"nMatched" : 0,
"nModified" : 0,
"nRemoved" : 0,
"nUpserted" : 0,
"writeErrors" : [] }


The bson_error_t domain is MONGOC_ERROR_COMMAND.

Bulk Operation Write Concerns

By default bulk operations are executed with the write_concern of the collection they are executed against. A custom write concern can be passed to the mongoc_collection_create_bulk_operation_with_opts() method. Write concern errors (e.g. wtimeout) will be reported after all operations are attempted, regardless of execution order.

bulk4.c

#include <assert.h>
#include <mongoc/mongoc.h>
#include <stdio.h>
static void
bulk4 (mongoc_collection_t *collection)
{

bson_t opts = BSON_INITIALIZER;
mongoc_write_concern_t *wc;
mongoc_bulk_operation_t *bulk;
bson_error_t error;
bson_t *doc;
bson_t reply;
char *str;
bool ret;
wc = mongoc_write_concern_new ();
mongoc_write_concern_set_w (wc, 4);
mongoc_write_concern_set_wtimeout_int64 (wc, 100); /* milliseconds */
mongoc_write_concern_append (wc, &opts);
bulk = mongoc_collection_create_bulk_operation_with_opts (collection, &opts);
/* Two inserts */
doc = BCON_NEW ("_id", BCON_INT32 (10));
mongoc_bulk_operation_insert (bulk, doc);
bson_destroy (doc);
doc = BCON_NEW ("_id", BCON_INT32 (11));
mongoc_bulk_operation_insert (bulk, doc);
bson_destroy (doc);
ret = mongoc_bulk_operation_execute (bulk, &reply, &error);
str = bson_as_canonical_extended_json (&reply, NULL);
printf ("%s\n", str);
bson_free (str);
if (!ret) {
printf ("Error: %s\n", error.message);
}
bson_destroy (&reply);
mongoc_bulk_operation_destroy (bulk);
mongoc_write_concern_destroy (wc);
bson_destroy (&opts);
}

int
main (void)
{
mongoc_client_t *client;
mongoc_collection_t *collection;
const char *uri_string = "mongodb://localhost/?appname=bulk4-example";
mongoc_uri_t *uri;
bson_error_t error;
mongoc_init ();
uri = mongoc_uri_new_with_error (uri_string, &error);
if (!uri) {
fprintf (stderr,
"failed to parse URI: %s\n"
"error message: %s\n",
uri_string,
error.message);
return EXIT_FAILURE;
}
client = mongoc_client_new_from_uri (uri);
if (!client) {
return EXIT_FAILURE;
}
mongoc_client_set_error_api (client, 2);
collection = mongoc_client_get_collection (client, "test", "test");
bulk4 (collection);
mongoc_uri_destroy (uri);
mongoc_collection_destroy (collection);
mongoc_client_destroy (client);
mongoc_cleanup ();
return EXIT_SUCCESS; }


Example reply document and error message:

{ "nInserted"    : 2,

"nMatched" : 0,
"nModified" : 0,
"nRemoved" : 0,
"nUpserted" : 0,
"writeErrors" : [],
"writeConcernErrors" : [
{ "code" : 64,
"errmsg" : "waiting for replication timed out" } ] } Error: waiting for replication timed out


The bson_error_t domain is MONGOC_ERROR_WRITE_CONCERN if there are write concern errors and no write errors. Write errors indicate failed operations, so they take precedence over write concern errors, which mean merely that the write concern is not satisfied yet.
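
A minimal sketch that distinguishes the two cases, using the ret and error variables from the example above:

if (!ret) {
   if (error.domain == MONGOC_ERROR_WRITE_CONCERN) {
      fprintf (stderr, "write concern not satisfied: %s\n", error.message);
   } else {
      fprintf (stderr, "write failed: %s\n", error.message);
   }
}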

Setting Collation Order

This feature is only available when using MongoDB 3.4 and later.

bulk-collation.c

#include <mongoc/mongoc.h>
#include <stdio.h>
static void
bulk_collation (mongoc_collection_t *collection)
{

mongoc_bulk_operation_t *bulk;
bson_t *opts;
bson_t *doc;
bson_t *selector;
bson_t *update;
bson_error_t error;
bson_t reply;
char *str;
uint32_t ret;
/* insert {_id: "one"} and {_id: "One"} */
bulk = mongoc_collection_create_bulk_operation_with_opts (collection, NULL);
doc = BCON_NEW ("_id", BCON_UTF8 ("one"));
mongoc_bulk_operation_insert (bulk, doc);
bson_destroy (doc);
doc = BCON_NEW ("_id", BCON_UTF8 ("One"));
mongoc_bulk_operation_insert (bulk, doc);
bson_destroy (doc);
/* "One" normally sorts before "one"; make "one" come first */
opts = BCON_NEW ("collation",
"{",
"locale",
BCON_UTF8 ("en_US"),
"caseFirst",
BCON_UTF8 ("lower"),
"}");
/* set x=1 on the document with _id "One", which now sorts after "one" */
update = BCON_NEW ("$set", "{", "x", BCON_INT64 (1), "}");
selector = BCON_NEW ("_id", "{", "$gt", BCON_UTF8 ("one"), "}");
mongoc_bulk_operation_update_one_with_opts (
bulk, selector, update, opts, &error);
ret = mongoc_bulk_operation_execute (bulk, &reply, &error);
str = bson_as_canonical_extended_json (&reply, NULL);
printf ("%s\n", str);
bson_free (str);
if (!ret) {
printf ("Error: %s\n", error.message);
}
bson_destroy (&reply);
bson_destroy (update);
bson_destroy (selector);
bson_destroy (opts);
mongoc_bulk_operation_destroy (bulk);
}

int
main (void)
{
mongoc_client_t *client;
mongoc_collection_t *collection;
const char *uri_string = "mongodb://localhost/?appname=bulk-collation";
mongoc_uri_t *uri;
bson_error_t error;
mongoc_init ();
uri = mongoc_uri_new_with_error (uri_string, &error);
if (!uri) {
fprintf (stderr,
"failed to parse URI: %s\n"
"error message: %s\n",
uri_string,
error.message);
return EXIT_FAILURE;
}
client = mongoc_client_new_from_uri (uri);
if (!client) {
return EXIT_FAILURE;
}
mongoc_client_set_error_api (client, 2);
collection = mongoc_client_get_collection (client, "db", "collection");
bulk_collation (collection);
mongoc_uri_destroy (uri);
mongoc_collection_destroy (collection);
mongoc_client_destroy (client);
mongoc_cleanup ();
return EXIT_SUCCESS; }


Running the above example will result in:

{ "nInserted" : 2,

"nMatched" : 1,
"nModified" : 1,
"nRemoved" : 0,
"nUpserted" : 0,
"writeErrors" : [ ] }


Unacknowledged Bulk Writes

Set "w" to zero for an unacknowledged write. The driver sends unacknowledged writes using the legacy opcodes OP_INSERT, OP_UPDATE, and OP_DELETE.

bulk6.c

#include <mongoc/mongoc.h>
#include <stdio.h>
static void
bulk6 (mongoc_collection_t *collection)
{

bson_t opts = BSON_INITIALIZER;
mongoc_write_concern_t *wc;
mongoc_bulk_operation_t *bulk;
bson_error_t error;
bson_t *doc;
bson_t *selector;
bson_t reply;
char *str;
bool ret;
wc = mongoc_write_concern_new ();
mongoc_write_concern_set_w (wc, 0);
mongoc_write_concern_append (wc, &opts);
bulk = mongoc_collection_create_bulk_operation_with_opts (collection, &opts);
doc = BCON_NEW ("_id", BCON_INT32 (10));
mongoc_bulk_operation_insert (bulk, doc);
bson_destroy (doc);
selector = BCON_NEW ("_id", BCON_INT32 (11));
mongoc_bulk_operation_remove_one (bulk, selector);
bson_destroy (selector);
ret = mongoc_bulk_operation_execute (bulk, &reply, &error);
str = bson_as_canonical_extended_json (&reply, NULL);
printf ("%s\n", str);
bson_free (str);
if (!ret) {
printf ("Error: %s\n", error.message);
}
bson_destroy (&reply);
mongoc_bulk_operation_destroy (bulk);
mongoc_write_concern_destroy (wc);
bson_destroy (&opts);
}

int
main (void)
{
mongoc_client_t *client;
mongoc_collection_t *collection;
const char *uri_string = "mongodb://localhost/?appname=bulk6-example";
mongoc_uri_t *uri;
bson_error_t error;
mongoc_init ();
uri = mongoc_uri_new_with_error (uri_string, &error);
if (!uri) {
fprintf (stderr,
"failed to parse URI: %s\n"
"error message: %s\n",
uri_string,
error.message);
return EXIT_FAILURE;
}
client = mongoc_client_new_from_uri (uri);
if (!client) {
return EXIT_FAILURE;
}
mongoc_client_set_error_api (client, 2);
collection = mongoc_client_get_collection (client, "test", "test");
bulk6 (collection);
mongoc_uri_destroy (uri);
mongoc_collection_destroy (collection);
mongoc_client_destroy (client);
mongoc_cleanup ();
return EXIT_SUCCESS; }


The reply document is empty:

{ }


Further Reading

See the Driver Bulk API Spec, which describes bulk write operations for all MongoDB drivers.

Aggregation Framework Examples

This document provides a number of practical examples that display the capabilities of the aggregation framework.

The Aggregations using the Zip Codes Data Set examples use a publicly available data set of all zipcodes and populations in the United States. The data are available at: zips.json.

Requirements

Let's check if everything is installed.

Use the following command to load the zips.json data set into a mongod instance:

$ mongoimport --drop -d test -c zipcodes zips.json


Let's use the MongoDB shell to verify that everything was imported successfully.

$ mongo test
connecting to: test
> db.zipcodes.count()
29467
> db.zipcodes.findOne()
{

"_id" : "35004",
"city" : "ACMAR",
"loc" : [
-86.51557,
33.584132
],
"pop" : 6055,
"state" : "AL" }


Aggregations using the Zip Codes Data Set

Each document in this collection has the following form:

{

"_id" : "35004",
"city" : "Acmar",
"state" : "AL",
"pop" : 6055,
"loc" : [-86.51557, 33.584132] }


In these documents:

  • The _id field holds the zipcode as a string.
  • The city field holds the city name.
  • The state field holds the two letter state abbreviation.
  • The pop field holds the population.
  • The loc field holds the location as a [longitude, latitude] array.

States with Populations Over 10 Million

To get all states with a population greater than 10 million, use the following aggregation pipeline:

aggregation1.c

#include <mongoc/mongoc.h>
#include <stdio.h>
static void
print_pipeline (mongoc_collection_t *collection)
{

mongoc_cursor_t *cursor;
bson_error_t error;
const bson_t *doc;
bson_t *pipeline;
char *str;
pipeline = BCON_NEW ("pipeline",
"[",
"{",
"$group",
"{",
"_id",
"$state",
"total_pop",
"{",
"$sum",
"$pop",
"}",
"}",
"}",
"{",
"$match",
"{",
"total_pop",
"{",
"$gte",
BCON_INT32 (10000000),
"}",
"}",
"}",
"]");
cursor = mongoc_collection_aggregate (
collection, MONGOC_QUERY_NONE, pipeline, NULL, NULL);
while (mongoc_cursor_next (cursor, &doc)) {
str = bson_as_canonical_extended_json (doc, NULL);
printf ("%s\n", str);
bson_free (str);
}
if (mongoc_cursor_error (cursor, &error)) {
fprintf (stderr, "Cursor Failure: %s\n", error.message);
}
mongoc_cursor_destroy (cursor);
bson_destroy (pipeline);
}

int
main (void)
{
mongoc_client_t *client;
mongoc_collection_t *collection;
const char *uri_string =
"mongodb://localhost:27017/?appname=aggregation-example";
mongoc_uri_t *uri;
bson_error_t error;
mongoc_init ();
uri = mongoc_uri_new_with_error (uri_string, &error);
if (!uri) {
fprintf (stderr,
"failed to parse URI: %s\n"
"error message: %s\n",
uri_string,
error.message);
return EXIT_FAILURE;
}
client = mongoc_client_new_from_uri (uri);
if (!client) {
return EXIT_FAILURE;
}
mongoc_client_set_error_api (client, 2);
collection = mongoc_client_get_collection (client, "test", "zipcodes");
print_pipeline (collection);
mongoc_uri_destroy (uri);
mongoc_collection_destroy (collection);
mongoc_client_destroy (client);
mongoc_cleanup ();
return EXIT_SUCCESS; }


You should see a result like the following:

{ "_id" : "PA", "total_pop" : 11881643 }
{ "_id" : "OH", "total_pop" : 10847115 }
{ "_id" : "NY", "total_pop" : 17990455 }
{ "_id" : "FL", "total_pop" : 12937284 }
{ "_id" : "TX", "total_pop" : 16986510 }
{ "_id" : "IL", "total_pop" : 11430472 }
{ "_id" : "CA", "total_pop" : 29760021 }


The above aggregation pipeline is built from two pipeline operators: $group and $match.

The $group pipeline operator requires an _id field, which specifies the grouping key; the remaining fields specify how to generate composite values and must use one of the group aggregation functions: $addToSet, $first, $last, $max, $min, $avg, $push, $sum. The $match pipeline operator syntax is the same as the read operation query syntax.

The $group stage reads all documents and, for each state, creates a separate document, for example:

{ "_id" : "WA", "total_pop" : 4866692 }


The total_pop field uses the $sum aggregation function to sum the values of all pop fields in the source documents.

Documents created by $group are piped to the $match pipeline operator. It returns the documents with the value of total_pop field greater than or equal to 10 million.

Average City Population by State

To get the first three states with the greatest average population per city, use the following aggregation:

pipeline = BCON_NEW ("pipeline", "[",
   "{", "$group", "{", "_id", "{", "state", "$state", "city", "$city", "}", "pop", "{", "$sum", "$pop", "}", "}", "}",
   "{", "$group", "{", "_id", "$_id.state", "avg_city_pop", "{", "$avg", "$pop", "}", "}", "}",
   "{", "$sort", "{", "avg_city_pop", BCON_INT32 (-1), "}", "}",
   "{", "$limit", BCON_INT32 (3), "}",
"]");


This aggregate pipeline produces:

{ "_id" : "DC", "avg_city_pop" : 303450.0 }
{ "_id" : "FL", "avg_city_pop" : 27942.29805615551 }
{ "_id" : "CA", "avg_city_pop" : 27735.341099720412 }


The above aggregation pipeline is built from three pipeline operators: $group, $sort and $limit.

The first $group operator creates the following documents:

{ "_id" : { "state" : "WY", "city" : "Smoot" }, "pop" : 414 }


Note that the $group operator cannot use nested documents, except in the _id field.

The second $group uses these documents to create the following documents:

{ "_id" : "FL", "avg_city_pop" : 27942.29805615551 }


These documents are sorted by the avg_city_pop field in descending order. Finally, the $limit pipeline operator returns the first 3 documents from the sorted set.

distinct and mapReduce

This document provides some practical, simple, examples to demonstrate the distinct and mapReduce commands.

Setup

First we'll write some code to insert sample data:

doc-common-insert.c

/* Don't try to compile this file on its own. It's meant to be #included
   by example code */

/* Insert some sample data */
bool
insert_data (mongoc_collection_t *collection)
{
mongoc_bulk_operation_t *bulk;
enum N { ndocs = 4 };
bson_t *docs[ndocs];
bson_error_t error;
int i = 0;
bool ret;
bulk = mongoc_collection_create_bulk_operation_with_opts (collection, NULL);
docs[0] = BCON_NEW ("x", BCON_DOUBLE (1.0), "tags", "[", "dog", "cat", "]");
docs[1] = BCON_NEW ("x", BCON_DOUBLE (2.0), "tags", "[", "cat", "]");
docs[2] = BCON_NEW (
"x", BCON_DOUBLE (2.0), "tags", "[", "mouse", "cat", "dog", "]");
docs[3] = BCON_NEW ("x", BCON_DOUBLE (3.0), "tags", "[", "]");
for (i = 0; i < ndocs; i++) {
mongoc_bulk_operation_insert (bulk, docs[i]);
bson_destroy (docs[i]);
docs[i] = NULL;
}
ret = mongoc_bulk_operation_execute (bulk, NULL, &error);
if (!ret) {
fprintf (stderr, "Error inserting data: %s\n", error.message);
}
mongoc_bulk_operation_destroy (bulk);
return ret;
}

/* A helper which we'll use a lot later on */
void
print_res (const bson_t *reply)
{
char *str;
BSON_ASSERT (reply);
str = bson_as_canonical_extended_json (reply, NULL);
printf ("%s\n", str);
bson_free (str); }


distinct command

This is how to use the distinct command to get the distinct values of x which are greater than 1:

distinct.c

bool
distinct (mongoc_database_t *database)
{

bson_t *command;
bson_t reply;
bson_error_t error;
bool res;
bson_iter_t iter;
bson_iter_t array_iter;
double val;
command = BCON_NEW ("distinct",
BCON_UTF8 (COLLECTION_NAME),
"key",
BCON_UTF8 ("x"),
"query",
"{",
"x",
"{",
"$gt",
BCON_DOUBLE (1.0),
"}",
"}");
res =
mongoc_database_command_simple (database, command, NULL, &reply, &error);
if (!res) {
fprintf (stderr, "Error with distinct: %s\n", error.message);
goto cleanup;
}
/* Do something with reply (in this case iterate through the values) */
if (!(bson_iter_init_find (&iter, &reply, "values") &&
BSON_ITER_HOLDS_ARRAY (&iter) &&
bson_iter_recurse (&iter, &array_iter))) {
fprintf (stderr, "Couldn't extract \"values\" field from response\n");
goto cleanup;
}
while (bson_iter_next (&array_iter)) {
if (BSON_ITER_HOLDS_DOUBLE (&array_iter)) {
val = bson_iter_double (&array_iter);
printf ("Next double: %f\n", val);
}
}

cleanup:
/* cleanup */
bson_destroy (command);
bson_destroy (&reply);
return res; }


mapReduce - basic example

A simple example using the map-reduce framework: it adds up the number of occurrences of each "tag".

First define the map and reduce functions:

constants.c

const char *const COLLECTION_NAME = "things";

/* Our map function just emits a single (key, 1) pair for each tag
   in the array: */
const char *const MAPPER = "function () {"
                           "this.tags.forEach(function(z) {"
                           "emit(z, 1);"
                           "});"
                           "}";

/* The reduce function sums over all of the emitted values for a
   given key: */
const char *const REDUCER = "function (key, values) {"
                            "var total = 0;"
                            "for (var i = 0; i < values.length; i++) {"
                            "total += values[i];"
                            "}"
                            "return total;"
                            "}";

/* Note we can't just return values.length as the reduce function
   might be called iteratively on the results of other reduce steps. */


Run the mapReduce command. Use the generic command helpers (e.g. mongoc_database_command_simple()). Do not use the read command helpers (e.g. mongoc_database_read_command_with_opts()), because they are considered retryable read operations. If retryable reads are enabled, those operations will retry once on a retryable error, giving undesirable behavior for mapReduce.

map-reduce-basic.c

bool
map_reduce_basic (mongoc_database_t *database)
{

bson_t reply;
bson_t *command;
bool res;
bson_error_t error;
mongoc_cursor_t *cursor;
const bson_t *doc;
bool query_done = false;
const char *out_collection_name = "outCollection";
mongoc_collection_t *out_collection;
/* Empty find query */
bson_t find_query = BSON_INITIALIZER;
/* Construct the mapReduce command */
/* Other arguments can also be specified here, like "query" or
"limit" and so on */
command = BCON_NEW ("mapReduce",
BCON_UTF8 (COLLECTION_NAME),
"map",
BCON_CODE (MAPPER),
"reduce",
BCON_CODE (REDUCER),
"out",
BCON_UTF8 (out_collection_name));
res =
mongoc_database_command_simple (database, command, NULL, &reply, &error);
if (!res) {
fprintf (stderr, "MapReduce failed: %s\n", error.message);
goto cleanup;
}
/* Do something with the reply (it doesn't contain the mapReduce results) */
print_res (&reply);
/* Now we'll query outCollection to see what the results are */
out_collection =
mongoc_database_get_collection (database, out_collection_name);
cursor = mongoc_collection_find_with_opts (
out_collection, &find_query, NULL, NULL);
query_done = true;
/* Do something with the results */
while (mongoc_cursor_next (cursor, &doc)) {
print_res (doc);
}
if (mongoc_cursor_error (cursor, &error)) {
fprintf (stderr, "ERROR: %s\n", error.message);
res = false;
goto cleanup;
}

cleanup:
/* cleanup */
if (query_done) {
mongoc_cursor_destroy (cursor);
mongoc_collection_destroy (out_collection);
}
bson_destroy (&reply);
bson_destroy (command);
return res; }


mapReduce - more complicated example

You must have a replica set running for this.

In this example we contact a secondary in the replica set and do an "inline" map reduce, so the results are returned immediately:

map-reduce-advanced.c

bool
map_reduce_advanced (mongoc_database_t *database)
{

bson_t *command;
bson_error_t error;
bool res = true;
mongoc_cursor_t *cursor;
mongoc_read_prefs_t *read_pref;
const bson_t *doc;
/* Construct the mapReduce command */
/* Other arguments can also be specified here, like "query" or "limit"
and so on */
/* Read the results inline from a secondary replica */
command = BCON_NEW ("mapReduce",
BCON_UTF8 (COLLECTION_NAME),
"map",
BCON_CODE (MAPPER),
"reduce",
BCON_CODE (REDUCER),
"out",
"{",
"inline",
"1",
"}");
read_pref = mongoc_read_prefs_new (MONGOC_READ_SECONDARY);
cursor = mongoc_database_command (
database, MONGOC_QUERY_NONE, 0, 0, 0, command, NULL, read_pref);
/* Do something with the results */
while (mongoc_cursor_next (cursor, &doc)) {
print_res (doc);
}
if (mongoc_cursor_error (cursor, &error)) {
fprintf (stderr, "ERROR: %s\n", error.message);
res = false;
}
mongoc_cursor_destroy (cursor);
mongoc_read_prefs_destroy (read_pref);
bson_destroy (command);
return res;
}


Running the Examples

Here's how to run the example code:

basic-aggregation.c

/*

* Copyright 2016 MongoDB, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

#include <mongoc/mongoc.h>
#include <stdio.h>

#include "constants.c"
#include "../doc-common-insert.c"
#include "distinct.c"
#include "map-reduce-basic.c"
#include "map-reduce-advanced.c"

int
main (int argc, char *argv[])
{
mongoc_database_t *database = NULL;
mongoc_client_t *client = NULL;
mongoc_collection_t *collection = NULL;
mongoc_uri_t *uri = NULL;
bson_error_t error;
char *host_and_port = NULL;
int exit_code = EXIT_FAILURE;
if (argc != 2) {
fprintf (stderr, "usage: %s CONNECTION-STRING\n", argv[0]);
fprintf (stderr,
"the connection string can be of the following forms:\n");
fprintf (stderr, "localhost\t\t\t\tlocal machine\n");
fprintf (stderr, "localhost:27018\t\t\t\tlocal machine on port 27018\n");
fprintf (stderr,
"mongodb://user:pass@localhost:27017\t"
"local machine on port 27017, and authenticate with username "
"user and password pass\n");
return exit_code;
}
mongoc_init ();
if (strncmp (argv[1], "mongodb://", 10) == 0) {
host_and_port = bson_strdup (argv[1]);
} else {
host_and_port = bson_strdup_printf ("mongodb://%s", argv[1]);
}
uri = mongoc_uri_new_with_error (host_and_port, &error);
if (!uri) {
fprintf (stderr,
"failed to parse URI: %s\n"
"error message: %s\n",
host_and_port,
error.message);
goto cleanup;
}
client = mongoc_client_new_from_uri (uri);
if (!client) {
goto cleanup;
}
mongoc_client_set_error_api (client, 2);
database = mongoc_client_get_database (client, "test");
collection = mongoc_database_get_collection (database, COLLECTION_NAME);
printf ("Inserting data\n");
if (!insert_data (collection)) {
goto cleanup;
}
printf ("distinct\n");
if (!distinct (database)) {
goto cleanup;
}
printf ("map reduce\n");
if (!map_reduce_basic (database)) {
goto cleanup;
}
printf ("more complicated map reduce\n");
if (!map_reduce_advanced (database)) {
goto cleanup;
}

exit_code = EXIT_SUCCESS;
cleanup:
if (collection) {
mongoc_collection_destroy (collection);
}
if (database) {
mongoc_database_destroy (database);
}
if (client) {
mongoc_client_destroy (client);
}
if (uri) {
mongoc_uri_destroy (uri);
}
if (host_and_port) {
bson_free (host_and_port);
}
mongoc_cleanup ();
return exit_code;
}


If you want to try the advanced map reduce example with a secondary, start a replica set (see the MongoDB manual for instructions on deploying a replica set).

Otherwise, just start an instance of MongoDB:

$ mongod


Now compile and run the example program:

$ cd examples/basic_aggregation/
$ gcc -Wall -o agg-example basic-aggregation.c $(pkg-config --cflags --libs libmongoc-1.0)
$ ./agg-example localhost
Inserting data
distinct
Next double: 2.000000
Next double: 3.000000
map reduce
{ "result" : "outCollection", "timeMillis" : 155, "counts" : { "input" : 84, "emit" : 126, "reduce" : 3, "output" : 3 }, "ok" : 1 }
{ "_id" : "cat", "value" : 63 }
{ "_id" : "dog", "value" : 42 }
{ "_id" : "mouse", "value" : 21 }
more complicated map reduce
{ "results" : [ { "_id" : "cat", "value" : 63 }, { "_id" : "dog", "value" : 42 }, { "_id" : "mouse", "value" : 21 } ], "timeMillis" : 14, "counts" : { "input" : 84, "emit" : 126, "reduce" : 3, "output" : 3 }, "ok" : 1 }


Using libmongoc in a Microsoft Visual Studio project

Download and install libmongoc on your system, then open Visual Studio, select "File→New→Project...", and create a new Win32 Console Application. [image]

Remember to switch the platform from 32-bit to 64-bit: [image]

Right-click on your console application in the Solution Explorer and select "Properties". Choose to edit properties for "All Configurations", expand the "C/C++" options and choose "General". Add to the "Additional Include Directories" these paths:

C:\mongo-c-driver\include\libbson-1.0
C:\mongo-c-driver\include\libmongoc-1.0


[image]

(If you chose a different CMAKE_INSTALL_PREFIX when you ran CMake, your include paths will be different.)

Also in the Properties dialog, expand the "Linker" options and choose "Input", and add to the "Additional Dependencies" these libraries:

C:\mongo-c-driver\lib\bson-1.0.lib
C:\mongo-c-driver\lib\mongoc-1.0.lib


[image]

Adding these libraries as dependencies provides linker symbols to build your application, but to actually run it, libbson's and libmongoc's DLLs must be in your executable path. Select "Debugging" in the Properties dialog, and set the "Environment" option to:

PATH=c:/mongo-c-driver/bin


[image]

Finally, include "mongoc/mongoc.h" in your project's "stdafx.h":

#include <mongoc/mongoc.h>


Static linking

Following the instructions above, you have dynamically linked your application to the libbson and libmongoc DLLs. This is usually the right choice. If you want to link statically instead, update your "Additional Dependencies" list by removing bson-1.0.lib and mongoc-1.0.lib and replacing them with these libraries:

C:\mongo-c-driver\lib\bson-static-1.0.lib
C:\mongo-c-driver\lib\mongoc-static-1.0.lib
ws2_32.lib
Secur32.lib
Crypt32.lib
BCrypt.lib


[image]

(To explain the purpose of each library: bson-static-1.0.lib and mongoc-static-1.0.lib are static archives of the driver code. The socket library ws2_32 is required by libbson, which uses the socket routine gethostname to help guarantee ObjectId uniqueness. The BCrypt library is used by libmongoc for TLS connections to MongoDB, and Secur32 and Crypt32 are required for enterprise authentication methods like Kerberos.)

Finally, define two preprocessor symbols before including mongoc/mongoc.h in your stdafx.h:

#define BSON_STATIC
#define MONGOC_STATIC
#include <mongoc/mongoc.h>


Making these changes to your project is only required for static linking; for most people, the dynamic-linking instructions above are preferred.

Next Steps

Now you can build and debug applications in Visual Studio that use libbson and libmongoc. Proceed to Making a Connection in the tutorial to learn how to connect to MongoDB and perform operations.

Creating Indexes

To create indexes on a MongoDB collection, execute the createIndexes command with a command function like mongoc_database_write_command_with_opts() or mongoc_collection_write_command_with_opts(). See the MongoDB Manual entry for the createIndexes command for details.

WARNING:

The commitQuorum option to the createIndexes command is only supported in MongoDB 4.4+ servers, but it is not validated in the command functions. Do not pass commitQuorum if connected to server versions less than 4.4. Using the commitQuorum option on server versions less than 4.4 may have adverse effects on index builds.


Example

example-create-indexes.c

/* gcc example-create-indexes.c -o example-create-indexes $(pkg-config --cflags
 * --libs libmongoc-1.0) */

/* ./example-create-indexes [CONNECTION_STRING [COLLECTION_NAME]] */

#include <mongoc/mongoc.h>
#include <stdio.h>
#include <stdlib.h>

int
main (int argc, char *argv[])
{
mongoc_client_t *client;
const char *uri_string =
"mongodb://127.0.0.1/?appname=create-indexes-example";
mongoc_uri_t *uri;
mongoc_database_t *db;
const char *collection_name = "test";
bson_t keys;
char *index_name;
bson_t *create_indexes;
bson_t reply;
char *reply_str;
bson_error_t error;
bool r;
mongoc_init ();
if (argc > 1) {
uri_string = argv[1];
}
if (argc > 2) {
collection_name = argv[2];
}
uri = mongoc_uri_new_with_error (uri_string, &error);
if (!uri) {
fprintf (stderr,
"failed to parse URI: %s\n"
"error message: %s\n",
uri_string,
error.message);
return EXIT_FAILURE;
}
client = mongoc_client_new_from_uri (uri);
if (!client) {
return EXIT_FAILURE;
}
mongoc_client_set_error_api (client, 2);
db = mongoc_client_get_database (client, "test");
/* ascending index on field "x" */
bson_init (&keys);
BSON_APPEND_INT32 (&keys, "x", 1);
index_name = mongoc_collection_keys_to_index_string (&keys);
create_indexes = BCON_NEW ("createIndexes",
BCON_UTF8 (collection_name),
"indexes",
"[",
"{",
"key",
BCON_DOCUMENT (&keys),
"name",
BCON_UTF8 (index_name),
"}",
"]");
r = mongoc_database_write_command_with_opts (
db, create_indexes, NULL /* opts */, &reply, &error);
reply_str = bson_as_json (&reply, NULL);
printf ("%s\n", reply_str);
if (!r) {
fprintf (stderr, "Error in createIndexes: %s\n", error.message);
}
bson_free (index_name);
bson_free (reply_str);
bson_destroy (&reply);
bson_destroy (create_indexes);
mongoc_database_destroy (db);
mongoc_uri_destroy (uri);
mongoc_client_destroy (client);
mongoc_cleanup ();
return r ? EXIT_SUCCESS : EXIT_FAILURE;
}


Aids for Debugging

GDB

This repository contains a .gdbinit file that contains helper functions to aid debugging of data structures. GDB will load this file automatically if you have added the directory which contains the .gdbinit file to GDB's auto-load safe-path, and you start GDB from the directory which holds the .gdbinit file.

You can see the safe-path with show auto-load safe-path on a GDB prompt. You can configure it by setting it in ~/.gdbinit with:

add-auto-load-safe-path /path/to/mongo-c-driver


If you haven't added the path to your auto-load safe-path, or start GDB in another directory, load the file with:

source path/to/mongo-c-driver/.gdbinit


The .gdbinit file defines the printbson function, which shows the contents of a bson_t * variable. If you have a local bson_t, then you must prefix the variable with a &.

An example GDB session looks like:

(gdb) printbson bson
ALLOC [0x555556cd7310 + 0] (len=475)
{

'bool' : true,
'int32' : NumberInt("42"),
'int64' : NumberLong("3000000042"),
'string' : "Stŕìñg",
'objectId' : ObjectID("5A1442F3122D331C3C6757E1"),
'utcDateTime' : UTCDateTime(1511277299031),
'arrayOfInts' : [
'0' : NumberInt("1"),
'1' : NumberInt("2")
],
'embeddedDocument' : {
'arrayOfStrings' : [
'0' : "one",
'1' : "two"
],
'double' : 2.718280,
'notherDoc' : {
'true' : NumberInt("1"),
'false' : false
}
},
'binary' : Binary("02", "3031343532333637"),
'regex' : Regex("@[a-z]+@", "im"),
'null' : null,
'js' : JavaScript("print foo"),
'jsws' : JavaScript("print foo") with scope: {
'f' : NumberInt("42"),
'a' : [
'0' : 3.141593,
'1' : 2.718282
]
},
'timestamp' : Timestamp(4294967295, 4294967295),
'double' : 3.141593
}


LLDB

This repository also includes a script that customizes LLDB's standard print command to print a bson_t or bson_t * as JSON:

(lldb) print b
(bson_t) $0 = {"x": 1, "y": 2}


The custom bson command provides more options:

(lldb) bson --verbose b
len=19
flags=INLINE|STATIC
{
  "x": 1,
  "y": 2
}
(lldb) bson --raw b
'\x13\x00\x00\x00\x10x\x00\x01\x00\x00\x00\x10y\x00\x02\x00\x00\x00\x00'


Type help bson for a list of options.

The script requires a build of libbson with debug symbols, and an installation of PyMongo. Install PyMongo with:

python -m pip install pymongo


If you see "No module named pip", install pip first, then run the previous command again.

Create a file ~/.lldbinit containing:

command script import /path/to/mongo-c-driver/lldb_bson.py


If you see "bson command installed by lldb_bson" at the beginning of your LLDB session, you've installed the script correctly.

Debug assertions

To enable runtime debug assertions, configure with -DENABLE_DEBUG_ASSERTIONS=ON.

Using Client-Side Field Level Encryption

New in MongoDB 4.2, Client-Side Field Level Encryption (also referred to as Client-Side Encryption) allows administrators and developers to encrypt specific data fields in addition to other MongoDB encryption features.

With Client-Side Encryption, developers can encrypt fields client side without any server-side configuration or directives. Client-Side Encryption supports workloads where applications must guarantee that unauthorized parties, including server administrators, cannot read the encrypted data.

Automatic encryption, where sensitive fields in commands are encrypted automatically, requires an Enterprise-only process to do query analysis.

Installation

libmongocrypt

There is a separate library, libmongocrypt, that must be installed prior to configuring libmongoc to enable Client-Side Encryption.

libmongocrypt depends on libbson. To build libmongoc with Client-Side Encryption support you must:

1. Install libbson
2. Build and install libmongocrypt
3. Build libmongoc

To install libbson, follow the instructions to install with a package manager: Install libbson with a Package Manager or build from source with cmake (disable building libmongoc with -DENABLE_MONGOC=OFF):

$ cd mongo-c-driver
$ mkdir cmake-build && cd cmake-build
$ cmake -DENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF -DENABLE_MONGOC=OFF ..
$ cmake --build . --target install


To build and install libmongocrypt, clone the repository and configure as follows:

$ cd libmongocrypt
$ mkdir cmake-build && cd cmake-build
$ cmake -DENABLE_SHARED_BSON=ON ..
$ cmake --build . --target install


Then, you should be able to build libmongoc with Client-Side Encryption.

$ cd mongo-c-driver
$ mkdir cmake-build && cd cmake-build
$ cmake -DENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF -DENABLE_MONGOC=ON -DENABLE_CLIENT_SIDE_ENCRYPTION=ON ..
$ cmake --build . --target install


mongocryptd

The mongocryptd binary is required for automatic Client-Side Encryption and is included as a component in the MongoDB Enterprise Server package. For detailed installation instructions see the MongoDB documentation on mongocryptd.

mongocryptd performs the following:

  • Parses the automatic encryption rules specified to the database connection. If the JSON schema contains invalid automatic encryption syntax or any document validation syntax, mongocryptd returns an error.
  • Uses the specified automatic encryption rules to mark fields in read and write operations for encryption.
  • Rejects read/write operations that may return unexpected or incorrect results when applied to an encrypted field. For supported and unsupported operations, see Read/Write Support with Automatic Field Level Encryption.

A mongoc_client_t configured with auto encryption will automatically spawn the mongocryptd process from the application's PATH. Applications can control the spawning behavior as part of the automatic encryption options. For example, to set a custom path to the mongocryptd process, set the mongocryptdSpawnPath with mongoc_auto_encryption_opts_set_extra().

bson_t *extra = BCON_NEW ("mongocryptdSpawnPath", "/path/to/mongocryptd");
mongoc_auto_encryption_opts_set_extra (opts, extra);


To control the logging output of mongocryptd pass mongocryptdSpawnArgs to mongoc_auto_encryption_opts_set_extra():

bson_t *extra = BCON_NEW ("mongocryptdSpawnArgs",
                          "[",
                          "--logpath=/path/to/mongocryptd.log",
                          "--logappend",
                          "]");
mongoc_auto_encryption_opts_set_extra (opts, extra);


If your application wishes to manage the mongocryptd process manually, it is possible to disable spawning mongocryptd:

bson_t *extra = BCON_NEW ("mongocryptdBypassSpawn",
                          BCON_BOOL (true),
                          "mongocryptdURI",
                          "mongodb://localhost:27020");
mongoc_auto_encryption_opts_set_extra (opts, extra);


mongocryptd is only responsible for supporting automatic Client-Side Encryption in the driver and does not itself perform any encryption or decryption.

Automatic Client-Side Field Level Encryption

Automatic Client-Side Encryption is enabled by calling mongoc_client_enable_auto_encryption() on a mongoc_client_t. The following examples show how to set up automatic client-side field level encryption using mongoc_client_encryption_t to create a new encryption data key.

NOTE:

Automatic client-side field level encryption requires MongoDB 4.2 enterprise or a MongoDB 4.2 Atlas cluster. The community version of the server supports automatic decryption as well as Explicit Encryption.


Providing Local Automatic Encryption Rules

The following example shows how to specify automatic encryption rules using a schema map set with mongoc_auto_encryption_opts_set_schema_map(). The automatic encryption rules are expressed using a strict subset of the JSON Schema syntax.

Supplying a schema map provides more security than relying on JSON Schemas obtained from the server. It protects against a malicious server advertising a false JSON Schema, which could trick the client into sending unencrypted data that should be encrypted.

JSON Schemas supplied in the schema map only apply to configuring automatic client-side field level encryption. Other validation rules in the JSON schema will not be enforced by the driver and will result in an error:

client-side-encryption-schema-map.c

#include <mongoc/mongoc.h>
#include <stdio.h>
#include <stdlib.h>
#include "client-side-encryption-helpers.h"
/* Helper method to create a new data key in the key vault, a schema to use
 * that key, and writes the schema to a file for later use. */
static bool
create_schema_file (bson_t *kms_providers,
                    const char *keyvault_db,
                    const char *keyvault_coll,
                    mongoc_client_t *keyvault_client,
                    bson_error_t *error)
{
mongoc_client_encryption_t *client_encryption = NULL;
mongoc_client_encryption_opts_t *client_encryption_opts = NULL;
mongoc_client_encryption_datakey_opts_t *datakey_opts = NULL;
bson_value_t datakey_id = {0};
char *keyaltnames[] = {"mongoc_encryption_example_1"};
bson_t *schema = NULL;
char *schema_string = NULL;
size_t schema_string_len;
FILE *outfile = NULL;
bool ret = false;
client_encryption_opts = mongoc_client_encryption_opts_new ();
mongoc_client_encryption_opts_set_kms_providers (client_encryption_opts,
kms_providers);
mongoc_client_encryption_opts_set_keyvault_namespace (
client_encryption_opts, keyvault_db, keyvault_coll);
mongoc_client_encryption_opts_set_keyvault_client (client_encryption_opts,
keyvault_client);
client_encryption =
mongoc_client_encryption_new (client_encryption_opts, error);
if (!client_encryption) {
goto fail;
}
/* Create a new data key and json schema for the encryptedField.
* https://dochub.mongodb.org/core/client-side-field-level-encryption-automatic-encryption-rules
*/
datakey_opts = mongoc_client_encryption_datakey_opts_new ();
mongoc_client_encryption_datakey_opts_set_keyaltnames (
datakey_opts, keyaltnames, 1);
if (!mongoc_client_encryption_create_datakey (
client_encryption, "local", datakey_opts, &datakey_id, error)) {
goto fail;
}
/* Create a schema describing that "encryptedField" is a string encrypted
* with the newly created data key using deterministic encryption. */
schema = BCON_NEW ("properties",
"{",
"encryptedField",
"{",
"encrypt",
"{",
"keyId",
"[",
BCON_BIN (datakey_id.value.v_binary.subtype,
datakey_id.value.v_binary.data,
datakey_id.value.v_binary.data_len),
"]",
"bsonType",
"string",
"algorithm",
MONGOC_AEAD_AES_256_CBC_HMAC_SHA_512_DETERMINISTIC,
"}",
"}",
"}",
"bsonType",
"object");
/* Use canonical JSON so that other drivers and tools will be
* able to parse the MongoDB extended JSON file. */
schema_string = bson_as_canonical_extended_json (schema, &schema_string_len);
outfile = fopen ("jsonSchema.json", "w");
if (0 == fwrite (schema_string, sizeof (char), schema_string_len, outfile)) {
fprintf (stderr, "failed to write to file\n");
goto fail;
}
ret = true;
fail:
mongoc_client_encryption_destroy (client_encryption);
mongoc_client_encryption_datakey_opts_destroy (datakey_opts);
mongoc_client_encryption_opts_destroy (client_encryption_opts);
bson_free (schema_string);
bson_destroy (schema);
bson_value_destroy (&datakey_id);
if (outfile) {
fclose (outfile);
}
return ret;
}

/* This example demonstrates how to use automatic encryption with a client-side
 * schema map using the enterprise version of MongoDB */
int
main (void)
{
/* The collection used to store the encryption data keys. */
#define KEYVAULT_DB "encryption"
#define KEYVAULT_COLL "__libmongocTestKeyVault"
/* The collection used to store the encrypted documents in this example. */
#define ENCRYPTED_DB "test"
#define ENCRYPTED_COLL "coll"
int exit_status = EXIT_FAILURE;
bool ret;
uint8_t *local_masterkey = NULL;
uint32_t local_masterkey_len;
bson_t *kms_providers = NULL;
bson_error_t error = {0};
bson_t *index_keys = NULL;
char *index_name = NULL;
bson_t *create_index_cmd = NULL;
bson_json_reader_t *reader = NULL;
bson_t schema = BSON_INITIALIZER;
bson_t *schema_map = NULL;
/* The MongoClient used to access the key vault (keyvault_namespace). */
mongoc_client_t *keyvault_client = NULL;
mongoc_collection_t *keyvault_coll = NULL;
mongoc_auto_encryption_opts_t *auto_encryption_opts = NULL;
mongoc_client_t *client = NULL;
mongoc_collection_t *coll = NULL;
bson_t *to_insert = NULL;
mongoc_client_t *unencrypted_client = NULL;
mongoc_collection_t *unencrypted_coll = NULL;
mongoc_init ();
/* Configure the master key. This must be the same master key that was used
* to create the encryption key. */
local_masterkey =
hex_to_bin (getenv ("LOCAL_MASTERKEY"), &local_masterkey_len);
if (!local_masterkey || local_masterkey_len != 96) {
fprintf (stderr,
"Specify LOCAL_MASTERKEY environment variable as a "
"secure random 96 byte hex value.\n");
goto fail;
}
kms_providers = BCON_NEW ("local",
"{",
"key",
BCON_BIN (0, local_masterkey, local_masterkey_len),
"}");
/* Set up the key vault for this example. */
keyvault_client = mongoc_client_new (
"mongodb://localhost/?appname=client-side-encryption-keyvault");
keyvault_coll = mongoc_client_get_collection (
keyvault_client, KEYVAULT_DB, KEYVAULT_COLL);
mongoc_collection_drop (keyvault_coll, NULL);
/* Create a unique index to ensure that two data keys cannot share the same
* keyAltName. This is recommended practice for the key vault. */
index_keys = BCON_NEW ("keyAltNames", BCON_INT32 (1));
index_name = mongoc_collection_keys_to_index_string (index_keys);
create_index_cmd = BCON_NEW ("createIndexes",
KEYVAULT_COLL,
"indexes",
"[",
"{",
"key",
BCON_DOCUMENT (index_keys),
"name",
index_name,
"unique",
BCON_BOOL (true),
"partialFilterExpression",
"{",
"keyAltNames",
"{",
"$exists",
BCON_BOOL (true),
"}",
"}",
"}",
"]");
ret = mongoc_client_command_simple (keyvault_client,
KEYVAULT_DB,
create_index_cmd,
NULL /* read prefs */,
NULL /* reply */,
&error);
if (!ret) {
goto fail;
}
/* Create a new data key and a schema using it for encryption. Save the
* schema to the file jsonSchema.json */
ret = create_schema_file (
kms_providers, KEYVAULT_DB, KEYVAULT_COLL, keyvault_client, &error);
if (!ret) {
goto fail;
}
/* Load the JSON Schema and construct the local schema_map option. */
reader = bson_json_reader_new_from_file ("jsonSchema.json", &error);
if (!reader) {
goto fail;
}
bson_json_reader_read (reader, &schema, &error);
/* Construct the schema map, mapping the namespace of the collection to the
* schema describing encryption. */
schema_map =
BCON_NEW (ENCRYPTED_DB "." ENCRYPTED_COLL, BCON_DOCUMENT (&schema));
auto_encryption_opts = mongoc_auto_encryption_opts_new ();
mongoc_auto_encryption_opts_set_keyvault_client (auto_encryption_opts,
keyvault_client);
mongoc_auto_encryption_opts_set_keyvault_namespace (
auto_encryption_opts, KEYVAULT_DB, KEYVAULT_COLL);
mongoc_auto_encryption_opts_set_kms_providers (auto_encryption_opts,
kms_providers);
mongoc_auto_encryption_opts_set_schema_map (auto_encryption_opts,
schema_map);
client =
mongoc_client_new ("mongodb://localhost/?appname=client-side-encryption");
/* Enable automatic encryption. It will determine that encryption is
* necessary from the schema map instead of relying on the server to provide
* a schema. */
ret = mongoc_client_enable_auto_encryption (
client, auto_encryption_opts, &error);
if (!ret) {
goto fail;
}
coll = mongoc_client_get_collection (client, ENCRYPTED_DB, ENCRYPTED_COLL);
/* Clear old data */
mongoc_collection_drop (coll, NULL);
to_insert = BCON_NEW ("encryptedField", "123456789");
ret = mongoc_collection_insert_one (
coll, to_insert, NULL /* opts */, NULL /* reply */, &error);
if (!ret) {
goto fail;
}
printf ("decrypted document: ");
if (!print_one_document (coll, &error)) {
goto fail;
}
printf ("\n");
unencrypted_client = mongoc_client_new (
"mongodb://localhost/?appname=client-side-encryption-unencrypted");
unencrypted_coll = mongoc_client_get_collection (
unencrypted_client, ENCRYPTED_DB, ENCRYPTED_COLL);
printf ("encrypted document: ");
if (!print_one_document (unencrypted_coll, &error)) {
goto fail;
}
printf ("\n");
exit_status = EXIT_SUCCESS;
fail:
if (error.code) {
fprintf (stderr, "error: %s\n", error.message);
}
bson_free (local_masterkey);
bson_destroy (kms_providers);
mongoc_collection_destroy (keyvault_coll);
bson_destroy (index_keys);
bson_free (index_name);
bson_destroy (create_index_cmd);
bson_json_reader_destroy (reader);
mongoc_auto_encryption_opts_destroy (auto_encryption_opts);
mongoc_collection_destroy (coll);
mongoc_client_destroy (client);
bson_destroy (to_insert);
mongoc_collection_destroy (unencrypted_coll);
mongoc_client_destroy (unencrypted_client);
mongoc_client_destroy (keyvault_client);
bson_destroy (&schema);
bson_destroy (schema_map);
mongoc_cleanup ();
return exit_status;
}


Server-Side Field Level Encryption Enforcement

The MongoDB 4.2 server supports using schema validation to enforce encryption of specific fields in a collection. This schema validation will prevent an application from inserting unencrypted values for any fields marked with the "encrypt" JSON schema keyword.

The following example shows how to set up automatic client-side field level encryption using mongoc_client_encryption_t to create a new encryption data key and create a collection with the Automatic Encryption JSON Schema Syntax:

client-side-encryption-server-schema.c

#include <mongoc/mongoc.h>
#include <stdio.h>
#include <stdlib.h>
#include "client-side-encryption-helpers.h"
/* Helper method to create and return a JSON schema to use for encryption.
The caller will use the returned schema for server-side encryption validation.
*/
static bson_t *
create_schema (bson_t *kms_providers,
               const char *keyvault_db,
               const char *keyvault_coll,
               mongoc_client_t *keyvault_client,
               bson_error_t *error)
{
mongoc_client_encryption_t *client_encryption = NULL;
mongoc_client_encryption_opts_t *client_encryption_opts = NULL;
mongoc_client_encryption_datakey_opts_t *datakey_opts = NULL;
bson_value_t datakey_id = {0};
char *keyaltnames[] = {"mongoc_encryption_example_2"};
bson_t *schema = NULL;
client_encryption_opts = mongoc_client_encryption_opts_new ();
mongoc_client_encryption_opts_set_kms_providers (client_encryption_opts,
kms_providers);
mongoc_client_encryption_opts_set_keyvault_namespace (
client_encryption_opts, keyvault_db, keyvault_coll);
mongoc_client_encryption_opts_set_keyvault_client (client_encryption_opts,
keyvault_client);
client_encryption =
mongoc_client_encryption_new (client_encryption_opts, error);
if (!client_encryption) {
goto fail;
}
/* Create a new data key and json schema for the encryptedField.
* https://dochub.mongodb.org/core/client-side-field-level-encryption-automatic-encryption-rules
*/
datakey_opts = mongoc_client_encryption_datakey_opts_new ();
mongoc_client_encryption_datakey_opts_set_keyaltnames (
datakey_opts, keyaltnames, 1);
if (!mongoc_client_encryption_create_datakey (
client_encryption, "local", datakey_opts, &datakey_id, error)) {
goto fail;
}
/* Create a schema describing that "encryptedField" is a string encrypted
* with the newly created data key using deterministic encryption. */
schema = BCON_NEW ("properties",
"{",
"encryptedField",
"{",
"encrypt",
"{",
"keyId",
"[",
BCON_BIN (datakey_id.value.v_binary.subtype,
datakey_id.value.v_binary.data,
datakey_id.value.v_binary.data_len),
"]",
"bsonType",
"string",
"algorithm",
MONGOC_AEAD_AES_256_CBC_HMAC_SHA_512_DETERMINISTIC,
"}",
"}",
"}",
"bsonType",
"object"); fail:
mongoc_client_encryption_destroy (client_encryption);
mongoc_client_encryption_datakey_opts_destroy (datakey_opts);
mongoc_client_encryption_opts_destroy (client_encryption_opts);
bson_value_destroy (&datakey_id);
return schema;
}

/* This example demonstrates how to use automatic encryption with a server-side
 * schema using the enterprise version of MongoDB */
int
main (void)
{
/* The collection used to store the encryption data keys. */
#define KEYVAULT_DB "encryption"
#define KEYVAULT_COLL "__libmongocTestKeyVault"
/* The collection used to store the encrypted documents in this example. */
#define ENCRYPTED_DB "test"
#define ENCRYPTED_COLL "coll"
int exit_status = EXIT_FAILURE;
bool ret;
uint8_t *local_masterkey = NULL;
uint32_t local_masterkey_len;
bson_t *kms_providers = NULL;
bson_error_t error = {0};
bson_t *index_keys = NULL;
char *index_name = NULL;
bson_t *create_index_cmd = NULL;
bson_json_reader_t *reader = NULL;
bson_t *schema = NULL;
/* The MongoClient used to access the key vault (keyvault_namespace). */
mongoc_client_t *keyvault_client = NULL;
mongoc_collection_t *keyvault_coll = NULL;
mongoc_auto_encryption_opts_t *auto_encryption_opts = NULL;
mongoc_client_t *client = NULL;
mongoc_collection_t *coll = NULL;
bson_t *to_insert = NULL;
mongoc_client_t *unencrypted_client = NULL;
mongoc_collection_t *unencrypted_coll = NULL;
bson_t *create_cmd = NULL;
bson_t *create_cmd_opts = NULL;
mongoc_write_concern_t *wc = NULL;
mongoc_init ();
/* Configure the master key. This must be the same master key that was used
* to create
* the encryption key. */
local_masterkey =
hex_to_bin (getenv ("LOCAL_MASTERKEY"), &local_masterkey_len);
if (!local_masterkey || local_masterkey_len != 96) {
fprintf (stderr,
"Specify LOCAL_MASTERKEY environment variable as a "
"secure random 96 byte hex value.\n");
goto fail;
}
kms_providers = BCON_NEW ("local",
"{",
"key",
BCON_BIN (0, local_masterkey, local_masterkey_len),
"}");
/* Set up the key vault for this example. */
keyvault_client = mongoc_client_new (
"mongodb://localhost/?appname=client-side-encryption-keyvault");
keyvault_coll = mongoc_client_get_collection (
keyvault_client, KEYVAULT_DB, KEYVAULT_COLL);
mongoc_collection_drop (keyvault_coll, NULL);
/* Create a unique index to ensure that two data keys cannot share the same
* keyAltName. This is recommended practice for the key vault. */
index_keys = BCON_NEW ("keyAltNames", BCON_INT32 (1));
index_name = mongoc_collection_keys_to_index_string (index_keys);
create_index_cmd = BCON_NEW ("createIndexes",
KEYVAULT_COLL,
"indexes",
"[",
"{",
"key",
BCON_DOCUMENT (index_keys),
"name",
index_name,
"unique",
BCON_BOOL (true),
"partialFilterExpression",
"{",
"keyAltNames",
"{",
"$exists",
BCON_BOOL (true),
"}",
"}",
"}",
"]");
ret = mongoc_client_command_simple (keyvault_client,
KEYVAULT_DB,
create_index_cmd,
NULL /* read prefs */,
NULL /* reply */,
&error);
if (!ret) {
goto fail;
}
auto_encryption_opts = mongoc_auto_encryption_opts_new ();
mongoc_auto_encryption_opts_set_keyvault_client (auto_encryption_opts,
keyvault_client);
mongoc_auto_encryption_opts_set_keyvault_namespace (
auto_encryption_opts, KEYVAULT_DB, KEYVAULT_COLL);
mongoc_auto_encryption_opts_set_kms_providers (auto_encryption_opts,
kms_providers);
schema = create_schema (
kms_providers, KEYVAULT_DB, KEYVAULT_COLL, keyvault_client, &error);
if (!schema) {
goto fail;
}
client =
mongoc_client_new ("mongodb://localhost/?appname=client-side-encryption");
ret = mongoc_client_enable_auto_encryption (
client, auto_encryption_opts, &error);
if (!ret) {
goto fail;
}
coll = mongoc_client_get_collection (client, ENCRYPTED_DB, ENCRYPTED_COLL);
/* Clear old data */
mongoc_collection_drop (coll, NULL);
/* Create the collection with the encryption JSON Schema. */
create_cmd = BCON_NEW ("create",
ENCRYPTED_COLL,
"validator",
"{",
"$jsonSchema",
BCON_DOCUMENT (schema),
"}");
wc = mongoc_write_concern_new ();
mongoc_write_concern_set_wmajority (wc, 0);
create_cmd_opts = bson_new ();
mongoc_write_concern_append (wc, create_cmd_opts);
ret = mongoc_client_command_with_opts (client,
ENCRYPTED_DB,
create_cmd,
NULL /* read prefs */,
create_cmd_opts,
NULL /* reply */,
&error);
if (!ret) {
goto fail;
}
to_insert = BCON_NEW ("encryptedField", "123456789");
ret = mongoc_collection_insert_one (
coll, to_insert, NULL /* opts */, NULL /* reply */, &error);
if (!ret) {
goto fail;
}
printf ("decrypted document: ");
if (!print_one_document (coll, &error)) {
goto fail;
}
printf ("\n");
unencrypted_client = mongoc_client_new (
"mongodb://localhost/?appname=client-side-encryption-unencrypted");
unencrypted_coll = mongoc_client_get_collection (
unencrypted_client, ENCRYPTED_DB, ENCRYPTED_COLL);
printf ("encrypted document: ");
if (!print_one_document (unencrypted_coll, &error)) {
goto fail;
}
printf ("\n");
/* Expect a server-side error if inserting with the unencrypted collection.
*/
ret = mongoc_collection_insert_one (
unencrypted_coll, to_insert, NULL /* opts */, NULL /* reply */, &error);
if (!ret) {
printf ("insert with unencrypted collection failed: %s\n", error.message);
memset (&error, 0, sizeof (error));
}
exit_status = EXIT_SUCCESS;
fail:
if (error.code) {
fprintf (stderr, "error: %s\n", error.message);
}
bson_free (local_masterkey);
bson_destroy (kms_providers);
mongoc_collection_destroy (keyvault_coll);
bson_destroy (index_keys);
bson_free (index_name);
bson_destroy (create_index_cmd);
bson_json_reader_destroy (reader);
mongoc_auto_encryption_opts_destroy (auto_encryption_opts);
mongoc_collection_destroy (coll);
mongoc_client_destroy (client);
bson_destroy (to_insert);
mongoc_collection_destroy (unencrypted_coll);
mongoc_client_destroy (unencrypted_client);
mongoc_client_destroy (keyvault_client);
bson_destroy (schema);
bson_destroy (create_cmd);
bson_destroy (create_cmd_opts);
mongoc_write_concern_destroy (wc);
mongoc_cleanup ();
return exit_status;
}


Explicit Encryption

Explicit encryption is a MongoDB community feature and does not use the mongocryptd process. Explicit encryption is provided by the mongoc_client_encryption_t class, for example:

client-side-encryption-explicit.c

#include <mongoc/mongoc.h>
#include <stdio.h>
#include <stdlib.h>
#include "client-side-encryption-helpers.h"
/* This example demonstrates how to use explicit encryption and decryption
 * using the community version of MongoDB */
int
main (void)
{
/* The collection used to store the encryption data keys. */
#define KEYVAULT_DB "encryption"
#define KEYVAULT_COLL "__libmongocTestKeyVault"
/* The collection used to store the encrypted documents in this example. */
#define ENCRYPTED_DB "test"
#define ENCRYPTED_COLL "coll"
int exit_status = EXIT_FAILURE;
bool ret;
uint8_t *local_masterkey = NULL;
uint32_t local_masterkey_len;
bson_t *kms_providers = NULL;
bson_error_t error = {0};
bson_t *index_keys = NULL;
char *index_name = NULL;
bson_t *create_index_cmd = NULL;
bson_t *schema = NULL;
mongoc_client_t *client = NULL;
mongoc_collection_t *coll = NULL;
mongoc_collection_t *keyvault_coll = NULL;
bson_t *to_insert = NULL;
bson_t *create_cmd = NULL;
bson_t *create_cmd_opts = NULL;
mongoc_write_concern_t *wc = NULL;
mongoc_client_encryption_t *client_encryption = NULL;
mongoc_client_encryption_opts_t *client_encryption_opts = NULL;
mongoc_client_encryption_datakey_opts_t *datakey_opts = NULL;
char *keyaltnames[] = {"mongoc_encryption_example_3"};
bson_value_t datakey_id = {0};
bson_value_t encrypted_field = {0};
bson_value_t to_encrypt = {0};
mongoc_client_encryption_encrypt_opts_t *encrypt_opts = NULL;
bson_value_t decrypted = {0};
mongoc_init ();
/* Configure the master key. This must be the same master key that was used
* to create the encryption key. */
local_masterkey =
hex_to_bin (getenv ("LOCAL_MASTERKEY"), &local_masterkey_len);
if (!local_masterkey || local_masterkey_len != 96) {
fprintf (stderr,
"Specify LOCAL_MASTERKEY environment variable as a "
"secure random 96 byte hex value.\n");
goto fail;
}
kms_providers = BCON_NEW ("local",
"{",
"key",
BCON_BIN (0, local_masterkey, local_masterkey_len),
"}");
/* The mongoc_client_t used to read/write application data. */
client =
mongoc_client_new ("mongodb://localhost/?appname=client-side-encryption");
coll = mongoc_client_get_collection (client, ENCRYPTED_DB, ENCRYPTED_COLL);
/* Clear old data */
mongoc_collection_drop (coll, NULL);
/* Set up the key vault for this example. */
keyvault_coll =
mongoc_client_get_collection (client, KEYVAULT_DB, KEYVAULT_COLL);
mongoc_collection_drop (keyvault_coll, NULL);
/* Create a unique index to ensure that two data keys cannot share the same
* keyAltName. This is recommended practice for the key vault. */
index_keys = BCON_NEW ("keyAltNames", BCON_INT32 (1));
index_name = mongoc_collection_keys_to_index_string (index_keys);
create_index_cmd = BCON_NEW ("createIndexes",
KEYVAULT_COLL,
"indexes",
"[",
"{",
"key",
BCON_DOCUMENT (index_keys),
"name",
index_name,
"unique",
BCON_BOOL (true),
"partialFilterExpression",
"{",
"keyAltNames",
"{",
"$exists",
BCON_BOOL (true),
"}",
"}",
"}",
"]");
ret = mongoc_client_command_simple (client,
KEYVAULT_DB,
create_index_cmd,
NULL /* read prefs */,
NULL /* reply */,
&error);
if (!ret) {
goto fail;
}
client_encryption_opts = mongoc_client_encryption_opts_new ();
mongoc_client_encryption_opts_set_kms_providers (client_encryption_opts,
kms_providers);
mongoc_client_encryption_opts_set_keyvault_namespace (
client_encryption_opts, KEYVAULT_DB, KEYVAULT_COLL);
/* Set a mongoc_client_t to use for reading/writing to the key vault. This
* can be the same mongoc_client_t used by the main application. */
mongoc_client_encryption_opts_set_keyvault_client (client_encryption_opts,
client);
client_encryption =
mongoc_client_encryption_new (client_encryption_opts, &error);
if (!client_encryption) {
goto fail;
}
/* Create a new data key for the encryptedField.
* https://dochub.mongodb.org/core/client-side-field-level-encryption-automatic-encryption-rules
*/
datakey_opts = mongoc_client_encryption_datakey_opts_new ();
mongoc_client_encryption_datakey_opts_set_keyaltnames (
datakey_opts, keyaltnames, 1);
if (!mongoc_client_encryption_create_datakey (
client_encryption, "local", datakey_opts, &datakey_id, &error)) {
goto fail;
}
/* Explicitly encrypt a field */
encrypt_opts = mongoc_client_encryption_encrypt_opts_new ();
mongoc_client_encryption_encrypt_opts_set_algorithm (
encrypt_opts, MONGOC_AEAD_AES_256_CBC_HMAC_SHA_512_DETERMINISTIC);
mongoc_client_encryption_encrypt_opts_set_keyid (encrypt_opts, &datakey_id);
to_encrypt.value_type = BSON_TYPE_UTF8;
to_encrypt.value.v_utf8.str = "123456789";
to_encrypt.value.v_utf8.len = strlen (to_encrypt.value.v_utf8.str);
ret = mongoc_client_encryption_encrypt (
client_encryption, &to_encrypt, encrypt_opts, &encrypted_field, &error);
if (!ret) {
goto fail;
}
to_insert = bson_new ();
BSON_APPEND_VALUE (to_insert, "encryptedField", &encrypted_field);
ret = mongoc_collection_insert_one (
coll, to_insert, NULL /* opts */, NULL /* reply */, &error);
if (!ret) {
goto fail;
}
printf ("encrypted document: ");
if (!print_one_document (coll, &error)) {
goto fail;
}
printf ("\n");
/* Explicitly decrypt a field */
ret = mongoc_client_encryption_decrypt (
client_encryption, &encrypted_field, &decrypted, &error);
if (!ret) {
goto fail;
}
printf ("decrypted value: %s\n", decrypted.value.v_utf8.str);
exit_status = EXIT_SUCCESS;
fail:
if (error.code) {
fprintf (stderr, "error: %s\n", error.message);
}
bson_free (local_masterkey);
bson_destroy (kms_providers);
mongoc_collection_destroy (keyvault_coll);
bson_destroy (index_keys);
bson_free (index_name);
bson_destroy (create_index_cmd);
mongoc_collection_destroy (coll);
mongoc_client_destroy (client);
bson_destroy (to_insert);
bson_destroy (schema);
bson_destroy (create_cmd);
bson_destroy (create_cmd_opts);
mongoc_write_concern_destroy (wc);
mongoc_client_encryption_destroy (client_encryption);
mongoc_client_encryption_datakey_opts_destroy (datakey_opts);
mongoc_client_encryption_opts_destroy (client_encryption_opts);
bson_value_destroy (&encrypted_field);
mongoc_client_encryption_encrypt_opts_destroy (encrypt_opts);
bson_value_destroy (&decrypted);
bson_value_destroy (&datakey_id);
mongoc_cleanup ();
return exit_status;
}


Explicit Encryption with Automatic Decryption

Although automatic encryption requires MongoDB 4.2 enterprise or a MongoDB 4.2 Atlas cluster, automatic decryption is supported for all users. To configure automatic decryption without automatic encryption, set bypass_auto_encryption to true in mongoc_auto_encryption_opts_t:

client-side-encryption-auto-decryption.c

#include <mongoc/mongoc.h>
#include <stdio.h>
#include <stdlib.h>
#include "client-side-encryption-helpers.h"
/* This example demonstrates how to set up automatic decryption without
 * automatic encryption using the community version of MongoDB */
int
main (void)
{
/* The collection used to store the encryption data keys. */
#define KEYVAULT_DB "encryption"
#define KEYVAULT_COLL "__libmongocTestKeyVault"
/* The collection used to store the encrypted documents in this example. */
#define ENCRYPTED_DB "test"
#define ENCRYPTED_COLL "coll"
int exit_status = EXIT_FAILURE;
bool ret;
uint8_t *local_masterkey = NULL;
uint32_t local_masterkey_len;
bson_t *kms_providers = NULL;
bson_error_t error = {0};
bson_t *index_keys = NULL;
char *index_name = NULL;
bson_t *create_index_cmd = NULL;
bson_t *schema = NULL;
mongoc_client_t *client = NULL;
mongoc_collection_t *coll = NULL;
mongoc_collection_t *keyvault_coll = NULL;
bson_t *to_insert = NULL;
bson_t *create_cmd = NULL;
bson_t *create_cmd_opts = NULL;
mongoc_write_concern_t *wc = NULL;
mongoc_client_encryption_t *client_encryption = NULL;
mongoc_client_encryption_opts_t *client_encryption_opts = NULL;
mongoc_client_encryption_datakey_opts_t *datakey_opts = NULL;
char *keyaltnames[] = {"mongoc_encryption_example_4"};
bson_value_t datakey_id = {0};
bson_value_t encrypted_field = {0};
bson_value_t to_encrypt = {0};
mongoc_client_encryption_encrypt_opts_t *encrypt_opts = NULL;
bson_value_t decrypted = {0};
mongoc_auto_encryption_opts_t *auto_encryption_opts = NULL;
mongoc_client_t *unencrypted_client = NULL;
mongoc_collection_t *unencrypted_coll = NULL;
mongoc_init ();
/* Configure the master key. This must be the same master key that was used
* to create the encryption key. */
local_masterkey =
hex_to_bin (getenv ("LOCAL_MASTERKEY"), &local_masterkey_len);
if (!local_masterkey || local_masterkey_len != 96) {
fprintf (stderr,
"Specify LOCAL_MASTERKEY environment variable as a "
"secure random 96 byte hex value.\n");
goto fail;
}
kms_providers = BCON_NEW ("local",
"{",
"key",
BCON_BIN (0, local_masterkey, local_masterkey_len),
"}");
client =
mongoc_client_new ("mongodb://localhost/?appname=client-side-encryption");
auto_encryption_opts = mongoc_auto_encryption_opts_new ();
mongoc_auto_encryption_opts_set_keyvault_namespace (
auto_encryption_opts, KEYVAULT_DB, KEYVAULT_COLL);
mongoc_auto_encryption_opts_set_kms_providers (auto_encryption_opts,
kms_providers);
/* Setting bypass_auto_encryption to true disables automatic encryption but
* keeps the automatic decryption behavior. bypass_auto_encryption will also
* disable spawning mongocryptd */
mongoc_auto_encryption_opts_set_bypass_auto_encryption (auto_encryption_opts,
true);
/* Once bypass_auto_encryption is set, community users can enable auto
* encryption on the client. This will, in fact, only perform automatic
* decryption. */
ret = mongoc_client_enable_auto_encryption (
client, auto_encryption_opts, &error);
if (!ret) {
goto fail;
}
/* Now that automatic decryption is on, we can test it by inserting a
* document with an explicitly encrypted value into the collection. When we
* look up the document later, it should be automatically decrypted for us.
*/
coll = mongoc_client_get_collection (client, ENCRYPTED_DB, ENCRYPTED_COLL);
/* Clear old data */
mongoc_collection_drop (coll, NULL);
/* Set up the key vault for this example. */
keyvault_coll =
mongoc_client_get_collection (client, KEYVAULT_DB, KEYVAULT_COLL);
mongoc_collection_drop (keyvault_coll, NULL);
/* Create a unique index to ensure that two data keys cannot share the same
* keyAltName. This is recommended practice for the key vault. */
index_keys = BCON_NEW ("keyAltNames", BCON_INT32 (1));
index_name = mongoc_collection_keys_to_index_string (index_keys);
create_index_cmd = BCON_NEW ("createIndexes",
KEYVAULT_COLL,
"indexes",
"[",
"{",
"key",
BCON_DOCUMENT (index_keys),
"name",
index_name,
"unique",
BCON_BOOL (true),
"partialFilterExpression",
"{",
"keyAltNames",
"{",
"$exists",
BCON_BOOL (true),
"}",
"}",
"}",
"]");
ret = mongoc_client_command_simple (client,
KEYVAULT_DB,
create_index_cmd,
NULL /* read prefs */,
NULL /* reply */,
&error);
if (!ret) {
goto fail;
}
client_encryption_opts = mongoc_client_encryption_opts_new ();
mongoc_client_encryption_opts_set_kms_providers (client_encryption_opts,
kms_providers);
mongoc_client_encryption_opts_set_keyvault_namespace (
client_encryption_opts, KEYVAULT_DB, KEYVAULT_COLL);
/* The key vault client is used for reading to/from the key vault. This can
* be the same mongoc_client_t used by the application. */
mongoc_client_encryption_opts_set_keyvault_client (client_encryption_opts,
client);
client_encryption =
mongoc_client_encryption_new (client_encryption_opts, &error);
if (!client_encryption) {
goto fail;
}
/* Create a new data key for the encryptedField.
* https://dochub.mongodb.org/core/client-side-field-level-encryption-automatic-encryption-rules
*/
datakey_opts = mongoc_client_encryption_datakey_opts_new ();
mongoc_client_encryption_datakey_opts_set_keyaltnames (
datakey_opts, keyaltnames, 1);
ret = mongoc_client_encryption_create_datakey (
client_encryption, "local", datakey_opts, &datakey_id, &error);
if (!ret) {
goto fail;
}
/* Explicitly encrypt a field. */
encrypt_opts = mongoc_client_encryption_encrypt_opts_new ();
mongoc_client_encryption_encrypt_opts_set_algorithm (
encrypt_opts, MONGOC_AEAD_AES_256_CBC_HMAC_SHA_512_DETERMINISTIC);
mongoc_client_encryption_encrypt_opts_set_keyaltname (
encrypt_opts, "mongoc_encryption_example_4");
to_encrypt.value_type = BSON_TYPE_UTF8;
to_encrypt.value.v_utf8.str = "123456789";
to_encrypt.value.v_utf8.len = strlen (to_encrypt.value.v_utf8.str);
ret = mongoc_client_encryption_encrypt (
client_encryption, &to_encrypt, encrypt_opts, &encrypted_field, &error);
if (!ret) {
goto fail;
}
to_insert = bson_new ();
BSON_APPEND_VALUE (to_insert, "encryptedField", &encrypted_field);
ret = mongoc_collection_insert_one (
coll, to_insert, NULL /* opts */, NULL /* reply */, &error);
if (!ret) {
goto fail;
}
/* When we retrieve the document, any encrypted fields will get automatically
* decrypted by the driver. */
printf ("decrypted document: ");
if (!print_one_document (coll, &error)) {
goto fail;
}
printf ("\n");
unencrypted_client =
mongoc_client_new ("mongodb://localhost/?appname=client-side-encryption");
unencrypted_coll = mongoc_client_get_collection (
unencrypted_client, ENCRYPTED_DB, ENCRYPTED_COLL);
printf ("encrypted document: ");
if (!print_one_document (unencrypted_coll, &error)) {
goto fail;
}
printf ("\n");
exit_status = EXIT_SUCCESS;
fail:
if (error.code) {
fprintf (stderr, "error: %s\n", error.message);
}
bson_free (local_masterkey);
bson_destroy (kms_providers);
mongoc_collection_destroy (keyvault_coll);
bson_destroy (index_keys);
bson_free (index_name);
bson_destroy (create_index_cmd);
mongoc_collection_destroy (coll);
mongoc_client_destroy (client);
bson_destroy (to_insert);
bson_destroy (schema);
bson_destroy (create_cmd);
bson_destroy (create_cmd_opts);
mongoc_write_concern_destroy (wc);
mongoc_client_encryption_destroy (client_encryption);
mongoc_client_encryption_datakey_opts_destroy (datakey_opts);
mongoc_client_encryption_opts_destroy (client_encryption_opts);
bson_value_destroy (&encrypted_field);
mongoc_client_encryption_encrypt_opts_destroy (encrypt_opts);
bson_value_destroy (&decrypted);
bson_value_destroy (&datakey_id);
mongoc_collection_destroy (unencrypted_coll);
mongoc_client_destroy (unencrypted_client);
mongoc_auto_encryption_opts_destroy (auto_encryption_opts);
mongoc_cleanup ();
return exit_status;
}


API Reference

Initialization and cleanup

Synopsis

Initialize the MongoDB C Driver by calling mongoc_init() exactly once at the beginning of your program. It is responsible for initializing global state such as process counters, SSL, and threading primitives.

The exception is mongoc_log_set_handler(), which should be called before mongoc_init(); otherwise some log traces will not use your log handling function. See Custom Log Handlers for a detailed example.

Call mongoc_cleanup() exactly once at the end of your program to release all memory and other resources allocated by the driver. You must not call any other MongoDB C Driver functions after mongoc_cleanup(). Note that mongoc_init() does not reinitialize the driver after mongoc_cleanup().
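
As a minimal sketch of this lifecycle (the connection string and the work done with the client are placeholders, not part of the reference):

#include <mongoc/mongoc.h>

int
main (void)
{
   mongoc_client_t *client;

   /* Initialize the driver exactly once, before any other libmongoc call. */
   mongoc_init ();

   /* Placeholder connection string; replace with your own. */
   client = mongoc_client_new ("mongodb://localhost:27017");
   if (client) {
      /* ... use the client ... */
      mongoc_client_destroy (client);
   }

   /* Release all driver resources; call exactly once, after all other
    * libmongoc calls. */
   mongoc_cleanup ();

   return 0;
}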

Deprecated feature: automatic initialization and cleanup

On some platforms the driver can automatically call mongoc_init() before main, and call mongoc_cleanup() as the process exits. This is problematic in situations where related libraries also execute cleanup code on shutdown, and it creates inconsistent rules across platforms. Therefore the automatic initialization and cleanup feature is deprecated, and will be dropped in version 2.0. Meanwhile, for backward compatibility, the feature is enabled by default on platforms where it is available.

For portable, future-proof code, always call mongoc_init() and mongoc_cleanup() yourself, and configure the driver like:

cmake -DENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF


Logging

MongoDB C driver Logging Abstraction

Synopsis

typedef enum {
   MONGOC_LOG_LEVEL_ERROR,
   MONGOC_LOG_LEVEL_CRITICAL,
   MONGOC_LOG_LEVEL_WARNING,
   MONGOC_LOG_LEVEL_MESSAGE,
   MONGOC_LOG_LEVEL_INFO,
   MONGOC_LOG_LEVEL_DEBUG,
   MONGOC_LOG_LEVEL_TRACE,
} mongoc_log_level_t;

#define MONGOC_ERROR(...)
#define MONGOC_CRITICAL(...)
#define MONGOC_WARNING(...)
#define MONGOC_MESSAGE(...)
#define MONGOC_INFO(...)
#define MONGOC_DEBUG(...)

typedef void (*mongoc_log_func_t) (mongoc_log_level_t log_level,
                                   const char *log_domain,
                                   const char *message,
                                   void *user_data);

void
mongoc_log_set_handler (mongoc_log_func_t log_func, void *user_data);

void
mongoc_log (mongoc_log_level_t log_level,
            const char *log_domain,
            const char *format,
            ...) BSON_GNUC_PRINTF (3, 4);

const char *
mongoc_log_level_str (mongoc_log_level_t log_level);

void
mongoc_log_default_handler (mongoc_log_level_t log_level,
                            const char *log_domain,
                            const char *message,
                            void *user_data);

void
mongoc_log_trace_enable (void);

void
mongoc_log_trace_disable (void);


The MongoDB C driver comes with an abstraction for logging that you can use in your application, or integrate with an existing logging system.

Macros

To make logging a little less painful, various helper macros are provided. See the following example.

#undef MONGOC_LOG_DOMAIN
#define MONGOC_LOG_DOMAIN "my-custom-domain"
MONGOC_WARNING ("An error occurred: %s", strerror (errno));


Custom Log Handlers

You can override the handler with mongoc_log_set_handler(). Your handler function is called in a mutex for thread safety.

For example, you could register a custom handler to suppress messages at INFO level and below:

void
my_logger (mongoc_log_level_t log_level,
           const char *log_domain,
           const char *message,
           void *user_data)
{
   /* smaller values are more important */
   if (log_level < MONGOC_LOG_LEVEL_INFO) {
      mongoc_log_default_handler (log_level, log_domain, message, user_data);
   }
}

int
main (int argc, char *argv[])
{
   mongoc_log_set_handler (my_logger, NULL);
   mongoc_init ();

   /* ... your code ... */

   mongoc_cleanup ();

   return 0;
}


Note that in the example above mongoc_log_set_handler() is called before mongoc_init(). Otherwise, some log traces would not be processed by the log handler.

To restore the default handler:

mongoc_log_set_handler (mongoc_log_default_handler, NULL);


Disable logging

To disable all logging, including warnings, critical messages and errors, provide an empty log handler:

mongoc_log_set_handler (NULL, NULL);


Tracing

If compiling your own copy of the MongoDB C driver, consider configuring with -DENABLE_TRACING=ON to enable function tracing and hex dumps of network packets to STDERR and STDOUT during development and debugging.

This is especially useful when debugging what may be going on internally in the driver.

Trace messages can be enabled and disabled at runtime by calling mongoc_log_trace_enable() and mongoc_log_trace_disable().
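
For example, a sketch of limiting trace output to a single operation; it assumes a driver built with -DENABLE_TRACING=ON and an existing mongoc_client_t named client:

bson_error_t error;
bson_t *ping = BCON_NEW ("ping", BCON_INT32 (1));

/* Only this command's wire traffic is traced. */
mongoc_log_trace_enable ();
mongoc_client_command_simple (client, "admin", ping, NULL, NULL, &error);
mongoc_log_trace_disable ();

bson_destroy (ping);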

NOTE:

Compiling the driver with -DENABLE_TRACING=ON will affect its performance. Disabling tracing with mongoc_log_trace_disable() significantly reduces the overhead, but cannot remove it completely.



Error Reporting

Description

Many C Driver functions report errors by returning false or -1 and filling out a bson_error_t structure with an error domain, error code, and message. Use domain to determine which subsystem generated the error, and code for the specific error. message is a human-readable error description.
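
For example, a sketch of inspecting the error after a failed insert; it assumes an existing collection and doc, and a client configured with Error API Version 2:

bson_error_t error;

if (!mongoc_collection_insert_one (collection, doc, NULL, NULL, &error)) {
   if (error.domain == MONGOC_ERROR_SERVER) {
      /* The server rejected the operation; error.code is the server's code. */
      fprintf (stderr, "server error %u: %s\n", error.code, error.message);
   } else if (error.domain == MONGOC_ERROR_CLIENT) {
      fprintf (stderr, "client error %u: %s\n", error.code, error.message);
   } else {
      fprintf (stderr,
               "error (domain %u, code %u): %s\n",
               error.domain,
               error.code,
               error.message);
   }
}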

SEE ALSO:

Handling Errors in libbson.



Domain Code Description
MONGOC_ERROR_CLIENT MONGOC_ERROR_CLIENT_TOO_BIG You tried to send a message larger than the server's max message size.
MONGOC_ERROR_CLIENT_AUTHENTICATE Wrong credentials, or failure sending or receiving authentication messages.
MONGOC_ERROR_CLIENT_NO_ACCEPTABLE_PEER You tried a TLS connection, but the driver was not built with TLS.
MONGOC_ERROR_CLIENT_IN_EXHAUST You began iterating an exhaust cursor, then tried to begin another operation with the same mongoc_client_t.
MONGOC_ERROR_CLIENT_SESSION_FAILURE Failure related to creating or using a logical session.
MONGOC_ERROR_CLIENT_INVALID_ENCRYPTION_ARG Failure related to arguments passed when initializing Client-Side Field Level Encryption.
MONGOC_ERROR_CLIENT_INVALID_ENCRYPTION_STATE Failure related to Client-Side Field Level Encryption.
MONGOC_ERROR_CLIENT_INVALID_LOAD_BALANCER You attempted to connect to a MongoDB server behind a load balancer, but the server does not advertise load balanced support.
MONGOC_ERROR_STREAM MONGOC_ERROR_STREAM_NAME_RESOLUTION DNS failure.
MONGOC_ERROR_STREAM_SOCKET Timeout communicating with server, or connection closed.
MONGOC_ERROR_STREAM_CONNECT Failed to connect to server.
MONGOC_ERROR_PROTOCOL MONGOC_ERROR_PROTOCOL_INVALID_REPLY Corrupt response from server.
MONGOC_ERROR_PROTOCOL_BAD_WIRE_VERSION The server version is too old or too new to communicate with the driver.
MONGOC_ERROR_CURSOR MONGOC_ERROR_CURSOR_INVALID_CURSOR You passed bad arguments to mongoc_collection_find_with_opts(), or you called mongoc_cursor_next() on a completed or failed cursor, or the cursor timed out on the server.
MONGOC_ERROR_CHANGE_STREAM_NO_RESUME_TOKEN A resume token was not returned in a document found with mongoc_change_stream_next()
MONGOC_ERROR_QUERY MONGOC_ERROR_QUERY_FAILURE Error API Version 1: Server error from command or query. The server error message is in message.
MONGOC_ERROR_SERVER MONGOC_ERROR_QUERY_FAILURE Error API Version 2: Server error from command or query. The server error message is in message.
MONGOC_ERROR_SASL A SASL error code. See man sasl_errors for a list of codes.
MONGOC_ERROR_BSON MONGOC_ERROR_BSON_INVALID You passed an invalid or oversized BSON document as a parameter, or called mongoc_collection_create_index() with invalid keys, or the server reply was corrupt.
MONGOC_ERROR_NAMESPACE MONGOC_ERROR_NAMESPACE_INVALID You tried to create a collection with an invalid name.
MONGOC_ERROR_COMMAND MONGOC_ERROR_COMMAND_INVALID_ARG Many functions set this error code when passed bad parameters. Print the error message for details.
MONGOC_ERROR_PROTOCOL_BAD_WIRE_VERSION You tried to use a command option the server does not support.
MONGOC_ERROR_DUPLICATE_KEY An insert or update failed because of a duplicate _id or other unique-index violation.
MONGOC_ERROR_MAX_TIME_MS_EXPIRED The operation failed because maxTimeMS expired.
MONGOC_ERROR_SERVER_SELECTION_INVALID_ID The serverId option for an operation conflicts with the pinned server for that operation's client session (denoted by the sessionId option).
MONGOC_ERROR_COMMAND Error code from server. Error API Version 1: Server error from a command. The server error message is in message.
MONGOC_ERROR_SERVER Error code from server. Error API Version 2: Server error from a command. The server error message is in message.
MONGOC_ERROR_COLLECTION MONGOC_ERROR_COLLECTION_INSERT_FAILED, MONGOC_ERROR_COLLECTION_UPDATE_FAILED, MONGOC_ERROR_COLLECTION_DELETE_FAILED. Invalid or empty input to mongoc_collection_insert_one(), mongoc_collection_insert_bulk(), mongoc_collection_update_one(), mongoc_collection_update_many(), mongoc_collection_replace_one(), mongoc_collection_delete_one(), or mongoc_collection_delete_many().
MONGOC_ERROR_COLLECTION Error code from server. Error API Version 1: Server error from mongoc_collection_insert_one(), mongoc_collection_insert_bulk(), mongoc_collection_update_one(), mongoc_collection_update_many(), mongoc_collection_replace_one(),
MONGOC_ERROR_SERVER Error code from server. Error API Version 2: Server error from mongoc_collection_insert_one(), mongoc_collection_insert_bulk(), mongoc_collection_update_one(), mongoc_collection_update_many(), mongoc_collection_replace_one(),
MONGOC_ERROR_GRIDFS MONGOC_ERROR_GRIDFS_CHUNK_MISSING The GridFS file is missing a document in its chunks collection.
MONGOC_ERROR_GRIDFS_CORRUPT A data inconsistency was detected in GridFS.
MONGOC_ERROR_GRIDFS_INVALID_FILENAME You passed a NULL filename to mongoc_gridfs_remove_by_filename().
MONGOC_ERROR_GRIDFS_PROTOCOL_ERROR You called mongoc_gridfs_file_set_id() after mongoc_gridfs_file_save(), or tried to write on a closed GridFS stream.
MONGOC_ERROR_GRIDFS_BUCKET_FILE_NOT_FOUND A GridFS file is missing from files collection.
MONGOC_ERROR_GRIDFS_BUCKET_STREAM An error occurred on a stream created from a GridFS operation like mongoc_gridfs_bucket_upload_from_stream().
MONGOC_ERROR_SCRAM MONGOC_ERROR_SCRAM_PROTOCOL_ERROR Failure in SCRAM-SHA-1 authentication.
MONGOC_ERROR_SERVER_SELECTION MONGOC_ERROR_SERVER_SELECTION_FAILURE No replica set member or mongos is available, or none matches your read preference, or you supplied an invalid mongoc_read_prefs_t.
MONGOC_ERROR_WRITE_CONCERN Error code from server. There was a write concern error or timeout from the server.
MONGOC_ERROR_TRANSACTION MONGOC_ERROR_TRANSACTION_INVALID You attempted to start a transaction when one is already in progress, or commit or abort when there is no transaction.
MONGOC_ERROR_CLIENT_SIDE_ENCRYPTION Error code produced by libmongocrypt. An error occurred in the library responsible for Client Side Encryption

Error Labels

In some cases your application must make decisions based on what category of error the driver has returned, but these categories do not correspond perfectly to an error domain or code. In such cases, error labels provide a reliable way to determine how your application should respond to an error.

Any C Driver function that has a bson_t out-parameter named reply may add error labels to the reply, in the form of a BSON field named "errorLabels" containing an array of strings:

{ "errorLabels": [ "TransientTransactionError" ] }


Use mongoc_error_has_label() to test if a reply contains a specific label. See mongoc_client_session_start_transaction() for example code that demonstrates the use of error labels in application logic.
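
For instance, a minimal sketch (assuming client is a connected mongoc_client_t and cmd is a command document built elsewhere; both are assumptions of this example) might check the label after a failed command:

bson_t reply;
bson_error_t error;

if (!mongoc_client_command_simple (client, "db", cmd, NULL, &reply, &error)) {
   if (mongoc_error_has_label (&reply, "TransientTransactionError")) {
      /* abort the transaction and retry the whole sequence */
   } else {
      fprintf (stderr, "command failed: %s\n", error.message);
   }
}

bson_destroy (&reply);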

The following error labels are currently defined. Future versions of MongoDB may introduce new labels.

TransientTransactionError

Within a multi-document transaction, certain errors can leave the transaction in an unknown or aborted state. These include write conflicts, primary stepdowns, and network errors. In response, the application should abort the transaction and try the same sequence of operations again in a new transaction.

UnknownTransactionCommitResult

When mongoc_client_session_commit_transaction() encounters a network error or certain server errors, it is not known whether the transaction was committed. Applications should attempt to commit the transaction again until: the commit succeeds, the commit fails with an error not labeled "UnknownTransactionCommitResult", or the application chooses to give up.
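
A rough sketch of that retry loop (assuming session is a mongoc_client_session_t with a transaction in progress) might look like:

bson_t reply;
bson_error_t error;

while (true) {
   if (mongoc_client_session_commit_transaction (session, &reply, &error)) {
      break; /* commit succeeded */
   }
   if (!mongoc_error_has_label (&reply, "UnknownTransactionCommitResult")) {
      fprintf (stderr, "commit failed: %s\n", error.message);
      break; /* give up: the error is not retryable */
   }
   bson_destroy (&reply); /* retry the commit */
}

bson_destroy (&reply);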

Setting the Error API Version

The driver's error reporting began with a design flaw: when the error domain is MONGOC_ERROR_COLLECTION, MONGOC_ERROR_QUERY, or MONGOC_ERROR_COMMAND, the error code might originate from the server or the driver. An application cannot always know where an error originated, and therefore cannot tell what the code means.

For example, if mongoc_collection_update_one() sets the error's domain to MONGOC_ERROR_COLLECTION and its code to 24, the application cannot know whether 24 is the generic driver error code MONGOC_ERROR_COLLECTION_UPDATE_FAILED or the specific server error code "LockTimeout".

To fix this flaw while preserving backward compatibility, the C Driver 1.4 introduces "Error API Versions". Version 1, the default Error API Version, maintains the flawed behavior. Version 2 adds a new error domain, MONGOC_ERROR_SERVER. In Version 2, error codes originating on the server always have error domain MONGOC_ERROR_SERVER or MONGOC_ERROR_WRITE_CONCERN. When the driver uses Version 2 the application can always determine the origin and meaning of error codes. New applications should use Version 2, and existing applications should be updated to use Version 2 as well.

The error domain reported by each error source depends on the Error API Version:

  • mongoc_cursor_error(): MONGOC_ERROR_QUERY under API Version 1; MONGOC_ERROR_SERVER under API Version 2.
  • mongoc_client_command_with_opts(), mongoc_database_command_with_opts(), and other command functions: MONGOC_ERROR_QUERY under API Version 1; MONGOC_ERROR_SERVER under API Version 2.
  • mongoc_collection_count_with_opts(), mongoc_client_get_database_names_with_opts(), and other command helper functions: MONGOC_ERROR_QUERY under API Version 1; MONGOC_ERROR_SERVER under API Version 2.
  • mongoc_collection_insert_one(), mongoc_collection_insert_bulk(), mongoc_collection_update_one(), mongoc_collection_update_many(), mongoc_collection_replace_one(), mongoc_collection_delete_one(), and mongoc_collection_delete_many(): MONGOC_ERROR_COMMAND under API Version 1; MONGOC_ERROR_SERVER under API Version 2.
  • mongoc_bulk_operation_execute(): MONGOC_ERROR_COMMAND under API Version 1; MONGOC_ERROR_SERVER under API Version 2.
  • Write-concern timeout: MONGOC_ERROR_WRITE_CONCERN under both API Versions.

The Error API Versions are defined with MONGOC_ERROR_API_VERSION_LEGACY and MONGOC_ERROR_API_VERSION_2. Set the version with mongoc_client_set_error_api() or mongoc_client_pool_set_error_api().
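
For example (a minimal sketch, assuming client or pool was just created and no operations have run yet):

mongoc_client_set_error_api (client, MONGOC_ERROR_API_VERSION_2);

/* or, for a pooled client */
mongoc_client_pool_set_error_api (pool, MONGOC_ERROR_API_VERSION_2);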

SEE ALSO:

MongoDB Server Error Codes



Object Lifecycle

This page documents the order of creation and destruction for libmongoc's main struct types.

Clients and pools

Call mongoc_init() once, before calling any other libmongoc functions, and call mongoc_cleanup() once before your program exits.

A program that uses libmongoc from multiple threads should create a mongoc_client_pool_t with mongoc_client_pool_new(). Each thread acquires a mongoc_client_t from the pool with mongoc_client_pool_pop() and returns it with mongoc_client_pool_push() when the thread is finished using it. To destroy the pool, first return all clients, then call mongoc_client_pool_destroy().

If your program uses libmongoc from only one thread, create a mongoc_client_t directly with mongoc_client_new() or mongoc_client_new_from_uri(). Destroy it with mongoc_client_destroy().

You can create a mongoc_database_t or mongoc_collection_t from a mongoc_client_t, and create a mongoc_cursor_t or mongoc_bulk_operation_t from a mongoc_collection_t.

Each of these objects must be destroyed before the client they were created from, but their lifetimes are otherwise independent.
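
A minimal single-threaded sketch of this ordering (the connection string and namespace are placeholders):

mongoc_init ();

mongoc_client_t *client = mongoc_client_new ("mongodb://localhost:27017");
mongoc_collection_t *coll = mongoc_client_get_collection (client, "db", "coll");
bson_t *filter = bson_new ();
mongoc_cursor_t *cursor =
   mongoc_collection_find_with_opts (coll, filter, NULL, NULL);
const bson_t *doc;

while (mongoc_cursor_next (cursor, &doc)) {
   /* ... use doc ... */
}

/* destroy derived objects before the client they were created from */
mongoc_cursor_destroy (cursor);
bson_destroy (filter);
mongoc_collection_destroy (coll);
mongoc_client_destroy (client);

mongoc_cleanup ();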

GridFS objects

You can create a mongoc_gridfs_t from a mongoc_client_t, create a mongoc_gridfs_file_t or mongoc_gridfs_file_list_t from a mongoc_gridfs_t, create a mongoc_gridfs_file_t from a mongoc_gridfs_file_list_t, and create a mongoc_stream_t from a mongoc_gridfs_file_t.

Each of these objects depends on the object it was created from. Always destroy GridFS objects in the reverse of the order they were created. The sole exception is that a mongoc_gridfs_file_t need not be destroyed before the mongoc_gridfs_file_list_t it was created from.

GridFS bucket objects

Create a mongoc_gridfs_bucket_t from a mongoc_database_t, which is itself derived from a mongoc_client_t. The mongoc_database_t is otherwise independent of the mongoc_gridfs_bucket_t, but the mongoc_client_t must outlive the mongoc_gridfs_bucket_t.

A mongoc_stream_t may be created from the mongoc_gridfs_bucket_t. The mongoc_gridfs_bucket_t must outlive the mongoc_stream_t.

Sessions

Start a session with mongoc_client_start_session(), use the session for a sequence of operations and multi-document transactions, then free it with mongoc_client_session_destroy(). Any mongoc_cursor_t or mongoc_change_stream_t using a session must be destroyed before the session, and a session must be destroyed before the mongoc_client_t it came from.

By default, sessions are causally consistent. To disable causal consistency, before starting a session create a mongoc_session_opt_t with mongoc_session_opts_new() and call mongoc_session_opts_set_causal_consistency(), then free the struct with mongoc_session_opts_destroy().

Unacknowledged writes are prohibited with sessions.

A mongoc_client_session_t must be used by only one thread at a time. Due to session pooling, mongoc_client_start_session() may return a session that has been idle for some time and is about to be closed after its idle timeout. Use the session within one minute of acquiring it to refresh the session and avoid a timeout.
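
As a short sketch (assuming client is an existing mongoc_client_t), a session without causal consistency could be started like this:

bson_error_t error;
mongoc_session_opt_t *session_opts = mongoc_session_opts_new ();
mongoc_session_opts_set_causal_consistency (session_opts, false);

mongoc_client_session_t *session =
   mongoc_client_start_session (client, session_opts, &error);
mongoc_session_opts_destroy (session_opts);

if (!session) {
   fprintf (stderr, "failed to start session: %s\n", error.message);
} else {
   /* ... use the session, then destroy it before the client ... */
   mongoc_client_session_destroy (session);
}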

Client Side Encryption

When configuring a mongoc_client_t for automatic encryption via mongoc_client_enable_auto_encryption(), if a separate key vault client is set in the options (via mongoc_auto_encryption_opts_set_keyvault_client()) the key vault client must outlive the encrypted client.

When configuring a mongoc_client_pool_t for automatic encryption via mongoc_client_pool_enable_auto_encryption(), if a separate key vault client pool is set in the options (via mongoc_auto_encryption_opts_set_keyvault_client_pool()) the key vault client pool must outlive the encrypted client pool.

When creating a mongoc_client_encryption_t, the configured key vault client (set via mongoc_client_encryption_opts_set_keyvault_client()) must outlive the mongoc_client_encryption_t.

GridFS

The C driver includes two APIs for GridFS.

The older API consists of mongoc_gridfs_t and its derivatives. It contains deprecated API, does not support read preferences, and is not recommended in new applications. It does not conform to the MongoDB GridFS specification.

The newer API consists of mongoc_gridfs_bucket_t and allows uploading/downloading through derived mongoc_stream_t objects. It conforms to the MongoDB GridFS specification.

There is not always a straightforward upgrade path from an application built with mongoc_gridfs_t to mongoc_gridfs_bucket_t (e.g. a mongoc_gridfs_file_t provides functions to seek but mongoc_stream_t does not). But users are encouraged to upgrade when possible.

mongoc_auto_encryption_opts_t

Options for enabling automatic encryption and decryption for Client-Side Field Level Encryption.

Synopsis

typedef struct _mongoc_auto_encryption_opts_t mongoc_auto_encryption_opts_t;


SEE ALSO:

The guide for Using Client-Side Field Level Encryption



mongoc_bulk_operation_t

Bulk Write Operations

Synopsis

typedef struct _mongoc_bulk_operation_t mongoc_bulk_operation_t;


The opaque type mongoc_bulk_operation_t provides an abstraction for submitting multiple write operations as a single batch.

After adding all of the write operations to the mongoc_bulk_operation_t, call mongoc_bulk_operation_execute() to execute the operation.

WARNING:

It is only valid to call mongoc_bulk_operation_execute() once. The mongoc_bulk_operation_t must be destroyed afterwards.
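
As a brief sketch (assuming collection is an existing mongoc_collection_t), a bulk operation is created, executed once, then destroyed:

mongoc_bulk_operation_t *bulk =
   mongoc_collection_create_bulk_operation_with_opts (collection, NULL);
bson_t *doc = BCON_NEW ("x", BCON_INT32 (1));
bson_t reply;
bson_error_t error;

mongoc_bulk_operation_insert (bulk, doc);
bson_destroy (doc);

if (!mongoc_bulk_operation_execute (bulk, &reply, &error)) {
   fprintf (stderr, "bulk write failed: %s\n", error.message);
}

bson_destroy (&reply);
mongoc_bulk_operation_destroy (bulk);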


SEE ALSO:

Bulk Write Operations



mongoc_change_stream_t

Synopsis

#include <mongoc/mongoc.h>
typedef struct _mongoc_change_stream_t mongoc_change_stream_t;


mongoc_change_stream_t is a handle to a change stream. A collection change stream can be obtained using mongoc_collection_watch().

It is recommended to use a mongoc_change_stream_t and its functions instead of a raw aggregation with a $changeStream stage. For more information see the MongoDB Manual Entry on Change Streams.

Example

example-collection-watch.c

#include <mongoc/mongoc.h>
int
main ()
{

bson_t empty = BSON_INITIALIZER;
const bson_t *doc;
bson_t *to_insert = BCON_NEW ("x", BCON_INT32 (1));
const bson_t *err_doc;
bson_error_t error;
const char *uri_string;
mongoc_uri_t *uri;
mongoc_client_t *client;
mongoc_collection_t *coll;
mongoc_change_stream_t *stream;
mongoc_write_concern_t *wc = mongoc_write_concern_new ();
bson_t opts = BSON_INITIALIZER;
bool r;
mongoc_init ();
uri_string = "mongodb://"
"localhost:27017,localhost:27018,localhost:"
"27019/db?replicaSet=rs0";
uri = mongoc_uri_new_with_error (uri_string, &error);
if (!uri) {
fprintf (stderr,
"failed to parse URI: %s\n"
"error message: %s\n",
uri_string,
error.message);
return EXIT_FAILURE;
}
client = mongoc_client_new_from_uri (uri);
if (!client) {
return EXIT_FAILURE;
}
coll = mongoc_client_get_collection (client, "db", "coll");
stream = mongoc_collection_watch (coll, &empty, NULL);
mongoc_write_concern_set_wmajority (wc, 10000);
mongoc_write_concern_append (wc, &opts);
r = mongoc_collection_insert_one (coll, to_insert, &opts, NULL, &error);
if (!r) {
fprintf (stderr, "Error: %s\n", error.message);
return EXIT_FAILURE;
}
while (mongoc_change_stream_next (stream, &doc)) {
char *as_json = bson_as_relaxed_extended_json (doc, NULL);
fprintf (stderr, "Got document: %s\n", as_json);
bson_free (as_json);
}
if (mongoc_change_stream_error_document (stream, &error, &err_doc)) {
if (!bson_empty (err_doc)) {
fprintf (stderr,
"Server Error: %s\n",
bson_as_relaxed_extended_json (err_doc, NULL));
} else {
fprintf (stderr, "Client Error: %s\n", error.message);
}
return EXIT_FAILURE;
}
bson_destroy (to_insert);
mongoc_write_concern_destroy (wc);
bson_destroy (&opts);
mongoc_change_stream_destroy (stream);
mongoc_collection_destroy (coll);
mongoc_uri_destroy (uri);
mongoc_client_destroy (client);
mongoc_cleanup ();
return EXIT_SUCCESS;
}


Starting and Resuming

All watch functions accept several options to indicate where a change stream should start returning changes from: resumeAfter, startAfter, and startAtOperationTime.

All changes returned by mongoc_change_stream_next() include a resume token in the _id field. MongoDB 4.2 also includes an additional resume token in each "aggregate" and "getMore" command response, which points to the end of that response's batch. The current token is automatically cached by libmongoc. In the event of an error, libmongoc attempts to recreate the change stream starting where it left off by passing the cached resume token. libmongoc only attempts to resume once, but client applications can access the cached resume token with mongoc_change_stream_get_resume_token() and use it for their own resume logic by passing it as either the resumeAfter or startAfter option.

Additionally, change streams can start returning changes at an operation time by using the startAtOperationTime field. This can be the timestamp returned in the operationTime field of a command reply.

resumeAfter, startAfter, and startAtOperationTime are mutually exclusive options. Setting more than one will result in a server error.

The following example implements custom resuming logic, persisting the resume token in a file.

example-resume.c

#include <mongoc/mongoc.h>
/* An example implementation of custom resume logic in a change stream.
* example-resume starts a client-wide change stream and persists the resume
* token in a file "resume-token.json". On restart, if "resume-token.json"
* exists, the change stream starts watching after the persisted resume token.
*
* This behavior allows a user to exit example-resume, and restart it later
* without missing any change events.
*/
#include <unistd.h>
static const char *RESUME_TOKEN_PATH = "resume-token.json";
static bool
_save_resume_token (const bson_t *doc)
{

FILE *file_stream;
bson_iter_t iter;
bson_t resume_token_doc;
char *as_json = NULL;
size_t as_json_len;
ssize_t r, n_written;
const bson_value_t *resume_token;
if (!bson_iter_init_find (&iter, doc, "_id")) {
fprintf (stderr, "document does not contain a resume token (_id).\n");
return false;
}
resume_token = bson_iter_value (&iter);
/* store the resume token in a document, { resumeAfter: <resume token> }
* which we can later append easily. */
file_stream = fopen (RESUME_TOKEN_PATH, "w+");
if (!file_stream) {
fprintf (stderr, "failed to open %s for writing\n", RESUME_TOKEN_PATH);
return false;
}
bson_init (&resume_token_doc);
BSON_APPEND_VALUE (&resume_token_doc, "resumeAfter", resume_token);
as_json = bson_as_canonical_extended_json (&resume_token_doc, &as_json_len);
bson_destroy (&resume_token_doc);
n_written = 0;
while (n_written < as_json_len) {
r = fwrite ((void *) (as_json + n_written),
sizeof (char),
as_json_len - n_written,
file_stream);
if (r == -1) {
fprintf (stderr, "failed to write to %s\n", RESUME_TOKEN_PATH);
bson_free (as_json);
fclose (file_stream);
return false;
}
n_written += r;
}
bson_free (as_json);
fclose (file_stream);
return true;
}

bool
_load_resume_token (bson_t *opts)
{
bson_error_t error;
bson_json_reader_t *reader;
bson_t doc;
/* if the file does not exist, skip. */
if (-1 == access (RESUME_TOKEN_PATH, R_OK)) {
return true;
}
reader = bson_json_reader_new_from_file (RESUME_TOKEN_PATH, &error);
if (!reader) {
fprintf (stderr,
"failed to open %s for reading: %s\n",
RESUME_TOKEN_PATH,
error.message);
return false;
}
bson_init (&doc);
if (-1 == bson_json_reader_read (reader, &doc, &error)) {
fprintf (stderr, "failed to read doc from %s\n", RESUME_TOKEN_PATH);
bson_destroy (&doc);
bson_json_reader_destroy (reader);
return false;
}
printf ("found cached resume token in %s, resuming change stream.\n",
RESUME_TOKEN_PATH);
bson_concat (opts, &doc);
bson_destroy (&doc);
bson_json_reader_destroy (reader);
return true;
}

int
main ()
{
int exit_code = EXIT_FAILURE;
const char *uri_string;
mongoc_uri_t *uri = NULL;
bson_error_t error;
mongoc_client_t *client = NULL;
bson_t pipeline = BSON_INITIALIZER;
bson_t opts = BSON_INITIALIZER;
mongoc_change_stream_t *stream = NULL;
const bson_t *doc;
const int max_time = 30; /* max amount of time, in seconds, that
mongoc_change_stream_next can block. */
mongoc_init ();
uri_string = "mongodb://localhost:27017/db?replicaSet=rs0";
uri = mongoc_uri_new_with_error (uri_string, &error);
if (!uri) {
fprintf (stderr,
"failed to parse URI: %s\n"
"error message: %s\n",
uri_string,
error.message);
goto cleanup;
}
client = mongoc_client_new_from_uri (uri);
if (!client) {
goto cleanup;
}
if (!_load_resume_token (&opts)) {
goto cleanup;
}
BSON_APPEND_INT64 (&opts, "maxAwaitTimeMS", max_time * 1000);
printf ("listening for changes on the client (max %d seconds).\n", max_time);
stream = mongoc_client_watch (client, &pipeline, &opts);
while (mongoc_change_stream_next (stream, &doc)) {
char *as_json;
as_json = bson_as_canonical_extended_json (doc, NULL);
printf ("change received: %s\n", as_json);
bson_free (as_json);
if (!_save_resume_token (doc)) {
goto cleanup;
}
}
exit_code = EXIT_SUCCESS;

cleanup:
mongoc_uri_destroy (uri);
bson_destroy (&pipeline);
bson_destroy (&opts);
mongoc_change_stream_destroy (stream);
mongoc_client_destroy (client);
mongoc_cleanup ();
return exit_code;
}


The following example shows using startAtOperationTime to synchronize a change stream with another operation.

example-start-at-optime.c

/* An example of starting a change stream with startAtOperationTime. */
#include <mongoc/mongoc.h>
int
main ()
{

int exit_code = EXIT_FAILURE;
const char *uri_string;
mongoc_uri_t *uri = NULL;
bson_error_t error;
mongoc_client_t *client = NULL;
mongoc_collection_t *coll = NULL;
bson_t pipeline = BSON_INITIALIZER;
bson_t opts = BSON_INITIALIZER;
mongoc_change_stream_t *stream = NULL;
bson_iter_t iter;
const bson_t *doc;
bson_value_t cached_operation_time = {0};
int i;
bool r;
mongoc_init ();
uri_string = "mongodb://localhost:27017/db?replicaSet=rs0";
uri = mongoc_uri_new_with_error (uri_string, &error);
if (!uri) {
fprintf (stderr,
"failed to parse URI: %s\n"
"error message: %s\n",
uri_string,
error.message);
goto cleanup;
}
client = mongoc_client_new_from_uri (uri);
if (!client) {
goto cleanup;
}
/* insert five documents. */
coll = mongoc_client_get_collection (client, "db", "coll");
for (i = 0; i < 5; i++) {
bson_t reply;
bson_t *insert_cmd = BCON_NEW ("insert",
"coll",
"documents",
"[",
"{",
"x",
BCON_INT64 (i),
"}",
"]");
r = mongoc_collection_write_command_with_opts (
coll, insert_cmd, NULL, &reply, &error);
bson_destroy (insert_cmd);
if (!r) {
bson_destroy (&reply);
fprintf (stderr, "failed to insert: %s\n", error.message);
goto cleanup;
}
if (i == 0) {
/* cache the operation time in the first reply. */
if (bson_iter_init_find (&iter, &reply, "operationTime")) {
bson_value_copy (bson_iter_value (&iter), &cached_operation_time);
} else {
fprintf (stderr, "reply does not contain operationTime.");
bson_destroy (&reply);
goto cleanup;
}
}
bson_destroy (&reply);
}
/* start a change stream at the first returned operationTime. */
BSON_APPEND_VALUE (&opts, "startAtOperationTime", &cached_operation_time);
stream = mongoc_collection_watch (coll, &pipeline, &opts);
/* since the change stream started at the operation time of the first
* insert, the five inserts are returned. */
printf ("listening for changes on db.coll:\n");
while (mongoc_change_stream_next (stream, &doc)) {
char *as_json;
as_json = bson_as_canonical_extended_json (doc, NULL);
printf ("change received: %s\n", as_json);
bson_free (as_json);
}
exit_code = EXIT_SUCCESS;

cleanup:
mongoc_uri_destroy (uri);
bson_destroy (&pipeline);
bson_destroy (&opts);
if (cached_operation_time.value_type) {
bson_value_destroy (&cached_operation_time);
}
mongoc_change_stream_destroy (stream);
mongoc_collection_destroy (coll);
mongoc_client_destroy (client);
mongoc_cleanup ();
return exit_code;
}


mongoc_client_encryption_t

Synopsis

typedef struct _mongoc_client_encryption_t mongoc_client_encryption_t;


mongoc_client_encryption_t provides utility functions for Client-Side Field Level Encryption. See the guide for Using Client-Side Field Level Encryption.

Thread Safety

mongoc_client_encryption_t is NOT thread-safe and should only be used in the same thread as the mongoc_client_t that is configured via mongoc_client_encryption_opts_set_keyvault_client().

Lifecycle

The key vault client, configured via mongoc_client_encryption_opts_set_keyvault_client(), must outlive the mongoc_client_encryption_t.
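
A rough sketch of the creation order follows; kms_providers (a BSON document of KMS credentials) and keyvault_client (a separate mongoc_client_t) are assumed to have been prepared elsewhere:

bson_error_t error;
mongoc_client_encryption_opts_t *opts = mongoc_client_encryption_opts_new ();

mongoc_client_encryption_opts_set_kms_providers (opts, kms_providers);
mongoc_client_encryption_opts_set_keyvault_namespace (opts, "keyvault", "datakeys");
mongoc_client_encryption_opts_set_keyvault_client (opts, keyvault_client);

mongoc_client_encryption_t *client_encryption =
   mongoc_client_encryption_new (opts, &error);
mongoc_client_encryption_opts_destroy (opts);

/* ... use client_encryption, then destroy it before keyvault_client ... */
mongoc_client_encryption_destroy (client_encryption);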

SEE ALSO:

mongoc_client_enable_auto_encryption()

mongoc_client_pool_enable_auto_encryption()

The guide for Using Client-Side Field Level Encryption for libmongoc

The MongoDB Manual for Client-Side Field Level Encryption



mongoc_client_encryption_datakey_opts_t

Synopsis

typedef struct _mongoc_client_encryption_datakey_opts_t mongoc_client_encryption_datakey_opts_t;


Used to set options for mongoc_client_encryption_create_datakey().

SEE ALSO:

mongoc_client_encryption_create_datakey()



mongoc_client_encryption_rewrap_many_datakey_result_t

Synopsis

typedef struct _mongoc_client_encryption_rewrap_many_datakey_result_t
   mongoc_client_encryption_rewrap_many_datakey_result_t;


Used to access the result of mongoc_client_encryption_rewrap_many_datakey().

SEE ALSO:

mongoc_client_encryption_rewrap_many_datakey()



mongoc_client_encryption_encrypt_opts_t

Synopsis

typedef struct _mongoc_client_encryption_encrypt_opts_t mongoc_client_encryption_encrypt_opts_t;


Used to set options for mongoc_client_encryption_encrypt().

SEE ALSO:

mongoc_client_encryption_encrypt()



mongoc_client_encryption_opts_t

Synopsis

typedef struct _mongoc_client_encryption_opts_t mongoc_client_encryption_opts_t;


Used to set options for mongoc_client_encryption_new().

SEE ALSO:

mongoc_client_encryption_new()



mongoc_client_pool_t

A connection pool for multi-threaded programs. See Connection Pooling.

Synopsis

typedef struct _mongoc_client_pool_t mongoc_client_pool_t;


mongoc_client_pool_t is the basis for multi-threading in the MongoDB C driver. Since mongoc_client_t structures are not thread-safe, this structure is used to retrieve a new mongoc_client_t for a given thread. This structure is thread-safe, except for its destructor method, mongoc_client_pool_destroy(), which is not thread-safe and must only be called from one thread.

Example

example-pool.c

/* gcc example-pool.c -o example-pool $(pkg-config --cflags --libs
 * libmongoc-1.0) */

/* ./example-pool [CONNECTION_STRING] */

#include <mongoc/mongoc.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t mutex;
static bool in_shutdown = false;

static void *
worker (void *data)
{
mongoc_client_pool_t *pool = data;
mongoc_client_t *client;
bson_t ping = BSON_INITIALIZER;
bson_error_t error;
bool r;
BSON_APPEND_INT32 (&ping, "ping", 1);
while (true) {
client = mongoc_client_pool_pop (pool);
/* Do something with client. If you are writing an HTTP server, you
* probably only want to hold onto the client for the portion of the
* request performing database queries.
*/
r = mongoc_client_command_simple (
client, "admin", &ping, NULL, NULL, &error);
if (!r) {
fprintf (stderr, "%s\n", error.message);
}
mongoc_client_pool_push (pool, client);
pthread_mutex_lock (&mutex);
if (in_shutdown || !r) {
pthread_mutex_unlock (&mutex);
break;
}
pthread_mutex_unlock (&mutex);
}
bson_destroy (&ping);
return NULL;
}

int
main (int argc, char *argv[])
{
const char *uri_string = "mongodb://127.0.0.1/?appname=pool-example";
mongoc_uri_t *uri;
bson_error_t error;
mongoc_client_pool_t *pool;
pthread_t threads[10];
unsigned i;
void *ret;
pthread_mutex_init (&mutex, NULL);
mongoc_init ();
if (argc > 1) {
uri_string = argv[1];
}
uri = mongoc_uri_new_with_error (uri_string, &error);
if (!uri) {
fprintf (stderr,
"failed to parse URI: %s\n"
"error message: %s\n",
uri_string,
error.message);
return EXIT_FAILURE;
}
pool = mongoc_client_pool_new (uri);
mongoc_client_pool_set_error_api (pool, 2);
for (i = 0; i < 10; i++) {
pthread_create (&threads[i], NULL, worker, pool);
}
sleep (10);
pthread_mutex_lock (&mutex);
in_shutdown = true;
pthread_mutex_unlock (&mutex);
for (i = 0; i < 10; i++) {
pthread_join (threads[i], &ret);
}
mongoc_client_pool_destroy (pool);
mongoc_uri_destroy (uri);
mongoc_cleanup ();
return EXIT_SUCCESS;
}


mongoc_client_session_t

Use a session for a sequence of operations, optionally with causal consistency. See the MongoDB Manual Entry for Causal Consistency.

Synopsis

Start a session with mongoc_client_start_session(), use the session for a sequence of operations and multi-document transactions, then free it with mongoc_client_session_destroy(). Any mongoc_cursor_t or mongoc_change_stream_t using a session must be destroyed before the session, and a session must be destroyed before the mongoc_client_t it came from.

By default, sessions are causally consistent. To disable causal consistency, before starting a session create a mongoc_session_opt_t with mongoc_session_opts_new() and call mongoc_session_opts_set_causal_consistency(), then free the struct with mongoc_session_opts_destroy().

Unacknowledged writes are prohibited with sessions.

A mongoc_client_session_t must be used by only one thread at a time. Due to session pooling, mongoc_client_start_session() may return a session that has been idle for some time and is about to be closed after its idle timeout. Use the session within one minute of acquiring it to refresh the session and avoid a timeout.

Example

example-session.c

/* gcc example-session.c -o example-session \
 *    $(pkg-config --cflags --libs libmongoc-1.0) */

/* ./example-session [CONNECTION_STRING] */

#include <stdio.h>
#include <mongoc/mongoc.h>

int
main (int argc, char *argv[])
{
int exit_code = EXIT_FAILURE;
mongoc_client_t *client = NULL;
const char *uri_string = "mongodb://127.0.0.1/?appname=session-example";
mongoc_uri_t *uri = NULL;
mongoc_client_session_t *client_session = NULL;
mongoc_collection_t *collection = NULL;
bson_error_t error;
bson_t *selector = NULL;
bson_t *update = NULL;
bson_t *update_opts = NULL;
bson_t *find_opts = NULL;
mongoc_read_prefs_t *secondary = NULL;
mongoc_cursor_t *cursor = NULL;
const bson_t *doc;
char *str;
bool r;
mongoc_init ();
if (argc > 1) {
uri_string = argv[1];
}
uri = mongoc_uri_new_with_error (uri_string, &error);
if (!uri) {
fprintf (stderr,
"failed to parse URI: %s\n"
"error message: %s\n",
uri_string,
error.message);
goto done;
}
client = mongoc_client_new_from_uri (uri);
if (!client) {
goto done;
}
mongoc_client_set_error_api (client, 2);
/* pass NULL for options - by default the session is causally consistent */
client_session = mongoc_client_start_session (client, NULL, &error);
if (!client_session) {
fprintf (stderr, "Failed to start session: %s\n", error.message);
goto done;
}
collection = mongoc_client_get_collection (client, "test", "collection");
selector = BCON_NEW ("_id", BCON_INT32 (1));
update = BCON_NEW ("$inc", "{", "x", BCON_INT32 (1), "}");
update_opts = bson_new ();
if (!mongoc_client_session_append (client_session, update_opts, &error)) {
fprintf (stderr, "Could not add session to opts: %s\n", error.message);
goto done;
}
r = mongoc_collection_update_one (
collection, selector, update, update_opts, NULL /* reply */, &error);
if (!r) {
fprintf (stderr, "Update failed: %s\n", error.message);
goto done;
}
bson_destroy (selector);
selector = BCON_NEW ("_id", BCON_INT32 (1));
secondary = mongoc_read_prefs_new (MONGOC_READ_SECONDARY);
find_opts = BCON_NEW ("maxTimeMS", BCON_INT32 (2000));
if (!mongoc_client_session_append (client_session, find_opts, &error)) {
fprintf (stderr, "Could not add session to opts: %s\n", error.message);
goto done;
};
/* read from secondary. since we're in a causally consistent session, the
* data is guaranteed to reflect the update we did on the primary. the query
* blocks waiting for the secondary to catch up, if necessary, or times out
* and fails after 2000 ms.
*/
cursor = mongoc_collection_find_with_opts (
collection, selector, find_opts, secondary);
while (mongoc_cursor_next (cursor, &doc)) {
str = bson_as_json (doc, NULL);
fprintf (stdout, "%s\n", str);
bson_free (str);
}
if (mongoc_cursor_error (cursor, &error)) {
fprintf (stderr, "Cursor Failure: %s\n", error.message);
goto done;
}
exit_code = EXIT_SUCCESS;

done:
if (find_opts) {
bson_destroy (find_opts);
}
if (update) {
bson_destroy (update);
}
if (selector) {
bson_destroy (selector);
}
if (update_opts) {
bson_destroy (update_opts);
}
if (secondary) {
mongoc_read_prefs_destroy (secondary);
}
/* destroy cursor, collection, session before the client they came from */
if (cursor) {
mongoc_cursor_destroy (cursor);
}
if (collection) {
mongoc_collection_destroy (collection);
}
if (client_session) {
mongoc_client_session_destroy (client_session);
}
if (uri) {
mongoc_uri_destroy (uri);
}
if (client) {
mongoc_client_destroy (client);
}
mongoc_cleanup ();
return exit_code;
}


mongoc_client_session_with_transaction_cb_t

Synopsis

typedef bool (*mongoc_client_session_with_transaction_cb_t) (
   mongoc_client_session_t *session,
   void *ctx,
   bson_t **reply,
   bson_error_t *error);


Provide this callback to mongoc_client_session_with_transaction(). The callback should run a sequence of operations meant to be contained within a transaction. The callback should not attempt to start or commit transactions.

Parameters

  • session: A mongoc_client_session_t.
  • ctx: A void* set to the user-provided ctx passed to mongoc_client_session_with_transaction().
  • reply: An optional location for a bson_t or NULL. The callback should set this if it runs any operations against the server and receives replies.
  • error: A bson_error_t. The callback should set this if it receives any errors while running operations against the server.

Return

Returns true for success and false on failure. If cb returns false then it should also set error.
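
A minimal callback sketch might insert one document inside the transaction; passing the target collection through ctx is an assumption of this example, not a requirement of the API:

static bool
insert_one_cb (mongoc_client_session_t *session,
               void *ctx,
               bson_t **reply,
               bson_error_t *error)
{
   mongoc_collection_t *coll = ctx; /* assumed: the collection is passed as ctx */
   bson_t *doc = BCON_NEW ("x", BCON_INT32 (1));
   bson_t opts = BSON_INITIALIZER;
   bson_t local_reply;
   bool ok = false;

   /* run the insert in the session so it is part of the transaction */
   if (mongoc_client_session_append (session, &opts, error)) {
      ok = mongoc_collection_insert_one (coll, doc, &opts, &local_reply, error);
      *reply = bson_copy (&local_reply);
      bson_destroy (&local_reply);
   }

   bson_destroy (&opts);
   bson_destroy (doc);
   return ok;
}

The callback would then be passed to mongoc_client_session_with_transaction(), with the collection supplied as ctx.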

SEE ALSO:

mongoc_client_session_with_transaction()



mongoc_client_t

A single-threaded MongoDB connection. See Connection Pooling.

Synopsis

typedef struct _mongoc_client_t mongoc_client_t;
typedef mongoc_stream_t *(*mongoc_stream_initiator_t) (
   const mongoc_uri_t *uri,
   const mongoc_host_list_t *host,
   void *user_data,
   bson_error_t *error);


mongoc_client_t is an opaque type that provides access to a MongoDB server, replica set, or sharded cluster. It manages the underlying sockets and routes operations to individual nodes based on mongoc_read_prefs_t or mongoc_write_concern_t.

Streams

The underlying transport for a given client can be customized, wrapped or replaced by any implementation that fulfills mongoc_stream_t. A custom transport can be set with mongoc_client_set_stream_initiator().

Thread Safety

mongoc_client_t is NOT thread-safe and should only be used from one thread at a time. When used in multi-threaded scenarios, it is recommended that you use the thread-safe mongoc_client_pool_t to retrieve a mongoc_client_t for your thread.

Example

example-client.c

/* gcc example-client.c -o example-client $(pkg-config --cflags --libs
 * libmongoc-1.0) */

/* ./example-client [CONNECTION_STRING [COLLECTION_NAME]] */

#include <mongoc/mongoc.h>
#include <stdio.h>
#include <stdlib.h>

int
main (int argc, char *argv[])
{
mongoc_client_t *client;
mongoc_collection_t *collection;
mongoc_cursor_t *cursor;
bson_error_t error;
const bson_t *doc;
const char *collection_name = "test";
bson_t query;
char *str;
const char *uri_string = "mongodb://127.0.0.1/?appname=client-example";
mongoc_uri_t *uri;
mongoc_init ();
if (argc > 1) {
uri_string = argv[1];
}
if (argc > 2) {
collection_name = argv[2];
}
uri = mongoc_uri_new_with_error (uri_string, &error);
if (!uri) {
fprintf (stderr,
"failed to parse URI: %s\n"
"error message: %s\n",
uri_string,
error.message);
return EXIT_FAILURE;
}
client = mongoc_client_new_from_uri (uri);
if (!client) {
return EXIT_FAILURE;
}
mongoc_client_set_error_api (client, 2);
bson_init (&query);
collection = mongoc_client_get_collection (client, "test", collection_name);
cursor = mongoc_collection_find_with_opts (
collection,
&query,
NULL, /* additional options */
NULL); /* read prefs, NULL for default */
while (mongoc_cursor_next (cursor, &doc)) {
str = bson_as_canonical_extended_json (doc, NULL);
fprintf (stdout, "%s\n", str);
bson_free (str);
}
if (mongoc_cursor_error (cursor, &error)) {
fprintf (stderr, "Cursor Failure: %s\n", error.message);
return EXIT_FAILURE;
}
bson_destroy (&query);
mongoc_cursor_destroy (cursor);
mongoc_collection_destroy (collection);
mongoc_uri_destroy (uri);
mongoc_client_destroy (client);
mongoc_cleanup ();
return EXIT_SUCCESS;
}


mongoc_collection_t

Synopsis

typedef struct _mongoc_collection_t mongoc_collection_t;


mongoc_collection_t provides access to a MongoDB collection. This handle is useful for most CRUD operations, i.e. insert, update, delete, find, etc.

Read Preferences and Write Concerns

Read preferences and write concerns are inherited from the parent client. They can be overridden with mongoc_collection_set_read_prefs() and mongoc_collection_set_write_concern() if desired.
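
For example (a small sketch, assuming collection was obtained from a mongoc_client_t):

mongoc_read_prefs_t *prefs = mongoc_read_prefs_new (MONGOC_READ_SECONDARY_PREFERRED);
mongoc_write_concern_t *wc = mongoc_write_concern_new ();

mongoc_write_concern_set_w (wc, 2);
mongoc_collection_set_read_prefs (collection, prefs);
mongoc_collection_set_write_concern (collection, wc);

/* the collection keeps its own copies */
mongoc_read_prefs_destroy (prefs);
mongoc_write_concern_destroy (wc);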

mongoc_cursor_t

Client-side cursor abstraction

Synopsis

typedef struct _mongoc_cursor_t mongoc_cursor_t;


mongoc_cursor_t provides access to a MongoDB query cursor. It wraps up the wire protocol negotiation required to initiate a query and retrieve an unknown number of documents.

Common cursor operations include:

  • Determine which host we've connected to with mongoc_cursor_get_host().
  • Retrieve more records with repeated calls to mongoc_cursor_next().
  • Clone a query to repeat execution at a later point with mongoc_cursor_clone().
  • Test for errors with mongoc_cursor_error().

Cursors are lazy, meaning that no connection is established and no network traffic occurs until the first call to mongoc_cursor_next().

Thread Safety

mongoc_cursor_t is NOT thread safe. It may only be used from within the thread in which it was created.

Example

Query MongoDB and iterate results

/* gcc example-client.c -o example-client $(pkg-config --cflags --libs
 * libmongoc-1.0) */

/* ./example-client [CONNECTION_STRING [COLLECTION_NAME]] */

#include <mongoc/mongoc.h>
#include <stdio.h>
#include <stdlib.h>

int
main (int argc, char *argv[])
{
mongoc_client_t *client;
mongoc_collection_t *collection;
mongoc_cursor_t *cursor;
bson_error_t error;
const bson_t *doc;
const char *collection_name = "test";
bson_t query;
char *str;
const char *uri_string = "mongodb://127.0.0.1/?appname=client-example";
mongoc_uri_t *uri;
mongoc_init ();
if (argc > 1) {
uri_string = argv[1];
}
if (argc > 2) {
collection_name = argv[2];
}
uri = mongoc_uri_new_with_error (uri_string, &error);
if (!uri) {
fprintf (stderr,
"failed to parse URI: %s\n"
"error message: %s\n",
uri_string,
error.message);
return EXIT_FAILURE;
}
client = mongoc_client_new_from_uri (uri);
if (!client) {
return EXIT_FAILURE;
}
mongoc_client_set_error_api (client, 2);
bson_init (&query);
collection = mongoc_client_get_collection (client, "test", collection_name);
cursor = mongoc_collection_find_with_opts (
collection,
&query,
NULL, /* additional options */
NULL); /* read prefs, NULL for default */
while (mongoc_cursor_next (cursor, &doc)) {
str = bson_as_canonical_extended_json (doc, NULL);
fprintf (stdout, "%s\n", str);
bson_free (str);
}
if (mongoc_cursor_error (cursor, &error)) {
fprintf (stderr, "Cursor Failure: %s\n", error.message);
return EXIT_FAILURE;
}
bson_destroy (&query);
mongoc_cursor_destroy (cursor);
mongoc_collection_destroy (collection);
mongoc_uri_destroy (uri);
mongoc_client_destroy (client);
mongoc_cleanup ();
return EXIT_SUCCESS;
}


mongoc_database_t

MongoDB Database Abstraction

Synopsis

typedef struct _mongoc_database_t mongoc_database_t;


mongoc_database_t provides access to a MongoDB database. This handle is useful for actions on a particular database object. It is not a container for mongoc_collection_t structures.

Read preferences and write concerns are inherited from the parent client. They can be overridden with mongoc_database_set_read_prefs() and mongoc_database_set_write_concern().

Examples

#include <mongoc/mongoc.h>
int
main (int argc, char *argv[])
{

mongoc_database_t *database;
mongoc_client_t *client;
mongoc_init ();
client = mongoc_client_new ("mongodb://localhost/");
database = mongoc_client_get_database (client, "test");
mongoc_database_destroy (database);
mongoc_client_destroy (client);
mongoc_cleanup ();
return 0;
}


mongoc_delete_flags_t

Flags for deletion operations

Synopsis

typedef enum {
   MONGOC_DELETE_NONE = 0,
   MONGOC_DELETE_SINGLE_REMOVE = 1 << 0,
} mongoc_delete_flags_t;


Deprecated

WARNING:

These flags are deprecated and should not be used in new code.


Please use mongoc_collection_delete_one() or mongoc_collection_delete_many() instead.

mongoc_find_and_modify_opts_t

find_and_modify abstraction

Synopsis

mongoc_find_and_modify_opts_t is a builder interface to construct a find_and_modify command.

It was created to accommodate new arguments to the MongoDB findAndModify command.

As of MongoDB 3.2, the mongoc_write_concern_t specified on the mongoc_collection_t will be used, if any.

Example

flags.c

void
fam_flags (mongoc_collection_t *collection)
{

mongoc_find_and_modify_opts_t *opts;
bson_t reply;
bson_error_t error;
bson_t query = BSON_INITIALIZER;
bson_t *update;
bool success;
/* Find Zlatan Ibrahimovic, the striker */
BSON_APPEND_UTF8 (&query, "firstname", "Zlatan");
BSON_APPEND_UTF8 (&query, "lastname", "Ibrahimovic");
BSON_APPEND_UTF8 (&query, "profession", "Football player");
BSON_APPEND_INT32 (&query, "age", 34);
BSON_APPEND_INT32 (
&query, "goals", (16 + 35 + 23 + 57 + 16 + 14 + 28 + 84) + (1 + 6 + 62));
/* Add his football position */
update = BCON_NEW ("$set", "{", "position", BCON_UTF8 ("striker"), "}");
opts = mongoc_find_and_modify_opts_new ();
mongoc_find_and_modify_opts_set_update (opts, update);
/* Create the document if it didn't exist, and return the updated document */
mongoc_find_and_modify_opts_set_flags (
opts, MONGOC_FIND_AND_MODIFY_UPSERT | MONGOC_FIND_AND_MODIFY_RETURN_NEW);
success = mongoc_collection_find_and_modify_with_opts (
collection, &query, opts, &reply, &error);
if (success) {
char *str;
str = bson_as_canonical_extended_json (&reply, NULL);
printf ("%s\n", str);
bson_free (str);
} else {
fprintf (
stderr, "Got error: \"%s\" on line %d\n", error.message, __LINE__);
}
bson_destroy (&reply);
bson_destroy (update);
bson_destroy (&query);
mongoc_find_and_modify_opts_destroy (opts);
}


bypass.c

void
fam_bypass (mongoc_collection_t *collection)
{

mongoc_find_and_modify_opts_t *opts;
bson_t reply;
bson_t *update;
bson_error_t error;
bson_t query = BSON_INITIALIZER;
bool success;
/* Find Zlatan Ibrahimovic, the striker */
BSON_APPEND_UTF8 (&query, "firstname", "Zlatan");
BSON_APPEND_UTF8 (&query, "lastname", "Ibrahimovic");
BSON_APPEND_UTF8 (&query, "profession", "Football player");
/* Bump his age */
update = BCON_NEW ("$inc", "{", "age", BCON_INT32 (1), "}");
opts = mongoc_find_and_modify_opts_new ();
mongoc_find_and_modify_opts_set_update (opts, update);
/* He can still play, even though he is pretty old. */
mongoc_find_and_modify_opts_set_bypass_document_validation (opts, true);
success = mongoc_collection_find_and_modify_with_opts (
collection, &query, opts, &reply, &error);
if (success) {
char *str;
str = bson_as_canonical_extended_json (&reply, NULL);
printf ("%s\n", str);
bson_free (str);
} else {
fprintf (
stderr, "Got error: \"%s\" on line %d\n", error.message, __LINE__);
}
bson_destroy (&reply);
bson_destroy (update);
bson_destroy (&query);
mongoc_find_and_modify_opts_destroy (opts);
}


update.c

void
fam_update (mongoc_collection_t *collection)
{

mongoc_find_and_modify_opts_t *opts;
bson_t *update;
bson_t reply;
bson_error_t error;
bson_t query = BSON_INITIALIZER;
bool success;
/* Find Zlatan Ibrahimovic */
BSON_APPEND_UTF8 (&query, "firstname", "Zlatan");
BSON_APPEND_UTF8 (&query, "lastname", "Ibrahimovic");
/* Make him a book author */
update = BCON_NEW ("$set", "{", "author", BCON_BOOL (true), "}");
opts = mongoc_find_and_modify_opts_new ();
/* Note that the document returned is the _previous_ version of the document
* To fetch the modified new version, use
* mongoc_find_and_modify_opts_set_flags (opts,
* MONGOC_FIND_AND_MODIFY_RETURN_NEW);
*/
mongoc_find_and_modify_opts_set_update (opts, update);
success = mongoc_collection_find_and_modify_with_opts (
collection, &query, opts, &reply, &error);
if (success) {
char *str;
str = bson_as_canonical_extended_json (&reply, NULL);
printf ("%s\n", str);
bson_free (str);
} else {
fprintf (
stderr, "Got error: \"%s\" on line %d\n", error.message, __LINE__);
}
bson_destroy (&reply);
bson_destroy (update);
bson_destroy (&query);
mongoc_find_and_modify_opts_destroy (opts);
}


fields.c

void
fam_fields (mongoc_collection_t *collection)
{

mongoc_find_and_modify_opts_t *opts;
bson_t fields = BSON_INITIALIZER;
bson_t *update;
bson_t reply;
bson_error_t error;
bson_t query = BSON_INITIALIZER;
bool success;
/* Find Zlatan Ibrahimovic */
BSON_APPEND_UTF8 (&query, "lastname", "Ibrahimovic");
BSON_APPEND_UTF8 (&query, "firstname", "Zlatan");
/* Return his goal tally */
BSON_APPEND_INT32 (&fields, "goals", 1);
/* Bump his goal tally */
update = BCON_NEW ("$inc", "{", "goals", BCON_INT32 (1), "}");
opts = mongoc_find_and_modify_opts_new ();
mongoc_find_and_modify_opts_set_update (opts, update);
mongoc_find_and_modify_opts_set_fields (opts, &fields);
/* Return the new tally */
mongoc_find_and_modify_opts_set_flags (opts,
MONGOC_FIND_AND_MODIFY_RETURN_NEW);
success = mongoc_collection_find_and_modify_with_opts (
collection, &query, opts, &reply, &error);
if (success) {
char *str;
str = bson_as_canonical_extended_json (&reply, NULL);
printf ("%s\n", str);
bson_free (str);
} else {
fprintf (
stderr, "Got error: \"%s\" on line %d\n", error.message, __LINE__);
}
bson_destroy (&reply);
bson_destroy (update);
bson_destroy (&fields);
bson_destroy (&query);
mongoc_find_and_modify_opts_destroy (opts);
}


sort.c

void
fam_sort (mongoc_collection_t *collection)
{

mongoc_find_and_modify_opts_t *opts;
bson_t *update;
bson_t sort = BSON_INITIALIZER;
bson_t reply;
bson_error_t error;
bson_t query = BSON_INITIALIZER;
bool success;
/* Find all users with the lastname Ibrahimovic */
BSON_APPEND_UTF8 (&query, "lastname", "Ibrahimovic");
/* Sort by age (descending) */
BSON_APPEND_INT32 (&sort, "age", -1);
/* Bump his goal tally */
update = BCON_NEW ("$set", "{", "oldest", BCON_BOOL (true), "}");
opts = mongoc_find_and_modify_opts_new ();
mongoc_find_and_modify_opts_set_update (opts, update);
mongoc_find_and_modify_opts_set_sort (opts, &sort);
success = mongoc_collection_find_and_modify_with_opts (
collection, &query, opts, &reply, &error);
if (success) {
char *str;
str = bson_as_canonical_extended_json (&reply, NULL);
printf ("%s\n", str);
bson_free (str);
} else {
fprintf (
stderr, "Got error: \"%s\" on line %d\n", error.message, __LINE__);
}
bson_destroy (&reply);
bson_destroy (update);
bson_destroy (&sort);
bson_destroy (&query);
mongoc_find_and_modify_opts_destroy (opts);
}


opts.c

void
fam_opts (mongoc_collection_t *collection)
{

mongoc_find_and_modify_opts_t *opts;
bson_t reply;
bson_t *update;
bson_error_t error;
bson_t query = BSON_INITIALIZER;
mongoc_write_concern_t *wc;
bson_t extra = BSON_INITIALIZER;
bool success;
/* Find Zlatan Ibrahimovic, the striker */
BSON_APPEND_UTF8 (&query, "firstname", "Zlatan");
BSON_APPEND_UTF8 (&query, "lastname", "Ibrahimovic");
BSON_APPEND_UTF8 (&query, "profession", "Football player");
/* Bump his age */
update = BCON_NEW ("$inc", "{", "age", BCON_INT32 (1), "}");
opts = mongoc_find_and_modify_opts_new ();
mongoc_find_and_modify_opts_set_update (opts, update);
/* Abort if the operation takes too long. */
mongoc_find_and_modify_opts_set_max_time_ms (opts, 100);
/* Set write concern w: 2 */
wc = mongoc_write_concern_new ();
mongoc_write_concern_set_w (wc, 2);
mongoc_write_concern_append (wc, &extra);
/* Some future findAndModify option the driver doesn't support conveniently
*/
BSON_APPEND_INT32 (&extra, "futureOption", 42);
mongoc_find_and_modify_opts_append (opts, &extra);
success = mongoc_collection_find_and_modify_with_opts (
collection, &query, opts, &reply, &error);
if (success) {
char *str;
str = bson_as_canonical_extended_json (&reply, NULL);
printf ("%s\n", str);
bson_free (str);
} else {
fprintf (
stderr, "Got error: \"%s\" on line %d\n", error.message, __LINE__);
}
bson_destroy (&reply);
bson_destroy (&extra);
bson_destroy (update);
bson_destroy (&query);
mongoc_write_concern_destroy (wc);
mongoc_find_and_modify_opts_destroy (opts);
}


fam.c

int
main (void)
{

mongoc_collection_t *collection;
mongoc_database_t *database;
mongoc_client_t *client;
const char *uri_string =
"mongodb://localhost:27017/admin?appname=find-and-modify-opts-example";
mongoc_uri_t *uri;
bson_error_t error;
bson_t *options;
mongoc_init ();
uri = mongoc_uri_new_with_error (uri_string, &error);
if (!uri) {
fprintf (stderr,
"failed to parse URI: %s\n"
"error message: %s\n",
uri_string,
error.message);
return EXIT_FAILURE;
}
client = mongoc_client_new_from_uri (uri);
if (!client) {
return EXIT_FAILURE;
}
mongoc_client_set_error_api (client, 2);
database = mongoc_client_get_database (client, "databaseName");
options = BCON_NEW ("validator",
"{",
"age",
"{",
"$lte",
BCON_INT32 (34),
"}",
"}",
"validationAction",
BCON_UTF8 ("error"),
"validationLevel",
BCON_UTF8 ("moderate"));
collection = mongoc_database_create_collection (
database, "collectionName", options, &error);
if (!collection) {
fprintf (
stderr, "Got error: \"%s\" on line %d\n", error.message, __LINE__);
return EXIT_FAILURE;
}
fam_flags (collection);
fam_bypass (collection);
fam_update (collection);
fam_fields (collection);
fam_opts (collection);
fam_sort (collection);
mongoc_collection_drop (collection, NULL);
bson_destroy (options);
mongoc_uri_destroy (uri);
mongoc_database_destroy (database);
mongoc_collection_destroy (collection);
mongoc_client_destroy (client);
mongoc_cleanup ();
return EXIT_SUCCESS;
}


Outputs:

{

"lastErrorObject": {
"updatedExisting": false,
"n": 1,
"upserted": {
"$oid": "56562a99d13e6d86239c7b00"
}
},
"value": {
"_id": {
"$oid": "56562a99d13e6d86239c7b00"
},
"age": 34,
"firstname": "Zlatan",
"goals": 342,
"lastname": "Ibrahimovic",
"profession": "Football player",
"position": "striker"
},
"ok": 1
}

{
"lastErrorObject": {
"updatedExisting": true,
"n": 1
},
"value": {
"_id": {
"$oid": "56562a99d13e6d86239c7b00"
},
"age": 34,
"firstname": "Zlatan",
"goals": 342,
"lastname": "Ibrahimovic",
"profession": "Football player",
"position": "striker"
},
"ok": 1
}

{
"lastErrorObject": {
"updatedExisting": true,
"n": 1
},
"value": {
"_id": {
"$oid": "56562a99d13e6d86239c7b00"
},
"age": 35,
"firstname": "Zlatan",
"goals": 342,
"lastname": "Ibrahimovic",
"profession": "Football player",
"position": "striker"
},
"ok": 1
}

{
"lastErrorObject": {
"updatedExisting": true,
"n": 1
},
"value": {
"_id": {
"$oid": "56562a99d13e6d86239c7b00"
},
"goals": 343
},
"ok": 1
}

{
"lastErrorObject": {
"updatedExisting": true,
"n": 1
},
"value": {
"_id": {
"$oid": "56562a99d13e6d86239c7b00"
},
"age": 35,
"firstname": "Zlatan",
"goals": 343,
"lastname": "Ibrahimovic",
"profession": "Football player",
"position": "striker",
"author": true
},
"ok": 1
}


mongoc_gridfs_file_list_t

Synopsis

#include <mongoc/mongoc.h>
typedef struct _mongoc_gridfs_file_list_t mongoc_gridfs_file_list_t;


Description

mongoc_gridfs_file_list_t provides a gridfs file list abstraction. It provides iteration and basic marshalling on top of a regular mongoc_collection_find_with_opts() style query. In interface, it's styled after mongoc_cursor_t.

Example

mongoc_gridfs_file_list_t *list;
mongoc_gridfs_file_t *file;
list = mongoc_gridfs_find (gridfs, query);
while ((file = mongoc_gridfs_file_list_next (list))) {
   do_something (file);
   mongoc_gridfs_file_destroy (file);
}

mongoc_gridfs_file_list_destroy (list);


mongoc_gridfs_file_opt_t

Synopsis

typedef struct {
   const char *md5;
   const char *filename;
   const char *content_type;
   const bson_t *aliases;
   const bson_t *metadata;
   uint32_t chunk_size;
} mongoc_gridfs_file_opt_t;


Description

This structure contains options that can be set on a mongoc_gridfs_file_t. It can be used by various functions when creating a new gridfs file.
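
For instance, a small sketch (assuming gridfs and stream already exist) that sets a filename and content type when creating a file:

mongoc_gridfs_file_opt_t opt = {0};
mongoc_gridfs_file_t *file;

opt.filename = "example.txt";
opt.content_type = "text/plain";

file = mongoc_gridfs_create_file_from_stream (gridfs, stream, &opt);
if (!mongoc_gridfs_file_save (file)) {
   /* inspect the failure with mongoc_gridfs_file_error() */
}
mongoc_gridfs_file_destroy (file);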

mongoc_gridfs_file_t

Synopsis

typedef struct _mongoc_gridfs_file_t mongoc_gridfs_file_t;


Description

This structure provides a MongoDB GridFS file abstraction. It provides several APIs.

  • readv, writev, seek, and tell.
  • General file metadata such as filename and length.
  • GridFS metadata such as md5, filename, content_type, aliases, metadata, chunk_size, and upload_date.

Thread Safety

This structure is NOT thread-safe and should only be used from one thread at a time.

SEE ALSO:

  • mongoc_client_t
  • mongoc_gridfs_t
  • mongoc_gridfs_file_list_t
  • mongoc_gridfs_file_opt_t

mongoc_gridfs_bucket_t

Synopsis

#include <mongoc/mongoc.h>
typedef struct _mongoc_gridfs_bucket_t mongoc_gridfs_bucket_t;


Description

mongoc_gridfs_bucket_t provides a spec-compliant MongoDB GridFS implementation, superseding mongoc_gridfs_t. See the GridFS MongoDB documentation.

Thread Safety

mongoc_gridfs_bucket_t is NOT thread-safe and should only be used in the same thread as the owning mongoc_client_t.

Lifecycle

It is an error to free a mongoc_gridfs_bucket_t before freeing all derived instances of mongoc_stream_t. The owning mongoc_client_t must outlive the mongoc_gridfs_bucket_t.

Example

example-gridfs-bucket.c

#include <stdio.h>
#include <stdlib.h>
#include <mongoc/mongoc.h>
int
main (int argc, char *argv[])
{

const char *uri_string =
"mongodb://localhost:27017/?appname=new-gridfs-example";
mongoc_client_t *client;
mongoc_database_t *db;
mongoc_stream_t *file_stream;
mongoc_gridfs_bucket_t *bucket;
mongoc_cursor_t *cursor;
bson_t filter;
bool res;
bson_value_t file_id;
bson_error_t error;
const bson_t *doc;
char *str;
mongoc_init ();
if (argc != 3) {
fprintf (stderr, "usage: %s SOURCE_FILE_PATH FILE_COPY_PATH\n", argv[0]);
return EXIT_FAILURE;
}
/* 1. Make a bucket. */
client = mongoc_client_new (uri_string);
db = mongoc_client_get_database (client, "test");
bucket = mongoc_gridfs_bucket_new (db, NULL, NULL, &error);
if (!bucket) {
printf ("Error creating gridfs bucket: %s\n", error.message);
return EXIT_FAILURE;
}
/* 2. Insert a file. */
file_stream = mongoc_stream_file_new_for_path (argv[1], O_RDONLY, 0);
res = mongoc_gridfs_bucket_upload_from_stream (
bucket, "my-file", file_stream, NULL, &file_id, &error);
if (!res) {
printf ("Error uploading file: %s\n", error.message);
return EXIT_FAILURE;
}
mongoc_stream_close (file_stream);
mongoc_stream_destroy (file_stream);
/* 3. Download the file in GridFS to a local file. */
file_stream = mongoc_stream_file_new_for_path (argv[2], O_CREAT | O_RDWR, 0);
if (!file_stream) {
perror ("Error opening file stream");
return EXIT_FAILURE;
}
res = mongoc_gridfs_bucket_download_to_stream (
bucket, &file_id, file_stream, &error);
if (!res) {
printf ("Error downloading file to stream: %s\n", error.message);
return EXIT_FAILURE;
}
mongoc_stream_close (file_stream);
mongoc_stream_destroy (file_stream);
/* 4. List what files are available in GridFS. */
bson_init (&filter);
cursor = mongoc_gridfs_bucket_find (bucket, &filter, NULL);
while (mongoc_cursor_next (cursor, &doc)) {
str = bson_as_canonical_extended_json (doc, NULL);
printf ("%s\n", str);
bson_free (str);
}
/* 5. Delete the file that we added. */
res = mongoc_gridfs_bucket_delete_by_id (bucket, &file_id, &error);
if (!res) {
printf ("Error deleting the file: %s\n", error.message);
return EXIT_FAILURE;
}
/* 6. Cleanup. The file stream was already closed and destroyed above. */
mongoc_cursor_destroy (cursor);
bson_destroy (&filter);
mongoc_gridfs_bucket_destroy (bucket);
mongoc_database_destroy (db);
mongoc_client_destroy (client);
mongoc_cleanup ();
return EXIT_SUCCESS;
}


SEE ALSO:

The MongoDB GridFS specification.

The non spec-compliant mongoc_gridfs_t.



mongoc_gridfs_t

WARNING:

This GridFS implementation does not conform to the MongoDB GridFS specification. For a spec compliant implementation, use mongoc_gridfs_bucket_t.


Synopsis

#include <mongoc/mongoc.h>
typedef struct _mongoc_gridfs_t mongoc_gridfs_t;


Description

mongoc_gridfs_t provides a MongoDB gridfs implementation. The system as a whole is made up of gridfs objects, which contain gridfs_files and gridfs_file_lists. Essentially, a basic file system API.

There are extensive caveats about the kinds of use cases GridFS is practical for. In particular, any writing after initial file creation is likely both to break concurrent readers and to be quite expensive. That said, this implementation does allow arbitrary writes to existing GridFS objects; just use them with caution.

mongoc_gridfs also integrates tightly with the mongoc_stream_t abstraction, which provides convenient wrapping for file creation and reading/writing. It can be used without streams, but it's worth checking whether your problem fits that model.

WARNING:

mongoc_gridfs_t does not support read preferences. In a replica set, GridFS queries are always routed to the primary.


Thread Safety

mongoc_gridfs_t is NOT thread-safe and should only be used in the same thread as the owning mongoc_client_t.

Lifecycle

It is an error to free a mongoc_gridfs_t before freeing all related instances of mongoc_gridfs_file_t and mongoc_gridfs_file_list_t.

Example

example-gridfs.c

#include <assert.h>
#include <mongoc/mongoc.h>
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
int
main (int argc, char *argv[])
{

mongoc_gridfs_t *gridfs;
mongoc_gridfs_file_t *file;
mongoc_gridfs_file_list_t *list;
mongoc_gridfs_file_opt_t opt = {0};
mongoc_client_t *client;
const char *uri_string = "mongodb://127.0.0.1:27017/?appname=gridfs-example";
mongoc_uri_t *uri;
mongoc_stream_t *stream;
bson_t filter;
bson_t opts;
bson_t child;
bson_error_t error;
ssize_t r;
char buf[4096];
mongoc_iovec_t iov;
const char *filename;
const char *command;
bson_value_t id;
if (argc < 2) {
fprintf (stderr, "usage - %s command ...\n", argv[0]);
return EXIT_FAILURE;
}
mongoc_init ();
iov.iov_base = (void *) buf;
iov.iov_len = sizeof buf;
/* connect to localhost client */
uri = mongoc_uri_new_with_error (uri_string, &error);
if (!uri) {
fprintf (stderr,
"failed to parse URI: %s\n"
"error message: %s\n",
uri_string,
error.message);
return EXIT_FAILURE;
}
client = mongoc_client_new_from_uri (uri);
assert (client);
mongoc_client_set_error_api (client, 2);
/* grab a gridfs handle in test prefixed by fs */
gridfs = mongoc_client_get_gridfs (client, "test", "fs", &error);
assert (gridfs);
command = argv[1];
filename = argv[2];
if (strcmp (command, "read") == 0) {
if (argc != 3) {
fprintf (stderr, "usage - %s read filename\n", argv[0]);
return EXIT_FAILURE;
}
file = mongoc_gridfs_find_one_by_filename (gridfs, filename, &error);
assert (file);
stream = mongoc_stream_gridfs_new (file);
assert (stream);
for (;;) {
r = mongoc_stream_readv (stream, &iov, 1, -1, 0);
assert (r >= 0);
if (r == 0) {
break;
}
if (fwrite (iov.iov_base, 1, r, stdout) != r) {
MONGOC_ERROR ("Failed to write to stdout. Exiting.\n");
exit (1);
}
}
mongoc_stream_destroy (stream);
mongoc_gridfs_file_destroy (file);
} else if (strcmp (command, "list") == 0) {
bson_init (&filter);
bson_init (&opts);
bson_append_document_begin (&opts, "sort", -1, &child);
BSON_APPEND_INT32 (&child, "filename", 1);
bson_append_document_end (&opts, &child);
list = mongoc_gridfs_find_with_opts (gridfs, &filter, &opts);
bson_destroy (&filter);
bson_destroy (&opts);
while ((file = mongoc_gridfs_file_list_next (list))) {
const char *name = mongoc_gridfs_file_get_filename (file);
printf ("%s\n", name ? name : "?");
mongoc_gridfs_file_destroy (file);
}
mongoc_gridfs_file_list_destroy (list);
} else if (strcmp (command, "write") == 0) {
if (argc != 4) {
fprintf (stderr, "usage - %s write filename input_file\n", argv[0]);
return EXIT_FAILURE;
}
stream = mongoc_stream_file_new_for_path (argv[3], O_RDONLY, 0);
assert (stream);
opt.filename = filename;
/* the driver generates a file_id for you */
file = mongoc_gridfs_create_file_from_stream (gridfs, stream, &opt);
assert (file);
id.value_type = BSON_TYPE_INT32;
id.value.v_int32 = 1;
/* optional: the following method specifies a file_id of any
BSON type */
if (!mongoc_gridfs_file_set_id (file, &id, &error)) {
fprintf (stderr, "%s\n", error.message);
return EXIT_FAILURE;
}
if (!mongoc_gridfs_file_save (file)) {
mongoc_gridfs_file_error (file, &error);
fprintf (stderr, "Could not save: %s\n", error.message);
return EXIT_FAILURE;
}
mongoc_gridfs_file_destroy (file);
} else {
fprintf (stderr, "Unknown command");
return EXIT_FAILURE;
}
mongoc_gridfs_destroy (gridfs);
mongoc_uri_destroy (uri);
mongoc_client_destroy (client);
mongoc_cleanup ();
return EXIT_SUCCESS;
}


SEE ALSO:

The MongoDB GridFS specification.

The spec-compliant mongoc_gridfs_bucket_t.



mongoc_host_list_t

Synopsis

typedef struct {
   mongoc_host_list_t *next;
   char host[BSON_HOST_NAME_MAX + 1];
   char host_and_port[BSON_HOST_NAME_MAX + 7];
   uint16_t port;
   int family;
   void *padding[4];
} mongoc_host_list_t;


Description

The host and port of a MongoDB server. Can be part of a linked list: for example the return value of mongoc_uri_get_hosts() when multiple hosts are provided in the MongoDB URI.
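
For example, a minimal sketch (the URI string here is a placeholder) that walks the linked list returned by mongoc_uri_get_hosts():

#include <mongoc/mongoc.h>
#include <stdio.h>

int
main (void)
{
   mongoc_uri_t *uri;
   const mongoc_host_list_t *host;

   mongoc_init ();
   uri = mongoc_uri_new ("mongodb://db1.example.com,db2.example.com:2500/");

   /* walk the linked list of hosts parsed from the URI */
   for (host = mongoc_uri_get_hosts (uri); host; host = host->next) {
      printf ("%s (port %hu)\n", host->host_and_port, host->port);
   }

   mongoc_uri_destroy (uri);
   mongoc_cleanup ();
   return 0;
}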

SEE ALSO:

mongoc_uri_get_hosts() and mongoc_cursor_get_host().



mongoc_index_opt_geo_t

Synopsis

#include <mongoc/mongoc.h>
typedef struct {
   uint8_t twod_sphere_version;
   uint8_t twod_bits_precision;
   double twod_location_min;
   double twod_location_max;
   double haystack_bucket_size;
   uint8_t *padding[32];
} mongoc_index_opt_geo_t;


Description

This structure contains the options that may be used for tuning a GEO index.

SEE ALSO:

mongoc_index_opt_t

mongoc_index_opt_wt_t



mongoc_index_opt_t

Synopsis

#include <mongoc/mongoc.h>
typedef struct {
   bool is_initialized;
   bool background;
   bool unique;
   const char *name;
   bool drop_dups;
   bool sparse;
   int32_t expire_after_seconds;
   int32_t v;
   const bson_t *weights;
   const char *default_language;
   const char *language_override;
   mongoc_index_opt_geo_t *geo_options;
   mongoc_index_opt_storage_t *storage_options;
   const bson_t *partial_filter_expression;
   const bson_t *collation;
   void *padding[4];
} mongoc_index_opt_t;


Deprecated

This structure is deprecated and should not be used in new code. See Creating Indexes.

Description

This structure contains the options that may be used for tuning a specific index.

See the createIndexes documentation in the MongoDB manual for descriptions of individual options.

NOTE:

dropDups is deprecated as of MongoDB version 3.0.0. This option is silently ignored by the server and unique index builds using this option will fail if a duplicate value is detected.


Example

{
   bson_t keys;
   bson_error_t error;
   mongoc_index_opt_t opt;
   mongoc_index_opt_geo_t geo_opt;

   mongoc_index_opt_init (&opt);
   mongoc_index_opt_geo_init (&geo_opt);

   bson_init (&keys);
   BSON_APPEND_UTF8 (&keys, "location", "2d");

   geo_opt.twod_location_min = -123;
   geo_opt.twod_location_max = +123;
   geo_opt.twod_bits_precision = 30;
   opt.geo_options = &geo_opt;

   collection = mongoc_client_get_collection (client, "test", "geo_test");

   if (mongoc_collection_create_index (collection, &keys, &opt, &error)) {
      /* Successfully created the geo index */
   }

   bson_destroy (&keys);
   mongoc_collection_destroy (collection);
}


SEE ALSO:

mongoc_index_opt_geo_t

mongoc_index_opt_wt_t



mongoc_index_opt_wt_t

Synopsis

#include <mongoc/mongoc.h>
typedef struct {
   mongoc_index_opt_storage_t base;
   const char *config_str;
   void *padding[8];
} mongoc_index_opt_wt_t;


Description

This structure contains the options that may be used for tuning a WiredTiger specific index.

SEE ALSO:

mongoc_index_opt_t

mongoc_index_opt_geo_t



mongoc_insert_flags_t

Flags for insert operations

Synopsis

typedef enum {
   MONGOC_INSERT_NONE = 0,
   MONGOC_INSERT_CONTINUE_ON_ERROR = 1 << 0,
} mongoc_insert_flags_t;

#define MONGOC_INSERT_NO_VALIDATE (1U << 31)


Description

These flags correspond to the MongoDB wire protocol. They may be bitwise or'd together. They may modify how an insert happens on the MongoDB server.

Flag Values

MONGOC_INSERT_NONE Specify no insert flags.
MONGOC_INSERT_CONTINUE_ON_ERROR Continue inserting documents from the insertion set even if one insert fails.
MONGOC_INSERT_NO_VALIDATE Do not validate insertion documents before performing an insert. Validation can be expensive, so this can save some time if you know your documents are already valid.

mongoc_iovec_t

Synopsis

Synopsis

#include <mongoc/mongoc.h>
#ifdef _WIN32
typedef struct {
   u_long iov_len;
   char *iov_base;
} mongoc_iovec_t;
#else
typedef struct iovec mongoc_iovec_t;
#endif


The mongoc_iovec_t structure is a portability abstraction for consumers of the mongoc_stream_t interfaces. It allows for scatter/gather I/O through the socket subsystem.

WARNING:

When writing portable code, beware of the ordering of iov_len and iov_base as they are different on various platforms. Therefore, you should not use C initializers for initialization.
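
A minimal sketch of the portable approach, assigning the members by name instead of relying on their order (the helper function name is illustrative):

#include <mongoc/mongoc.h>

static void
fill_iovec (mongoc_iovec_t *iov, char *buf, size_t buflen)
{
   /* assign by name; the member order and exact types differ between
    * platforms, so a C initializer like {buf, buflen} is not portable */
   iov->iov_base = buf;
   iov->iov_len = buflen;
}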


mongoc_matcher_t

Client-side document matching abstraction

Synopsis

typedef struct _mongoc_matcher_t mongoc_matcher_t;


mongoc_matcher_t provides a reduced interface for client-side matching of BSON documents.

It can perform the basics such as $in, $nin, $eq, $neq, $gt, $gte, $lt, and $lte.

WARNING:

mongoc_matcher_t does not currently support the full spectrum of query operations that the MongoDB server supports.


Deprecated

WARNING:

mongoc_matcher_t is deprecated and will be removed in version 2.0.


Example

Filter a sequence of BSON documents from STDIN based on a query

#include <bson/bson.h>
#include <mongoc/mongoc.h>
#include <stdio.h>
int
main (int argc, char *argv[])
{

mongoc_matcher_t *matcher;
bson_reader_t *reader;
const bson_t *bson;
bson_t *spec;
char *str;
int fd;
mongoc_init ();
#ifdef _WIN32
fd = fileno (stdin);
#else
fd = STDIN_FILENO;
#endif
reader = bson_reader_new_from_fd (fd, false);
spec = BCON_NEW ("hello", "world");
matcher = mongoc_matcher_new (spec, NULL);
while ((bson = bson_reader_read (reader, NULL))) {
if (mongoc_matcher_match (matcher, bson)) {
str = bson_as_canonical_extended_json (bson, NULL);
printf ("%s\n", str);
bson_free (str);
}
}
bson_reader_destroy (reader);
bson_destroy (spec);
mongoc_cleanup ();
return 0;
}


mongoc_optional_t

A struct to store optional boolean values.

Synopsis

Used to specify optional boolean flags, which may remain unset.

This is used within mongoc_server_api_t to track whether a flag was explicitly set.

mongoc_query_flags_t

Flags for query operations

Synopsis

typedef enum {
   MONGOC_QUERY_NONE = 0,
   MONGOC_QUERY_TAILABLE_CURSOR = 1 << 1,
   MONGOC_QUERY_SECONDARY_OK = 1 << 2,
   MONGOC_QUERY_OPLOG_REPLAY = 1 << 3,
   MONGOC_QUERY_NO_CURSOR_TIMEOUT = 1 << 4,
   MONGOC_QUERY_AWAIT_DATA = 1 << 5,
   MONGOC_QUERY_EXHAUST = 1 << 6,
   MONGOC_QUERY_PARTIAL = 1 << 7,
} mongoc_query_flags_t;


Description

These flags correspond to the MongoDB wire protocol. They may be bitwise or'd together. They may modify how a query is performed in the MongoDB server.

Flag Values

MONGOC_QUERY_NONE Specify no query flags.
MONGOC_QUERY_TAILABLE_CURSOR Cursor will not be closed when the last data is retrieved. You can resume this cursor later.
MONGOC_QUERY_SECONDARY_OK Allow query of replica set secondaries.
MONGOC_QUERY_OPLOG_REPLAY Used internally by MongoDB.
MONGOC_QUERY_NO_CURSOR_TIMEOUT The server normally times out an idle cursor after an inactivity period (10 minutes). This prevents that.
MONGOC_QUERY_AWAIT_DATA Use with MONGOC_QUERY_TAILABLE_CURSOR. Block rather than returning no data. After a period, time out.
MONGOC_QUERY_EXHAUST Stream the data down full blast in multiple "reply" packets. Faster when you are pulling down a lot of data and you know you want to retrieve it all. Only applies to cursors created from a find operation (i.e. mongoc_collection_find()).
MONGOC_QUERY_PARTIAL Get partial results from mongos if some shards are down (instead of throwing an error).

mongoc_rand

MongoDB Random Number Generator

Synopsis

void
mongoc_rand_add (const void *buf, int num, double entropy);
void
mongoc_rand_seed (const void *buf, int num);
int
mongoc_rand_status (void);


Description

The mongoc_rand family of functions provide access to the low level randomness primitives used by the MongoDB C Driver. In particular, they control the creation of cryptographically strong pseudo-random bytes required by some security mechanisms.

While we can usually pull enough entropy from the environment, you may be required to seed the PRNG manually depending on your OS, hardware and other entropy consumers running on the same system.

Entropy

mongoc_rand_add and mongoc_rand_seed allow the user to directly provide entropy. They differ insofar as mongoc_rand_seed requires that each bit provided is fully random. mongoc_rand_add allows the user to specify the degree of randomness in the provided bytes as well.

Status

The mongoc_rand_status function allows the user to check the status of the mongoc PRNG. This can be used to guarantee sufficient entropy at program startup, rather than waiting for runtime errors to occur.
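
A minimal sketch of such a startup check, assuming a build in which the mongoc_rand functions are available (the seed buffer here is a placeholder; in practice it must come from a real entropy source):

#include <mongoc/mongoc.h>
#include <stdio.h>

static void
ensure_prng_seeded (void)
{
   /* placeholder seed material; every bit passed to mongoc_rand_seed
    * must be fully random in a real application */
   unsigned char seed[32] = {0};

   if (!mongoc_rand_status ()) {
      mongoc_rand_seed (seed, (int) sizeof seed);
   }

   if (!mongoc_rand_status ()) {
      fprintf (stderr, "PRNG still lacks sufficient entropy\n");
   }
}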

mongoc_read_concern_t

Read Concern abstraction

Synopsis

New in MongoDB 3.2.

The mongoc_read_concern_t allows clients to choose a level of isolation for their reads. The default, MONGOC_READ_CONCERN_LEVEL_LOCAL, is right for the great majority of applications.

You can specify a read concern on connection objects, database objects, or collection objects.

See readConcern on the MongoDB website for more information.

Read Concern is only sent to MongoDB when it has explicitly been set by mongoc_read_concern_set_level() to anything other than NULL.
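
For example, a minimal sketch (assuming collection is a valid mongoc_collection_t *) that sets a "majority" read concern on a collection:

mongoc_read_concern_t *rc;

rc = mongoc_read_concern_new ();
mongoc_read_concern_set_level (rc, MONGOC_READ_CONCERN_LEVEL_MAJORITY);

/* the collection copies the read concern; free our copy afterwards */
mongoc_collection_set_read_concern (collection, rc);
mongoc_read_concern_destroy (rc);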

Read Concern Levels

Macro Description First MongoDB version
MONGOC_READ_CONCERN_LEVEL_LOCAL Level "local", the default. 3.2
MONGOC_READ_CONCERN_LEVEL_MAJORITY Level "majority". 3.2
MONGOC_READ_CONCERN_LEVEL_LINEARIZABLE Level "linearizable". 3.4
MONGOC_READ_CONCERN_LEVEL_AVAILABLE Level "available". 3.6
MONGOC_READ_CONCERN_LEVEL_SNAPSHOT Level "snapshot". 4.0

For the sake of compatibility with future versions of MongoDB, mongoc_read_concern_set_level() allows any string, not just this list of known read concern levels.

See Read Concern Levels in the MongoDB manual for more information about the individual read concern levels.

mongoc_read_mode_t

Read Preference Modes

Synopsis

typedef enum {
   MONGOC_READ_PRIMARY = (1 << 0),
   MONGOC_READ_SECONDARY = (1 << 1),
   MONGOC_READ_PRIMARY_PREFERRED = (1 << 2) | MONGOC_READ_PRIMARY,
   MONGOC_READ_SECONDARY_PREFERRED = (1 << 2) | MONGOC_READ_SECONDARY,
   MONGOC_READ_NEAREST = (1 << 3) | MONGOC_READ_SECONDARY,
} mongoc_read_mode_t;


Description

This enum describes how reads should be dispatched. The default is MONGOC_READ_PRIMARY.

Please see the MongoDB website for a description of Read Preferences.

mongoc_read_prefs_t

A read preference abstraction

Synopsis

mongoc_read_prefs_t provides an abstraction on top of the MongoDB connection read preferences. It allows for hinting to the driver which nodes in a replica set should be accessed first and how.

You can specify a read preference mode on connection objects, database objects, collection objects, or per-operation. Generally, it makes the most sense to stick with the global default mode, MONGOC_READ_PRIMARY. All of the other modes come with caveats that won't be covered in great detail here.

Read Modes

MONGOC_READ_PRIMARY Default mode. All operations read from the current replica set primary.
MONGOC_READ_SECONDARY All operations read from among the nearest secondary members of the replica set.
MONGOC_READ_PRIMARY_PREFERRED In most situations, operations read from the primary but if it is unavailable, operations read from secondary members.
MONGOC_READ_SECONDARY_PREFERRED In most situations, operations read from among the nearest secondary members, but if no secondaries are available, operations read from the primary.
MONGOC_READ_NEAREST Operations read from among the nearest members of the replica set, irrespective of the member's type.

Tag Sets

Tag sets allow you to specify custom read preferences and write concerns so that your application can target operations to specific members.

Custom read preferences and write concerns evaluate tag sets in different ways: read preferences consider the value of a tag when selecting a member to read from, while write concerns ignore the value of a tag when selecting a member, except to consider whether or not the value is unique.

You can specify tag sets with the following read preference modes:

  • primaryPreferred
  • secondary
  • secondaryPreferred
  • nearest

Tags are not compatible with MONGOC_READ_PRIMARY and, in general, only apply when selecting a secondary member of a set for a read operation. However, the nearest read mode, when combined with a tag set, will select the nearest member that matches the specified tag set, which may be a primary or secondary.

Tag sets are represented as a comma-separated list of colon-separated key-value pairs when provided as a connection string, e.g. dc:ny,rack:1.

To specify a list of tag sets, use multiple readPreferenceTags, e.g.

readPreferenceTags=dc:ny,rack:1;readPreferenceTags=dc:ny;readPreferenceTags=


Note the empty value for the last one, which means "match any secondary as a last resort".

Order matters when using multiple readPreferenceTags.

Tag Sets can also be configured using mongoc_read_prefs_set_tags().

All interfaces use the same member selection logic to choose the member to which to direct read operations, basing the choice on read preference mode and tag sets.
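
A minimal sketch (tag names and values are illustrative) of configuring the same tag sets programmatically, where the tags argument is a BSON array of documents:

bson_t *tags;
mongoc_read_prefs_t *prefs;

prefs = mongoc_read_prefs_new (MONGOC_READ_NEAREST);

/* roughly equivalent to readPreferenceTags=dc:ny,rack:1;readPreferenceTags=dc:ny */
tags = BCON_NEW ("0", "{", "dc", BCON_UTF8 ("ny"), "rack", BCON_UTF8 ("1"), "}",
                 "1", "{", "dc", BCON_UTF8 ("ny"), "}");

mongoc_read_prefs_set_tags (prefs, tags);

bson_destroy (tags);
mongoc_read_prefs_destroy (prefs);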

Max Staleness

When connected to a replica set running MongoDB 3.4 or later, the driver estimates the staleness of each secondary based on lastWriteDate values provided in server hello responses.

Max Staleness is the maximum replication lag in seconds (wall clock time) that a secondary can suffer and still be eligible for reads. The default is MONGOC_NO_MAX_STALENESS, which disables staleness checks. Otherwise, it must be a positive integer at least MONGOC_SMALLEST_MAX_STALENESS_SECONDS (90 seconds).

Max Staleness is also supported by sharded clusters of replica sets if all servers run MongoDB 3.4 or later.
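
For example, a minimal sketch (the 120-second value is arbitrary) that allows reads from secondaries no more than two minutes behind:

mongoc_read_prefs_t *prefs;

prefs = mongoc_read_prefs_new (MONGOC_READ_SECONDARY);

/* allow secondaries that are at most 120 seconds behind */
mongoc_read_prefs_set_max_staleness_seconds (prefs, 120);

/* ... use prefs with a read operation ... */

mongoc_read_prefs_destroy (prefs);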

Hedged Reads

When connecting to a sharded cluster running MongoDB 4.4 or later, reads can be sent in parallel to the two "best" hosts. Once one result returns, any other outstanding operations that were part of the hedged read are cancelled.

When the read preference mode is MONGOC_READ_NEAREST and the sharded cluster is running MongoDB 4.4 or later, hedged reads are enabled by default. Additionally, hedged reads may be explicitly enabled or disabled by calling mongoc_read_prefs_set_hedge() with a BSON document, e.g.

{ enabled: true }


Appropriate values for the enabled key are true or false.
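
A minimal sketch of enabling hedged reads on a read preference:

bson_t *hedge;
mongoc_read_prefs_t *prefs;

prefs = mongoc_read_prefs_new (MONGOC_READ_NEAREST);
hedge = BCON_NEW ("enabled", BCON_BOOL (true));

mongoc_read_prefs_set_hedge (prefs, hedge);

bson_destroy (hedge);
mongoc_read_prefs_destroy (prefs);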

mongoc_remove_flags_t

Flags for deletion operations

Synopsis

typedef enum {
   MONGOC_REMOVE_NONE = 0,
   MONGOC_REMOVE_SINGLE_REMOVE = 1 << 0,
} mongoc_remove_flags_t;


Description

These flags correspond to the MongoDB wire protocol. They may be bitwise or'd together. They may change the number of documents that are removed during a remove command.

Flag Values

MONGOC_REMOVE_NONE Specify no removal flags. All matching documents will be removed.
MONGOC_REMOVE_SINGLE_REMOVE Only remove the first matching document from the selector.

mongoc_reply_flags_t

Flags from server replies

Synopsis

typedef enum {
   MONGOC_REPLY_NONE = 0,
   MONGOC_REPLY_CURSOR_NOT_FOUND = 1 << 0,
   MONGOC_REPLY_QUERY_FAILURE = 1 << 1,
   MONGOC_REPLY_SHARD_CONFIG_STALE = 1 << 2,
   MONGOC_REPLY_AWAIT_CAPABLE = 1 << 3,
} mongoc_reply_flags_t;


Description

These flags correspond to the wire protocol. They may be bitwise or'd together.

Flag Values

MONGOC_REPLY_NONE No flags set.
MONGOC_REPLY_CURSOR_NOT_FOUND No matching cursor was found on the server.
MONGOC_REPLY_QUERY_FAILURE The query failed or was invalid. Error document has been provided.
MONGOC_REPLY_SHARD_CONFIG_STALE Shard config is stale.
MONGOC_REPLY_AWAIT_CAPABLE If the returned cursor is capable of MONGOC_QUERY_AWAIT_DATA.

mongoc_server_api_t

A versioned API to use for connections.

Synopsis

Used to specify which version of the MongoDB server's API to use for driver connections.

The server API type takes a mongoc_server_api_version_t. It can optionally be strict about the list of allowed commands in that API version, and can also optionally provide errors for deprecated commands in that API version.

A mongoc_server_api_t can be set on a client, and will then be sent to MongoDB for most commands run using that client.
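
For example, a minimal sketch (error handling abbreviated; the URI is a placeholder) of declaring API version 1 on a client:

mongoc_client_t *client;
mongoc_server_api_t *api;
bson_error_t error;

client = mongoc_client_new ("mongodb://127.0.0.1:27017");
api = mongoc_server_api_new (MONGOC_SERVER_API_V1);

/* declare the server API version before using the client */
if (!mongoc_client_set_server_api (client, api, &error)) {
   fprintf (stderr, "%s\n", error.message);
}

mongoc_server_api_destroy (api);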

mongoc_server_api_version_t

A representation of server API version numbers.

Synopsis

Used to specify which version of the MongoDB server's API to use for driver connections.

Supported API Versions

The driver currently supports the following MongoDB API versions:

Enum value MongoDB version string
MONGOC_SERVER_API_V1 "1"

mongoc_server_description_t

Server description

Synopsis

#include <mongoc/mongoc.h>
typedef struct _mongoc_server_description_t mongoc_server_description_t


mongoc_server_description_t holds information about a mongod or mongos the driver is connected to.

Lifecycle

Clean up a mongoc_server_description_t with mongoc_server_description_destroy() when necessary.

Applications receive a temporary reference to a mongoc_server_description_t as a parameter to an SDAM Monitoring callback that must not be destroyed. See Introduction to Application Performance Monitoring.

SEE ALSO:

mongoc_client_get_server_descriptions().



mongoc_session_opt_t

#include <mongoc/mongoc.h>
typedef struct _mongoc_session_opt_t mongoc_session_opt_t;


Synopsis

Start a session with mongoc_client_start_session(), use the session for a sequence of operations and multi-document transactions, then free it with mongoc_client_session_destroy(). Any mongoc_cursor_t or mongoc_change_stream_t using a session must be destroyed before the session, and a session must be destroyed before the mongoc_client_t it came from.

By default, sessions are causally consistent. To disable causal consistency, before starting a session create a mongoc_session_opt_t with mongoc_session_opts_new() and call mongoc_session_opts_set_causal_consistency(), then free the struct with mongoc_session_opts_destroy().
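
A minimal sketch of those steps, assuming client is a connected mongoc_client_t *:

mongoc_session_opt_t *opts;
mongoc_client_session_t *session;
bson_error_t error;

opts = mongoc_session_opts_new ();
mongoc_session_opts_set_causal_consistency (opts, false);

session = mongoc_client_start_session (client, opts, &error);
mongoc_session_opts_destroy (opts);

if (!session) {
   fprintf (stderr, "Failed to start session: %s\n", error.message);
}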

Unacknowledged writes are prohibited with sessions.

A mongoc_client_session_t must be used by only one thread at a time. Due to session pooling, mongoc_client_start_session() may return a session that has been idle for some time and is about to be closed after its idle timeout. Use the session within one minute of acquiring it to refresh the session and avoid a timeout.

See the example code for mongoc_session_opts_set_causal_consistency().

mongoc_socket_t

Portable socket abstraction

Synopsis

#include <mongoc/mongoc.h>
typedef struct _mongoc_socket_t mongoc_socket_t


Synopsis

This structure provides a socket abstraction that is friendlier for portability than BSD sockets directly. Inconsistencies between Linux, various BSDs, Solaris, and Windows are handled here.

mongoc_ssl_opt_t

Synopsis

typedef struct {
   const char *pem_file;
   const char *pem_pwd;
   const char *ca_file;
   const char *ca_dir;
   const char *crl_file;
   bool weak_cert_validation;
   bool allow_invalid_hostname;
   void *internal;
   void *padding[6];
} mongoc_ssl_opt_t;


Description

This structure is used to set the TLS options for a mongoc_client_t or mongoc_client_pool_t.

Beginning in version 1.2.0, once a pool or client has any TLS options set, all connections use TLS, even if ssl=true is omitted from the MongoDB URI. Before, TLS options were ignored unless tls=true was included in the URI.

As of 1.4.0, mongoc_client_pool_set_ssl_opts() and mongoc_client_set_ssl_opts() not only shallow-copy the struct, but also copy the const char * fields. It is therefore no longer necessary to ensure those values remain valid after setting them.
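
A minimal sketch, assuming client is a valid mongoc_client_t * and with placeholder file paths:

mongoc_ssl_opt_t ssl_opts = {0};

/* placeholder paths */
ssl_opts.pem_file = "/path/to/client.pem";
ssl_opts.ca_file = "/path/to/ca.pem";

/* the client copies the struct and its strings */
mongoc_client_set_ssl_opts (client, &ssl_opts);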

SEE ALSO:

Configuring TLS

mongoc_client_set_ssl_opts()

mongoc_client_pool_set_ssl_opts()



mongoc_stream_buffered_t

Synopsis

typedef struct _mongoc_stream_buffered_t mongoc_stream_buffered_t;


Description

mongoc_stream_buffered_t should be considered a subclass of mongoc_stream_t. It performs buffering on an underlying stream.

SEE ALSO:

mongoc_stream_buffered_new()

mongoc_stream_destroy()



mongoc_stream_file_t

Synopsis

typedef struct _mongoc_stream_file_t mongoc_stream_file_t


mongoc_stream_file_t is a mongoc_stream_t subclass for working with standard UNIX style file-descriptors.

mongoc_stream_socket_t

Synopsis

typedef struct _mongoc_stream_socket_t mongoc_stream_socket_t


mongoc_stream_socket_t should be considered a subclass of mongoc_stream_t that works upon socket streams.

mongoc_stream_t

Synopsis

typedef struct _mongoc_stream_t mongoc_stream_t


mongoc_stream_t provides a generic streaming IO abstraction based on a struct of pointers interface. The idea is to allow wrappers, perhaps other language drivers, to easily shim their IO system on top of mongoc_stream_t.

The API for the stream abstraction is currently private and non-extensible.

Stream Types

There are a number of built in stream types that come with mongoc. The default configuration is a buffered unix stream. If TLS is in use, that in turn is wrapped in a tls stream.

SEE ALSO:

mongoc_stream_buffered_t

mongoc_stream_file_t

mongoc_stream_socket_t

mongoc_stream_tls_t



mongoc_stream_tls_t

Synopsis

typedef struct _mongoc_stream_tls_t mongoc_stream_tls_t


mongoc_stream_tls_t is a mongoc_stream_t subclass for working with TLS streams.

mongoc_topology_description_t

Status of MongoDB Servers

Synopsis

typedef struct _mongoc_topology_description_t mongoc_topology_description_t;


mongoc_topology_description_t is an opaque type representing the driver's knowledge of the MongoDB server or servers it is connected to. Its API conforms to the SDAM Monitoring Specification.

Applications receive a temporary reference to a mongoc_topology_description_t as a parameter to an SDAM Monitoring callback that must not be destroyed. See Introduction to Application Performance Monitoring.

mongoc_transaction_opt_t

#include <mongoc/mongoc.h>
typedef struct _mongoc_transaction_opt_t mongoc_transaction_opt_t;


Synopsis

Options for starting a multi-document transaction.

When a session is first created with mongoc_client_start_session(), it inherits from the client the read concern, write concern, and read preference with which to start transactions. Each of these fields can be overridden independently. Create a mongoc_transaction_opt_t with mongoc_transaction_opts_new(), and pass a non-NULL option to any of the mongoc_transaction_opt_t setter functions:

  • mongoc_transaction_opts_set_read_concern()
  • mongoc_transaction_opts_set_write_concern()
  • mongoc_transaction_opts_set_read_prefs()

Pass the resulting transaction options to mongoc_client_session_start_transaction(). Each field set in the transaction options overrides the inherited client configuration.

Example

example-transaction.c

/* gcc example-transaction.c -o example-transaction \
 * $(pkg-config --cflags --libs libmongoc-1.0) */

/* ./example-transaction [CONNECTION_STRING] */

#include <stdio.h>
#include <mongoc/mongoc.h>

int
main (int argc, char *argv[])
{
int exit_code = EXIT_FAILURE;
mongoc_client_t *client = NULL;
mongoc_database_t *database = NULL;
mongoc_collection_t *collection = NULL;
mongoc_client_session_t *session = NULL;
mongoc_session_opt_t *session_opts = NULL;
mongoc_transaction_opt_t *default_txn_opts = NULL;
mongoc_transaction_opt_t *txn_opts = NULL;
mongoc_read_concern_t *read_concern = NULL;
mongoc_write_concern_t *write_concern = NULL;
const char *uri_string = "mongodb://127.0.0.1/?appname=transaction-example";
mongoc_uri_t *uri;
bson_error_t error;
bson_t *doc = NULL;
bson_t *insert_opts = NULL;
int32_t i;
int64_t start;
bson_t reply = BSON_INITIALIZER;
char *reply_json;
bool r;
mongoc_init ();
if (argc > 1) {
uri_string = argv[1];
}
uri = mongoc_uri_new_with_error (uri_string, &error);
if (!uri) {
MONGOC_ERROR ("failed to parse URI: %s\n"
"error message: %s\n",
uri_string,
error.message);
goto done;
}
client = mongoc_client_new_from_uri (uri);
if (!client) {
goto done;
}
mongoc_client_set_error_api (client, 2);
database = mongoc_client_get_database (client, "example-transaction");
/* inserting into a nonexistent collection normally creates it, but a
* collection can't be created in a transaction; create it now */
collection =
mongoc_database_create_collection (database, "collection", NULL, &error);
if (!collection) {
/* code 48 is NamespaceExists, see error_codes.err in mongodb source */
if (error.code == 48) {
collection = mongoc_database_get_collection (database, "collection");
} else {
MONGOC_ERROR ("Failed to create collection: %s", error.message);
goto done;
}
}
/* a transaction's read preferences, read concern, and write concern can be
* set on the client, on the default transaction options, or when starting
* the transaction. for the sake of this example, set read concern on the
* default transaction options. */
default_txn_opts = mongoc_transaction_opts_new ();
read_concern = mongoc_read_concern_new ();
mongoc_read_concern_set_level (read_concern, "snapshot");
mongoc_transaction_opts_set_read_concern (default_txn_opts, read_concern);
session_opts = mongoc_session_opts_new ();
mongoc_session_opts_set_default_transaction_opts (session_opts,
default_txn_opts);
session = mongoc_client_start_session (client, session_opts, &error);
if (!session) {
MONGOC_ERROR ("Failed to start session: %s", error.message);
goto done;
}
/* in this example, set write concern when starting the transaction */
txn_opts = mongoc_transaction_opts_new ();
write_concern = mongoc_write_concern_new ();
mongoc_write_concern_set_wmajority (write_concern, 1000 /* wtimeout */);
mongoc_transaction_opts_set_write_concern (txn_opts, write_concern);
insert_opts = bson_new ();
if (!mongoc_client_session_append (session, insert_opts, &error)) {
MONGOC_ERROR ("Could not add session to opts: %s", error.message);
goto done;
}

retry_transaction:
r = mongoc_client_session_start_transaction (session, txn_opts, &error);
if (!r) {
MONGOC_ERROR ("Failed to start transaction: %s", error.message);
goto done;
}
/* insert two documents - on error, retry the whole transaction */
for (i = 0; i < 2; i++) {
doc = BCON_NEW ("_id", BCON_INT32 (i));
bson_destroy (&reply);
r = mongoc_collection_insert_one (
collection, doc, insert_opts, &reply, &error);
bson_destroy (doc);
if (!r) {
MONGOC_ERROR ("Insert failed: %s", error.message);
mongoc_client_session_abort_transaction (session, NULL);
/* a network error, primary failover, or other temporary error in a
* transaction includes {"errorLabels": ["TransientTransactionError"]},
* meaning that trying the entire transaction again may succeed
*/
if (mongoc_error_has_label (&reply, "TransientTransactionError")) {
goto retry_transaction;
}
goto done;
}
reply_json = bson_as_json (&reply, NULL);
printf ("%s\n", reply_json);
bson_free (reply_json);
}
/* in case of transient errors, retry for 5 seconds to commit transaction */
start = bson_get_monotonic_time ();
while (bson_get_monotonic_time () - start < 5 * 1000 * 1000) {
bson_destroy (&reply);
r = mongoc_client_session_commit_transaction (session, &reply, &error);
if (r) {
/* success */
break;
} else {
MONGOC_ERROR ("Warning: commit failed: %s", error.message);
if (mongoc_error_has_label (&reply, "TransientTransactionError")) {
goto retry_transaction;
} else if (mongoc_error_has_label (&reply,
"UnknownTransactionCommitResult")) {
/* try again to commit */
continue;
}
/* unrecoverable error trying to commit */
break;
}
}
exit_code = EXIT_SUCCESS;

done:
bson_destroy (&reply);
bson_destroy (insert_opts);
mongoc_write_concern_destroy (write_concern);
mongoc_read_concern_destroy (read_concern);
mongoc_transaction_opts_destroy (txn_opts);
mongoc_transaction_opts_destroy (default_txn_opts);
mongoc_client_session_destroy (session);
mongoc_collection_destroy (collection);
mongoc_database_destroy (database);
mongoc_uri_destroy (uri);
mongoc_client_destroy (client);
mongoc_cleanup ();
return exit_code;
}


mongoc_transaction_state_t

Constants for transaction states

Synopsis

typedef enum {
   MONGOC_TRANSACTION_NONE = 0,
   MONGOC_TRANSACTION_STARTING = 1,
   MONGOC_TRANSACTION_IN_PROGRESS = 2,
   MONGOC_TRANSACTION_COMMITTED = 3,
   MONGOC_TRANSACTION_ABORTED = 4,
} mongoc_transaction_state_t;


Description

These constants describe the current transaction state of a session.

Flag Values

MONGOC_TRANSACTION_NONE There is no transaction in progress.
MONGOC_TRANSACTION_STARTING A transaction has been started, but no operation has been sent to the server.
MONGOC_TRANSACTION_IN_PROGRESS A transaction is in progress.
MONGOC_TRANSACTION_COMMITTED The transaction was committed.
MONGOC_TRANSACTION_ABORTED The transaction was aborted.
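
These states can be inspected at runtime. A minimal sketch, assuming session is a valid mongoc_client_session_t * and a driver version that provides mongoc_client_session_get_transaction_state():

if (mongoc_client_session_get_transaction_state (session) ==
    MONGOC_TRANSACTION_IN_PROGRESS) {
   /* operations sent with this session run inside the transaction */
}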

mongoc_update_flags_t

Flags for update operations

Synopsis

typedef enum {
   MONGOC_UPDATE_NONE = 0,
   MONGOC_UPDATE_UPSERT = 1 << 0,
   MONGOC_UPDATE_MULTI_UPDATE = 1 << 1,
} mongoc_update_flags_t;

#define MONGOC_UPDATE_NO_VALIDATE (1U << 31)


Description

These flags correspond to the MongoDB wire protocol. They may be bitwise or'd together. They allow for modifying the way an update is performed in the MongoDB server.

Flag Values

MONGOC_UPDATE_NONE No update flags set.
MONGOC_UPDATE_UPSERT If an upsert should be performed.
MONGOC_UPDATE_MULTI_UPDATE If more than a single matching document should be updated. By default only the first document is updated.
MONGOC_UPDATE_NO_VALIDATE Do not perform client side BSON validations when performing an update. This is useful if you already know your BSON documents are valid.

mongoc_uri_t

Synopsis

typedef struct _mongoc_uri_t mongoc_uri_t;


Description

mongoc_uri_t provides an abstraction on top of the MongoDB connection URI format. It provides standardized parsing as well as convenience methods for extracting useful information such as replica hosts or authorization information.

See Connection String URI Reference on the MongoDB website for more information.

Format

mongodb[+srv]://                             <1>

[username:password@] <2>
host1 <3>
[:port1] <4>
[,host2[:port2],...[,hostN[:portN]]] <5>
[/[database] <6>
[?options]] <7>


1.
"mongodb" is the specifier of the MongoDB protocol. Use "mongodb+srv" with a single service name in place of "host1" to specify the initial list of servers with an SRV record.
2.
An optional username and password.
3.
The only required part of the uri. This specifies either a hostname, IPv4 address, IPv6 address enclosed in "[" and "]", or UNIX domain socket.
4.
An optional port number. Defaults to :27017.
5.
Extra optional hosts and ports. You would specify multiple hosts, for example, for connections to replica sets.
6.
The name of the database to authenticate if the connection string includes authentication credentials. If /database is not specified and the connection string includes credentials, defaults to the 'admin' database.
7.
Connection specific options.

NOTE:

Option names are case-insensitive. Do not repeat the same option (e.g. "mongodb://localhost/db?opt=value1&OPT=value2") since this may have unexpected results.


The MongoDB C Driver exposes constants for each supported connection option. These constants make it easier to discover connection options, but their string values can be used as well.

For example, the following calls are equal.

uri = mongoc_uri_new ("mongodb://localhost/?" MONGOC_URI_APPNAME "=applicationName");
uri = mongoc_uri_new ("mongodb://localhost/?appname=applicationName");
uri = mongoc_uri_new ("mongodb://localhost/?appName=applicationName");


Replica Set Example

To describe a connection to a replica set named 'test' with the following mongod hosts:

  • db1.example.com on port 27017
  • db2.example.com on port 2500

You would use a connection string that resembles the following.
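
For instance, following the URI format described above (adjust the host names and replica set name to your deployment):

mongodb://db1.example.com,db2.example.com:2500/?replicaSet=test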


SRV Example

If you have configured an SRV record with a name like "_mongodb._tcp.server.example.com" whose records are a list of one or more MongoDB server hostnames, use a connection string like this:
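
For instance, using the service name from the SRV record described above (without the "_mongodb._tcp." prefix):

mongodb+srv://server.example.com/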


The driver prefixes the service name with "_mongodb._tcp.", then performs a DNS SRV query to resolve the service name to one or more hostnames. If this query succeeds, the driver performs a DNS TXT query on the service name (without the "_mongodb._tcp" prefix) for additional URI options configured as TXT records.

On Unix, the MongoDB C Driver relies on libresolv to look up SRV and TXT records. If libresolv is unavailable, then using a "mongodb+srv" URI will cause an error. If your libresolv lacks res_nsearch then the driver will fall back to res_search, which is not thread-safe.

IPv4 and IPv6

If connecting to a hostname that has both IPv4 and IPv6 DNS records, the behavior follows RFC-6555. A connection to the IPv6 address is attempted first. If IPv6 fails, then a connection is attempted to the IPv4 address. If the connection attempt to IPv6 does not complete within 250ms, then IPv4 is tried in parallel. Whichever connection succeeds first cancels the other. The successful DNS result is cached for 10 minutes.

As a consequence, attempts to connect to a mongod only listening on IPv4 may be delayed if there are both A (IPv4) and AAAA (IPv6) DNS records associated with the host.

To avoid a delay, configure hostnames to match the MongoDB configuration. That is, only create an A record if the mongod is only listening on IPv4.

Connection Options

Constant Key Default Description
MONGOC_URI_RETRYREADS retryreads true If "true" and the server is a MongoDB 3.6+ standalone, replica set, or sharded cluster, the driver safely retries a read that failed due to a network error or replica set failover.
MONGOC_URI_RETRYWRITES retrywrites true if driver built w/ TLS If "true" and the server is a MongoDB 3.6+ replica set or sharded cluster, the driver safely retries a write that failed due to a network error or replica set failover. Only inserts, updates of single documents, or deletes of single documents are retried.
MONGOC_URI_APPNAME appname Empty (no appname) The client application name. This value is used by MongoDB when it logs connection information and profile information, such as slow queries.
MONGOC_URI_TLS tls Empty (not set, same as false) {true|false}, indicating if TLS must be used. (See also mongoc_client_set_ssl_opts() and mongoc_client_pool_set_ssl_opts().)
MONGOC_URI_COMPRESSORS compressors Empty (no compressors) Comma separated list of compressors, if any, to use to compress the wire protocol messages. Snappy, zlib, and zstd are optional build time dependencies, and enable the "snappy", "zlib", and "zstd" values respectively.
MONGOC_URI_CONNECTTIMEOUTMS connecttimeoutms 10,000 ms (10 seconds) This setting applies to new server connections. It is also used as the socket timeout for server discovery and monitoring operations.
MONGOC_URI_SOCKETTIMEOUTMS sockettimeoutms 300,000 ms (5 minutes) The time in milliseconds to attempt to send or receive on a socket before the attempt times out.
MONGOC_URI_REPLICASET replicaset Empty (no replicaset) The name of the Replica Set that the driver should connect to.
MONGOC_URI_ZLIBCOMPRESSIONLEVEL zlibcompressionlevel -1 When MONGOC_URI_COMPRESSORS includes "zlib", this option configures the zlib compression level used when the zlib compressor compresses client data.
MONGOC_URI_LOADBALANCED loadbalanced false If true, this indicates the driver is connecting to a MongoDB cluster behind a load balancer.
MONGOC_URI_SRVMAXHOSTS srvmaxhosts 0 If zero, the number of hosts in DNS results is unlimited. If greater than zero, the number of hosts in DNS results is limited to being less than or equal to the given value.

Setting any of the *timeoutMS options above to 0 will be interpreted as "use the default value".

Authentication Options

Constant Key Description
MONGOC_URI_AUTHMECHANISM authmechanism Specifies the mechanism to use when authenticating as the provided user. See Authentication for supported values.
MONGOC_URI_AUTHMECHANISMPROPERTIES authmechanismproperties Certain authentication mechanisms have additional options that can be configured. These options should be provided as comma separated option_key:option_value pair and provided as authMechanismProperties. Specifying the same option_key multiple times has undefined behavior.
MONGOC_URI_AUTHSOURCE authsource The authSource defines the database that should be used to authenticate to. It is unnecessary to provide this option if the database name is the same as the database used in the URI.

Mechanism Properties

Constant Key Description
MONGOC_URI_CANONICALIZEHOSTNAME canonicalizehostname Use the canonical hostname of the service, rather than its configured alias, when authenticating with Cyrus-SASL Kerberos.
MONGOC_URI_GSSAPISERVICENAME gssapiservicename Use alternative service name. The default is mongodb.

TLS Options

Constant Key Description
MONGOC_URI_TLS tls {true|false}, indicating if TLS must be used.
MONGOC_URI_TLSCERTIFICATEKEYFILE tlscertificatekeyfile Path to PEM formatted Private Key, with its Public Certificate concatenated at the end.
MONGOC_URI_TLSCERTIFICATEKEYFILEPASSWORD tlscertificatekeypassword The password, if any, to use to unlock encrypted Private Key.
MONGOC_URI_TLSCAFILE tlscafile One, or a bundle of, Certificate Authorities that should be considered trusted.
MONGOC_URI_TLSALLOWINVALIDCERTIFICATES tlsallowinvalidcertificates Accept and ignore certificate verification errors (e.g. untrusted issuer, expired, etc.)
MONGOC_URI_TLSALLOWINVALIDHOSTNAMES tlsallowinvalidhostnames Ignore hostname verification of the certificate (e.g. Man In The Middle, using valid certificate, but issued for another hostname)
MONGOC_URI_TLSINSECURE tlsinsecure {true|false}, indicating if insecure TLS options should be used. Currently this implies MONGOC_URI_TLSALLOWINVALIDCERTIFICATES and MONGOC_URI_TLSALLOWINVALIDHOSTNAMES.
MONGOC_URI_TLSDISABLECERTIFICATEREVOCATIONCHECK tlsdisablecertificaterevocationcheck {true|false}, indicates if revocation checking (CRL / OCSP) should be disabled.
MONGOC_URI_TLSDISABLEOCSPENDPOINTCHECK tlsdisableocspendpointcheck {true|false}, indicates if OCSP responder endpoints should not be requested when an OCSP response is not stapled.

See Configuring TLS for details about these options and about building libmongoc with TLS support.

Deprecated SSL Options

The following options have been deprecated and may be removed from future releases of libmongoc.

Constant Key Deprecated For Key
MONGOC_URI_SSL ssl MONGOC_URI_TLS tls
MONGOC_URI_SSLCLIENTCERTIFICATEKEYFILE sslclientcertificatekeyfile MONGOC_URI_TLSCERTIFICATEKEYFILE tlscertificatekeyfile
MONGOC_URI_SSLCLIENTCERTIFICATEKEYPASSWORD sslclientcertificatekeypassword MONGOC_URI_TLSCERTIFICATEKEYFILEPASSWORD tlscertificatekeypassword
MONGOC_URI_SSLCERTIFICATEAUTHORITYFILE sslcertificateauthorityfile MONGOC_URI_TLSCAFILE tlscafile
MONGOC_URI_SSLALLOWINVALIDCERTIFICATES sslallowinvalidcertificates MONGOC_URI_TLSALLOWINVALIDCERTIFICATES tlsallowinvalidcertificates
MONGOC_URI_SSLALLOWINVALIDHOSTNAMES sslallowinvalidhostnames MONGOC_URI_TLSALLOWINVALIDHOSTNAMES tlsallowinvalidhostnames

Server Discovery, Monitoring, and Selection Options

Clients in a mongoc_client_pool_t share a topology scanner that runs on a background thread. The thread wakes every heartbeatFrequencyMS (default 10 seconds) to scan all MongoDB servers in parallel. Whenever an application operation requires a server that is not known--for example, if there is no known primary and your application attempts an insert--the thread rescans all servers every half-second. In this situation the pooled client waits up to serverSelectionTimeoutMS (default 30 seconds) for the thread to find a server suitable for the operation, then returns an error with domain MONGOC_ERROR_SERVER_SELECTION.

Technically, the total time an operation may wait while a pooled client scans the topology is controlled both by serverSelectionTimeoutMS and connectTimeoutMS. The longest wait occurs if the last scan begins just at the end of the selection timeout, and a slow or down server requires the full connection timeout before the client gives up.

A non-pooled client is single-threaded. Every heartbeatFrequencyMS, it blocks the next application operation while it does a parallel scan. This scan takes as long as needed to check the slowest server: roughly connectTimeoutMS. Therefore the default heartbeatFrequencyMS for single-threaded clients is greater than for pooled clients: 60 seconds.

By default, single-threaded (non-pooled) clients scan only once when an operation requires a server that is not known. If you attempt an insert and there is no known primary, the client checks all servers once trying to find it, then succeeds or returns an error with domain MONGOC_ERROR_SERVER_SELECTION. But if you set serverSelectionTryOnce to "false", the single-threaded client loops, checking all servers every half-second, until serverSelectionTimeoutMS.

The total time an operation may wait for a single-threaded client to scan the topology is determined by connectTimeoutMS in the try-once case, or serverSelectionTimeoutMS and connectTimeoutMS if serverSelectionTryOnce is set "false".

Constant Key Description
MONGOC_URI_HEARTBEATFREQUENCYMS heartbeatfrequencyms The interval between server monitoring checks. Defaults to 10,000ms (10 seconds) in pooled (multi-threaded) mode, 60,000ms (60 seconds) in non-pooled mode (single-threaded).
MONGOC_URI_SERVERSELECTIONTIMEOUTMS serverselectiontimeoutms A timeout in milliseconds to block for server selection before throwing an exception. The default is 30,000ms (30 seconds).
MONGOC_URI_SERVERSELECTIONTRYONCE serverselectiontryonce If "true", the driver scans the topology exactly once after server selection fails, then either selects a server or returns an error. If it is false, then the driver repeatedly searches for a suitable server for up to serverSelectionTimeoutMS milliseconds (pausing a half second between attempts). The default for serverSelectionTryOnce is "false" for pooled clients, otherwise "true". Pooled clients ignore serverSelectionTryOnce; they signal the thread to rescan the topology every half-second until serverSelectionTimeoutMS expires.
MONGOC_URI_SOCKETCHECKINTERVALMS socketcheckintervalms Only applies to single threaded clients. If a socket has not been used within this time, its connection is checked with a quick "hello" call before it is used again. Defaults to 5,000ms (5 seconds).
MONGOC_URI_DIRECTCONNECTION directconnection If "true", the driver connects to a single server directly and will not monitor additional servers. If "false", the driver connects based on the presence and value of the replicaSet option.

Setting any of the *TimeoutMS options above to 0 will be interpreted as "use the default value".

Connection Pool Options

These options govern the behavior of a mongoc_client_pool_t. They are ignored by a non-pooled mongoc_client_t.

Constant Key Description
MONGOC_URI_MAXPOOLSIZE maxpoolsize The maximum number of clients created by a mongoc_client_pool_t total (both in the pool and checked out). The default value is 100. Once it is reached, mongoc_client_pool_pop() blocks until another thread pushes a client.
MONGOC_URI_MINPOOLSIZE minpoolsize Deprecated. This option's behavior does not match its name, and its actual behavior will likely hurt performance.
MONGOC_URI_MAXIDLETIMEMS maxidletimems Not implemented.
MONGOC_URI_WAITQUEUEMULTIPLE waitqueuemultiple Not implemented.
MONGOC_URI_WAITQUEUETIMEOUTMS waitqueuetimeoutms The maximum time to wait for a client to become available from the pool.

Write Concern Options

Constant Key Description
MONGOC_URI_W w Determines the write concern (guarantee). Valid values:

  • 0 = The driver will not acknowledge write operations but will pass or handle any network and socket errors that it receives to the client. If you disable write concern but enable the getLastError command's w option, w overrides the w option.
  • 1 = Provides basic acknowledgement of write operations. By specifying 1, you require that a standalone mongod instance, or the primary for replica sets, acknowledge all write operations. For drivers released after the default write concern change, this is the default write concern setting.
  • majority = For replica sets, if you specify the special majority value to the w option, write operations will only return successfully after a majority of the configured replica set members have acknowledged the write operation.
  • n = For replica sets, if you specify a number n greater than 1, operations with this write concern return only after n members of the set have acknowledged the write. If you set n to a number that is greater than the number of available set members or members that hold data, MongoDB will wait, potentially indefinitely, for these members to become available.
  • tags = For replica sets, you can specify a tag set to require that all members of the set that have these tags configured return confirmation of the write operation.
MONGOC_URI_WTIMEOUTMS wtimeoutms The time in milliseconds to wait for replication to succeed, as specified in the w option, before timing out. When wtimeoutMS is 0, write operations will never time out.
MONGOC_URI_JOURNAL journal Controls whether write operations will wait until the mongod acknowledges the write operations and commits the data to the on disk journal. Valid values:

  • true = Enables journal commit acknowledgement write concern. Equivalent to specifying the getLastError command with the j option enabled.
  • false = Does not require that mongod commit write operations to the journal before acknowledging the write operation. This is the default option for the journal parameter.

Read Concern Options

Constant Key Description
MONGOC_URI_READCONCERNLEVEL readconcernlevel The level of isolation for read operations. If the level is left unspecified, the server default will be used. See readConcern in the MongoDB Manual for details.

Read Preference Options

When connected to a replica set, the driver chooses which member to query using the read preference:

1.
Choose members whose type matches "readPreference".
2.
From these, if there are any tag sets configured, choose members matching the first tag set. If there are none, fall back to the next tag set and so on, until some members are chosen or the tag sets are exhausted.
3.
From the chosen servers, distribute queries randomly among the servers with the fastest round-trip times. These include the server with the fastest time and any whose round-trip time is no more than "localThresholdMS" slower.

Constant Key Description
MONGOC_URI_READPREFERENCE readpreference Specifies the replica set read preference for this connection. This setting overrides any secondaryOk value. The read preference values are the following:

  • primary (default)
  • primaryPreferred
  • secondary
  • secondaryPreferred
  • nearest
MONGOC_URI_READPREFERENCETAGS readpreferencetags A representation of a tag set. See also Tag Sets.
MONGOC_URI_LOCALTHRESHOLDMS localthresholdms How far to distribute queries, beyond the server with the fastest round-trip time. By default, only servers within 15ms of the fastest round-trip time receive queries.
MONGOC_URI_MAXSTALENESSSECONDS maxstalenessseconds The maximum replication lag, in wall clock time, that a secondary can suffer and still be eligible. The smallest allowed value for maxStalenessSeconds is 90 seconds.

NOTE:

When connecting to more than one mongos, libmongoc's localThresholdMS applies only to the selection of mongos servers. The threshold for selecting among replica set members in shards is controlled by the mongos's localThreshold command line option.


Legacy Options

For historical reasons, the following options are available. They should however not be used.

Constant Key Description
MONGOC_URI_SAFE safe {true|false} Same as w={1|0}

Version Checks

Conditional compilation based on mongoc version

Description

The following preprocessor macros can be used to perform various checks based on the version of the library you are compiling against. This may be useful if you only want to enable a feature on a certain version of the library.

#include <mongoc/mongoc.h>
#define MONGOC_MAJOR_VERSION (x)
#define MONGOC_MINOR_VERSION (y)
#define MONGOC_MICRO_VERSION (z)
#define MONGOC_VERSION_S     "x.y.z"
#define MONGOC_VERSION_HEX   ((1 << 24) | (0 << 16) | (0 << 8) | 0)
#define MONGOC_CHECK_VERSION(major, minor, micro)


Only compile a block on MongoDB C Driver 1.1.0 and newer.

#if MONGOC_CHECK_VERSION(1, 1, 0)
static void
do_something (void)
{
}
#endif


mongoc_write_concern_t

Write Concern abstraction

Synopsis

mongoc_write_concern_t tells the driver what level of acknowledgement to await from the server. The default, MONGOC_WRITE_CONCERN_W_DEFAULT, is right for the great majority of applications.

You can specify a write concern on connection objects, database objects, collection objects, or per-operation. Data-modifying operations typically use the write concern of the object they operate on, and check the server response for a write concern error or write concern timeout. For example, mongoc_collection_drop_index() uses the collection's write concern, and a write concern error or timeout in the response is considered a failure.

Exceptions to this principle are the generic command functions:

  • mongoc_client_command()
  • mongoc_client_command_simple()
  • mongoc_database_command()
  • mongoc_database_command_simple()
  • mongoc_collection_command()
  • mongoc_collection_command_simple()

These generic command functions do not automatically apply a write concern, and they do not check the server response for a write concern error or write concern timeout.

See Write Concern on the MongoDB website for more information.

Write Concern Levels

Set the write concern level with mongoc_write_concern_set_w().

MONGOC_WRITE_CONCERN_W_DEFAULT (1) By default, writes block awaiting acknowledgement from MongoDB. Acknowledged write concern allows clients to catch network, duplicate key, and other errors.
MONGOC_WRITE_CONCERN_W_UNACKNOWLEDGED (0) With this write concern, MongoDB does not acknowledge the receipt of write operations. Unacknowledged is similar to errors ignored; however, mongoc attempts to receive and handle network errors when possible.
MONGOC_WRITE_CONCERN_W_MAJORITY (majority) Block until a write has been propagated to a majority of the nodes in the replica set.
n Block until a write has been propagated to at least n nodes in the replica set.
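
For example, a minimal sketch (assuming collection is a valid mongoc_collection_t *) that requires majority acknowledgement for writes on a collection:

mongoc_write_concern_t *wc;

wc = mongoc_write_concern_new ();
mongoc_write_concern_set_w (wc, MONGOC_WRITE_CONCERN_W_MAJORITY);

/* the collection copies the write concern; free our copy afterwards */
mongoc_collection_set_write_concern (collection, wc);
mongoc_write_concern_destroy (wc);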

Deprecations

The write concern MONGOC_WRITE_CONCERN_W_ERRORS_IGNORED (value -1) is a deprecated synonym for MONGOC_WRITE_CONCERN_W_UNACKNOWLEDGED (value 0), and will be removed in the next major release.

mongoc_write_concern_set_fsync() is deprecated.

Application Performance Monitoring (APM)

The MongoDB C Driver allows you to monitor all the MongoDB operations the driver executes. This event-notification system conforms to two MongoDB driver specs:

  • Command Monitoring: events related to all application operations.
  • SDAM Monitoring: events related to the driver's Server Discovery And Monitoring logic.

To receive notifications, create a mongoc_apm_callbacks_t with mongoc_apm_callbacks_new(), set callbacks on it, then pass it to mongoc_client_set_apm_callbacks() or mongoc_client_pool_set_apm_callbacks().

Command-Monitoring Example

example-command-monitoring.c

/* gcc example-command-monitoring.c -o example-command-monitoring \
 * $(pkg-config --cflags --libs libmongoc-1.0) */

/* ./example-command-monitoring [CONNECTION_STRING] */

#include <mongoc/mongoc.h>
#include <stdio.h>

typedef struct {
int started;
int succeeded;
int failed;
} stats_t;

void
command_started (const mongoc_apm_command_started_t *event)
{
char *s;
s = bson_as_relaxed_extended_json (
mongoc_apm_command_started_get_command (event), NULL);
printf ("Command %s started on %s:\n%s\n\n",
mongoc_apm_command_started_get_command_name (event),
mongoc_apm_command_started_get_host (event)->host,
s);
((stats_t *) mongoc_apm_command_started_get_context (event))->started++;
bson_free (s);
}

void
command_succeeded (const mongoc_apm_command_succeeded_t *event)
{
char *s;
s = bson_as_relaxed_extended_json (
mongoc_apm_command_succeeded_get_reply (event), NULL);
printf ("Command %s succeeded:\n%s\n\n",
mongoc_apm_command_succeeded_get_command_name (event),
s);
((stats_t *) mongoc_apm_command_succeeded_get_context (event))->succeeded++;
bson_free (s);
}

void
command_failed (const mongoc_apm_command_failed_t *event)
{
bson_error_t error;
mongoc_apm_command_failed_get_error (event, &error);
printf ("Command %s failed:\n\"%s\"\n\n",
mongoc_apm_command_failed_get_command_name (event),
error.message);
((stats_t *) mongoc_apm_command_failed_get_context (event))->failed++;
}

int
main (int argc, char *argv[])
{
mongoc_client_t *client;
mongoc_apm_callbacks_t *callbacks;
stats_t stats = {0};
mongoc_collection_t *collection;
bson_error_t error;
const char *uri_string =
"mongodb://127.0.0.1/?appname=cmd-monitoring-example";
mongoc_uri_t *uri;
const char *collection_name = "test";
bson_t *docs[2];
mongoc_init ();
if (argc > 1) {
uri_string = argv[1];
}
uri = mongoc_uri_new_with_error (uri_string, &error);
if (!uri) {
fprintf (stderr,
"failed to parse URI: %s\n"
"error message: %s\n",
uri_string,
error.message);
return EXIT_FAILURE;
}
client = mongoc_client_new_from_uri (uri);
if (!client) {
return EXIT_FAILURE;
}
mongoc_client_set_error_api (client, 2);
callbacks = mongoc_apm_callbacks_new ();
mongoc_apm_set_command_started_cb (callbacks, command_started);
mongoc_apm_set_command_succeeded_cb (callbacks, command_succeeded);
mongoc_apm_set_command_failed_cb (callbacks, command_failed);
mongoc_client_set_apm_callbacks (
client, callbacks, (void *) &stats /* context pointer */);
collection = mongoc_client_get_collection (client, "test", collection_name);
mongoc_collection_drop (collection, NULL);
docs[0] = BCON_NEW ("_id", BCON_INT32 (0));
docs[1] = BCON_NEW ("_id", BCON_INT32 (1));
mongoc_collection_insert_many (
collection, (const bson_t **) docs, 2, NULL, NULL, NULL);
/* duplicate key error on the second insert */
mongoc_collection_insert_one (collection, docs[0], NULL, NULL, NULL);
mongoc_collection_destroy (collection);
mongoc_apm_callbacks_destroy (callbacks);
mongoc_uri_destroy (uri);
mongoc_client_destroy (client);
printf ("started: %d\nsucceeded: %d\nfailed: %d\n",
stats.started,
stats.succeeded,
stats.failed);
bson_destroy (docs[0]);
bson_destroy (docs[1]);
mongoc_cleanup ();
return EXIT_SUCCESS;
}


This example program prints:

Command drop started on 127.0.0.1:
{ "drop" : "test" }

Command drop succeeded:
{ "ns" : "test.test", "nIndexesWas" : 1, "ok" : 1.0 }

Command insert started on 127.0.0.1:
{
  "insert" : "test",
  "ordered" : true,
  "documents" : [
    { "_id" : 0 }, { "_id" : 1 }
  ]
}

Command insert succeeded:
{ "n" : 2, "ok" : 1.0 }

Command insert started on 127.0.0.1:
{
  "insert" : "test",
  "ordered" : true,
  "documents" : [
    { "_id" : 0 }
  ]
}

Command insert succeeded:
{
  "n" : 0,
  "writeErrors" : [
    { "index" : 0, "code" : 11000, "errmsg" : "duplicate key" }
  ],
  "ok" : 1.0
}

started: 3
succeeded: 3
failed: 0


The output has been edited and formatted for clarity. Depending on your server configuration, messages may include metadata like database name, logical session ids, or cluster times that are not shown here.

The final "insert" command is considered successful, despite the writeError, because the server replied to the overall command with "ok": 1.

SDAM Monitoring Example

example-sdam-monitoring.c

/* gcc example-sdam-monitoring.c -o example-sdam-monitoring \
 *    $(pkg-config --cflags --libs libmongoc-1.0) */

/* ./example-sdam-monitoring [CONNECTION_STRING] */

#include <mongoc/mongoc.h>
#include <stdio.h>

typedef struct {
   int server_changed_events;
   int server_opening_events;
   int server_closed_events;
   int topology_changed_events;
   int topology_opening_events;
   int topology_closed_events;
   int heartbeat_started_events;
   int heartbeat_succeeded_events;
   int heartbeat_failed_events;
} stats_t;

static void
server_changed (const mongoc_apm_server_changed_t *event)
{
   stats_t *context;
   const mongoc_server_description_t *prev_sd, *new_sd;

   context = (stats_t *) mongoc_apm_server_changed_get_context (event);
   context->server_changed_events++;

   prev_sd = mongoc_apm_server_changed_get_previous_description (event);
   new_sd = mongoc_apm_server_changed_get_new_description (event);

   printf ("server changed: %s %s -> %s\n",
           mongoc_apm_server_changed_get_host (event)->host_and_port,
           mongoc_server_description_type (prev_sd),
           mongoc_server_description_type (new_sd));
}

static void
server_opening (const mongoc_apm_server_opening_t *event)
{
   stats_t *context;

   context = (stats_t *) mongoc_apm_server_opening_get_context (event);
   context->server_opening_events++;

   printf ("server opening: %s\n",
           mongoc_apm_server_opening_get_host (event)->host_and_port);
}

static void
server_closed (const mongoc_apm_server_closed_t *event)
{
   stats_t *context;

   context = (stats_t *) mongoc_apm_server_closed_get_context (event);
   context->server_closed_events++;

   printf ("server closed: %s\n",
           mongoc_apm_server_closed_get_host (event)->host_and_port);
}

static void
topology_changed (const mongoc_apm_topology_changed_t *event)
{
   stats_t *context;
   const mongoc_topology_description_t *prev_td;
   const mongoc_topology_description_t *new_td;
   mongoc_server_description_t **prev_sds;
   size_t n_prev_sds;
   mongoc_server_description_t **new_sds;
   size_t n_new_sds;
   size_t i;
   mongoc_read_prefs_t *prefs;

   context = (stats_t *) mongoc_apm_topology_changed_get_context (event);
   context->topology_changed_events++;

   prev_td = mongoc_apm_topology_changed_get_previous_description (event);
   prev_sds = mongoc_topology_description_get_servers (prev_td, &n_prev_sds);
   new_td = mongoc_apm_topology_changed_get_new_description (event);
   new_sds = mongoc_topology_description_get_servers (new_td, &n_new_sds);

   printf ("topology changed: %s -> %s\n",
           mongoc_topology_description_type (prev_td),
           mongoc_topology_description_type (new_td));

   if (n_prev_sds) {
      printf ("  previous servers:\n");
      for (i = 0; i < n_prev_sds; i++) {
         printf ("      %s %s\n",
                 mongoc_server_description_type (prev_sds[i]),
                 mongoc_server_description_host (prev_sds[i])->host_and_port);
      }
   }

   if (n_new_sds) {
      printf ("  new servers:\n");
      for (i = 0; i < n_new_sds; i++) {
         printf ("      %s %s\n",
                 mongoc_server_description_type (new_sds[i]),
                 mongoc_server_description_host (new_sds[i])->host_and_port);
      }
   }

   prefs = mongoc_read_prefs_new (MONGOC_READ_SECONDARY);

   /* it is safe, and unfortunately necessary, to cast away const here */
   if (mongoc_topology_description_has_readable_server (
          (mongoc_topology_description_t *) new_td, prefs)) {
      printf ("  secondary AVAILABLE\n");
   } else {
      printf ("  secondary UNAVAILABLE\n");
   }

   if (mongoc_topology_description_has_writable_server (
          (mongoc_topology_description_t *) new_td)) {
      printf ("  primary AVAILABLE\n");
   } else {
      printf ("  primary UNAVAILABLE\n");
   }

   mongoc_read_prefs_destroy (prefs);
   mongoc_server_descriptions_destroy_all (prev_sds, n_prev_sds);
   mongoc_server_descriptions_destroy_all (new_sds, n_new_sds);
}

static void
topology_opening (const mongoc_apm_topology_opening_t *event)
{
   stats_t *context;

   context = (stats_t *) mongoc_apm_topology_opening_get_context (event);
   context->topology_opening_events++;

   printf ("topology opening\n");
}

static void
topology_closed (const mongoc_apm_topology_closed_t *event)
{
   stats_t *context;

   context = (stats_t *) mongoc_apm_topology_closed_get_context (event);
   context->topology_closed_events++;

   printf ("topology closed\n");
}

static void
server_heartbeat_started (const mongoc_apm_server_heartbeat_started_t *event)
{
   stats_t *context;

   context =
      (stats_t *) mongoc_apm_server_heartbeat_started_get_context (event);
   context->heartbeat_started_events++;

   printf ("%s heartbeat started\n",
           mongoc_apm_server_heartbeat_started_get_host (event)->host_and_port);
}

static void
server_heartbeat_succeeded (
   const mongoc_apm_server_heartbeat_succeeded_t *event)
{
   stats_t *context;
   char *reply;

   context =
      (stats_t *) mongoc_apm_server_heartbeat_succeeded_get_context (event);
   context->heartbeat_succeeded_events++;

   reply = bson_as_canonical_extended_json (
      mongoc_apm_server_heartbeat_succeeded_get_reply (event), NULL);
   printf (
      "%s heartbeat succeeded: %s\n",
      mongoc_apm_server_heartbeat_succeeded_get_host (event)->host_and_port,
      reply);

   bson_free (reply);
}

static void
server_heartbeat_failed (const mongoc_apm_server_heartbeat_failed_t *event)
{
   stats_t *context;
   bson_error_t error;

   context = (stats_t *) mongoc_apm_server_heartbeat_failed_get_context (event);
   context->heartbeat_failed_events++;

   mongoc_apm_server_heartbeat_failed_get_error (event, &error);
   printf ("%s heartbeat failed: %s\n",
           mongoc_apm_server_heartbeat_failed_get_host (event)->host_and_port,
           error.message);
}

int
main (int argc, char *argv[])
{
   mongoc_client_t *client;
   mongoc_apm_callbacks_t *cbs;
   stats_t stats = {0};
   const char *uri_string =
      "mongodb://127.0.0.1/?appname=sdam-monitoring-example";
   mongoc_uri_t *uri;
   bson_t cmd = BSON_INITIALIZER;
   bson_t reply;
   bson_error_t error;

   mongoc_init ();

   if (argc > 1) {
      uri_string = argv[1];
   }

   uri = mongoc_uri_new_with_error (uri_string, &error);
   if (!uri) {
      fprintf (stderr,
               "failed to parse URI: %s\n"
               "error message: %s\n",
               uri_string,
               error.message);
      return EXIT_FAILURE;
   }

   client = mongoc_client_new_from_uri (uri);
   if (!client) {
      return EXIT_FAILURE;
   }

   mongoc_client_set_error_api (client, 2);
   cbs = mongoc_apm_callbacks_new ();
   mongoc_apm_set_server_changed_cb (cbs, server_changed);
   mongoc_apm_set_server_opening_cb (cbs, server_opening);
   mongoc_apm_set_server_closed_cb (cbs, server_closed);
   mongoc_apm_set_topology_changed_cb (cbs, topology_changed);
   mongoc_apm_set_topology_opening_cb (cbs, topology_opening);
   mongoc_apm_set_topology_closed_cb (cbs, topology_closed);
   mongoc_apm_set_server_heartbeat_started_cb (cbs, server_heartbeat_started);
   mongoc_apm_set_server_heartbeat_succeeded_cb (cbs,
                                                 server_heartbeat_succeeded);
   mongoc_apm_set_server_heartbeat_failed_cb (cbs, server_heartbeat_failed);
   mongoc_client_set_apm_callbacks (
      client, cbs, (void *) &stats /* context pointer */);

   /* the driver connects on demand to perform first operation */
   BSON_APPEND_INT32 (&cmd, "buildinfo", 1);
   mongoc_client_command_simple (client, "admin", &cmd, NULL, &reply, &error);
   mongoc_uri_destroy (uri);
   mongoc_client_destroy (client);

   printf ("Events:\n"
           "   server changed: %d\n"
           "   server opening: %d\n"
           "   server closed: %d\n"
           "   topology changed: %d\n"
           "   topology opening: %d\n"
           "   topology closed: %d\n"
           "   heartbeat started: %d\n"
           "   heartbeat succeeded: %d\n"
           "   heartbeat failed: %d\n",
           stats.server_changed_events,
           stats.server_opening_events,
           stats.server_closed_events,
           stats.topology_changed_events,
           stats.topology_opening_events,
           stats.topology_closed_events,
           stats.heartbeat_started_events,
           stats.heartbeat_succeeded_events,
           stats.heartbeat_failed_events);

   bson_destroy (&cmd);
   bson_destroy (&reply);
   mongoc_apm_callbacks_destroy (cbs);
   mongoc_cleanup ();

   return EXIT_SUCCESS;
}


Start a 3-node replica set on localhost with set name "rs" and start the program, passing a connection string that names at least two of the members, for example:
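
$ ./example-sdam-monitoring "mongodb://localhost:27017,localhost:27018/?replicaSet=rs"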


This example program prints something like:

topology opening
topology changed: Unknown -> ReplicaSetNoPrimary
  secondary UNAVAILABLE
  primary UNAVAILABLE
server opening: localhost:27017
server opening: localhost:27018
localhost:27017 heartbeat started
localhost:27018 heartbeat started
localhost:27017 heartbeat succeeded: { ... reply ... }
server changed: localhost:27017 Unknown -> RSPrimary
server opening: localhost:27019
topology changed: ReplicaSetNoPrimary -> ReplicaSetWithPrimary
  new servers:
      RSPrimary localhost:27017
  secondary UNAVAILABLE
  primary AVAILABLE
localhost:27019 heartbeat started
localhost:27018 heartbeat succeeded: { ... reply ... }
server changed: localhost:27018 Unknown -> RSSecondary
topology changed: ReplicaSetWithPrimary -> ReplicaSetWithPrimary
  previous servers:
      RSPrimary localhost:27017
  new servers:
      RSPrimary localhost:27017
      RSSecondary localhost:27018
  secondary AVAILABLE
  primary AVAILABLE
localhost:27019 heartbeat succeeded: { ... reply ... }
server changed: localhost:27019 Unknown -> RSSecondary
topology changed: ReplicaSetWithPrimary -> ReplicaSetWithPrimary
  previous servers:
      RSPrimary localhost:27017
      RSSecondary localhost:27018
  new servers:
      RSPrimary localhost:27017
      RSSecondary localhost:27018
      RSSecondary localhost:27019
  secondary AVAILABLE
  primary AVAILABLE
topology closed
Events:
   server changed: 3
   server opening: 3
   server closed: 0
   topology changed: 4
   topology opening: 1
   topology closed: 1
   heartbeat started: 3
   heartbeat succeeded: 3
   heartbeat failed: 0


The driver connects to the mongods on ports 27017 and 27018, which were specified in the URI, and determines which is primary. It also discovers the third member, "localhost:27019", and adds it to the topology.
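
An application can also query the client's current view of the topology directly, outside the monitoring callbacks. The following is a minimal sketch, not part of the example above and reusing its headers, of a helper that lists the servers the client has discovered so far; it assumes the client has already run at least one operation so that server discovery has taken place. The helper name is illustrative only.

/* List the servers in the client's current topology description. */
static void
print_discovered_servers (mongoc_client_t *client)
{
   mongoc_server_description_t **sds;
   size_t n_sds;
   size_t i;

   sds = mongoc_client_get_server_descriptions (client, &n_sds);
   for (i = 0; i < n_sds; i++) {
      printf ("%s %s\n",
              mongoc_server_description_type (sds[i]),
              mongoc_server_description_host (sds[i])->host_and_port);
   }

   mongoc_server_descriptions_destroy_all (sds, n_sds);
}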

AUTHOR

MongoDB, Inc

COPYRIGHT

2017-present, MongoDB, Inc

August 31, 2022 1.23.0