'\" t .\" Man page generated from reStructuredText. . . .nr rst2man-indent-level 0 . .de1 rstReportMargin \\$1 \\n[an-margin] level \\n[rst2man-indent-level] level margin: \\n[rst2man-indent\\n[rst2man-indent-level]] - \\n[rst2man-indent0] \\n[rst2man-indent1] \\n[rst2man-indent2] .. .de1 INDENT .\" .rstReportMargin pre: . RS \\$1 . nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin] . nr rst2man-indent-level +1 .\" .rstReportMargin post: .. .de UNINDENT . RE .\" indent \\n[an-margin] .\" old: \\n[rst2man-indent\\n[rst2man-indent-level]] .nr rst2man-indent-level -1 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]] .in \\n[rst2man-indent\\n[rst2man-indent-level]]u .. .TH "MONGOC_GUIDES" "3" "May 07, 2024" "1.27.1" "libmongoc" .SH CONFIGURING TLS .SS Configuration with URI options .sp Enable TLS by including \fBtls=true\fP in the URI. .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_uri_t *uri = mongoc_uri_new (\(dqmongodb://localhost:27017/\(dq); mongoc_uri_set_option_as_bool (uri, MONGOC_URI_TLS, true); mongoc_client_t *client = mongoc_client_new_from_uri (uri); .EE .UNINDENT .UNINDENT .sp The following URI options may be used to further configure TLS: .TS center; |l|l|l|. _ T{ Constant T} T{ Key T} T{ Description T} _ T{ MONGOC_URI_TLS T} T{ tls T} T{ {true|false}, indicating if TLS must be used. T} _ T{ MONGOC_URI_TLSCERTIFICATEKEYFILE T} T{ tlscertificatekeyfile T} T{ Path to a PEM\-formatted Private Key, with its Public Certificate concatenated at the end. T} _ T{ MONGOC_URI_TLSCERTIFICATEKEYFILEPASSWORD T} T{ tlscertificatekeypassword T} T{ The password, if any, used to unlock the encrypted Private Key. T} _ T{ MONGOC_URI_TLSCAFILE T} T{ tlscafile T} T{ One Certificate Authority, or a bundle of them, that should be considered trusted. T} _ T{ MONGOC_URI_TLSALLOWINVALIDCERTIFICATES T} T{ tlsallowinvalidcertificates T} T{ Accept and ignore certificate verification errors (e.g. untrusted issuer, expired, etc.) 
T} _ T{ MONGOC_URI_TLSALLOWINVALIDHOSTNAMES T} T{ tlsallowinvalidhostnames T} T{ Ignore hostname verification of the certificate (e.g. Man In The Middle, using valid certificate, but issued for another hostname) T} _ T{ MONGOC_URI_TLSINSECURE T} T{ tlsinsecure T} T{ {true|false}, indicating if insecure TLS options should be used. Currently this implies MONGOC_URI_TLSALLOWINVALIDCERTIFICATES and MONGOC_URI_TLSALLOWINVALIDHOSTNAMES. T} _ T{ MONGOC_URI_TLSDISABLECERTIFICATEREVOCATIONCHECK T} T{ tlsdisablecertificaterevocationcheck T} T{ {true|false}, indicates if revocation checking (CRL / OCSP) should be disabled. T} _ T{ MONGOC_URI_TLSDISABLEOCSPENDPOINTCHECK T} T{ tlsdisableocspendpointcheck T} T{ {true|false}, indicates if OCSP responder endpoints should not be requested when an OCSP response is not stapled. T} _ .TE .SS Configuration with mongoc_ssl_opt_t .sp Alternatively, the \fI\%mongoc_ssl_opt_t\fP struct may be used to configure TLS with \fI\%mongoc_client_set_ssl_opts()\fP or \fI\%mongoc_client_pool_set_ssl_opts()\fP\&. Most of the configurable options can be set using the \fI\%Connection String URI\fP\&. .TS center; |l|l|. _ T{ \fBmongoc_ssl_opt_t key\fP T} T{ \fBURI key\fP T} _ T{ pem_file T} T{ tlsClientCertificateKeyFile T} _ T{ pem_pwd T} T{ tlsClientCertificateKeyPassword T} _ T{ ca_file T} T{ tlsCAFile T} _ T{ weak_cert_validation T} T{ tlsAllowInvalidCertificates T} _ T{ allow_invalid_hostname T} T{ tlsAllowInvalidHostnames T} _ .TE .sp The only exclusions are \fBcrl_file\fP and \fBca_dir\fP\&. Those may only be set with \fI\%mongoc_ssl_opt_t\fP\&. .SS Client Authentication .sp When MongoDB is started with TLS enabled, it will by default require the client to provide a client certificate issued by a certificate authority specified by \fB\-\-tlsCAFile\fP, or an authority trusted by the native certificate store in use on the server. .sp To provide the client certificate, set the \fBtlsCertificateKeyFile\fP in the URI to a PEM armored certificate file. 
.INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_uri_t *uri = mongoc_uri_new (\(dqmongodb://localhost:27017/\(dq); mongoc_uri_set_option_as_bool (uri, MONGOC_URI_TLS, true); mongoc_uri_set_option_as_utf8 (uri, MONGOC_URI_TLSCERTIFICATEKEYFILE, \(dq/path/to/client\-certificate.pem\(dq); mongoc_client_t *client = mongoc_client_new_from_uri (uri); .EE .UNINDENT .UNINDENT .SS Server Certificate Verification .sp The MongoDB C Driver will automatically verify the validity of the server certificate, including that it was issued by a configured Certificate Authority, that the hostname matches, and that it has not expired. .sp To override this behavior, it is possible to disable hostname validation, OCSP endpoint revocation checking, revocation checking entirely, and allow invalid certificates. .sp This behavior is controlled using the \fBtlsAllowInvalidHostnames\fP, \fBtlsDisableOCSPEndpointCheck\fP, \fBtlsDisableCertificateRevocationCheck\fP, and \fBtlsAllowInvalidCertificates\fP options respectively. By default, all are set to \fBfalse\fP\&. .sp It is not recommended to change these defaults as it exposes the client to \fIMan In The Middle\fP attacks (when \fBtlsAllowInvalidHostnames\fP is set), invalid certificates (when \fBtlsAllowInvalidCertificates\fP is set), or potentially revoked certificates (when \fBtlsDisableOCSPEndpointCheck\fP or \fBtlsDisableCertificateRevocationCheck\fP are set). .SS Supported Libraries .sp By default, libmongoc will attempt to find a supported TLS library and enable TLS support. This is controlled by the cmake flag \fBENABLE_SSL\fP, which is set to \fBAUTO\fP by default. Valid values are: .INDENT 0.0 .IP \(bu 2 \fBAUTO\fP the default behavior. Link to the system\(aqs native TLS library, or attempt to find OpenSSL. .IP \(bu 2 \fBDARWIN\fP link to Secure Transport, the native TLS library on macOS. .IP \(bu 2 \fBWINDOWS\fP link to Secure Channel, the native TLS library on Windows. .IP \(bu 2 \fBOPENSSL\fP link to OpenSSL (libssl). 
An optional install path may be specified with \fBOPENSSL_ROOT\fP\&. .IP \(bu 2 \fBLIBRESSL\fP link to LibreSSL\(aqs libtls. (LibreSSL\(aqs compatible libssl may be linked to by setting \fBOPENSSL\fP). .IP \(bu 2 \fBOFF\fP disable TLS support. .UNINDENT .SS OpenSSL .sp The MongoDB C Driver uses OpenSSL, if available, on Linux and Unix platforms (besides macOS). Industry best practices and some regulations require the use of TLS 1.1 or newer, which requires at least OpenSSL 1.0.1. Check your OpenSSL version like so: .INDENT 0.0 .INDENT 3.5 .sp .EX $ openssl version .EE .UNINDENT .UNINDENT .sp Ensure your system\(aqs OpenSSL is a recent version (at least 1.0.1), or install a recent version in a non\-system path and build against it with: .INDENT 0.0 .INDENT 3.5 .sp .EX cmake \-DOPENSSL_ROOT_DIR=/absolute/path/to/openssl .EE .UNINDENT .UNINDENT .sp When compiled against OpenSSL, the driver will attempt to load the system default certificate store, as configured by the distribution. That can be overridden by setting the \fBtlsCAFile\fP URI option or with the fields \fBca_file\fP and \fBca_dir\fP in the \fI\%mongoc_ssl_opt_t\fP\&. .sp The Online Certificate Status Protocol (OCSP) (see \fI\%RFC 6960\fP) is fully supported when using OpenSSL 1.0.1+ with the following notes: .INDENT 0.0 .IP \(bu 2 When a \fBcrl_file\fP is set with \fI\%mongoc_ssl_opt_t\fP, and the \fBcrl_file\fP revokes the server\(aqs certificate, the certificate is considered revoked (even if the certificate has a valid stapled OCSP response) .UNINDENT .SS LibreSSL / libtls .sp The MongoDB C Driver supports LibreSSL through the use of OpenSSL compatibility checks when configured to compile against \fBopenssl\fP\&. It also supports the new \fBlibtls\fP library when configured to build against \fBlibressl\fP\&. .sp When compiled against LibreSSL, the \fBcrl_file\fP option of a \fI\%mongoc_ssl_opt_t\fP is not supported, and will issue an error if used. 
.sp Setting \fBtlsDisableOCSPEndpointCheck\fP and \fBtlsDisableCertificateRevocationCheck\fP has no effect. .sp The Online Certificate Status Protocol (OCSP) (see \fI\%RFC 6960\fP) is partially supported with the following notes: .INDENT 0.0 .IP \(bu 2 The Must\-Staple extension (see \fI\%RFC 7633\fP) is ignored. Connection may continue if a Must\-Staple certificate is presented with no stapled response (unless the client receives a revoked response from an OCSP responder). .IP \(bu 2 Connection will continue if a Must\-Staple certificate is presented without a stapled response and the OCSP responder is down. .UNINDENT .SS Native TLS Support on Windows (Secure Channel) .sp The MongoDB C Driver supports the Windows native TLS library (Secure Channel, or SChannel), and its native crypto library (Cryptography API: Next Generation, or CNG). .sp When compiled against the Windows native libraries, the \fBca_dir\fP option of a \fI\%mongoc_ssl_opt_t\fP is not supported, and will issue an error if used. .sp Encrypted PEM files (e.g., setting \fBtlsCertificateKeyPassword\fP) are also not supported, and will result in an error when attempting to load them. .sp When \fBtlsCAFile\fP is set, the driver will only allow server certificates issued by the authority (or authorities) provided. When no \fBtlsCAFile\fP is set, the driver will look up the Certificate Authority using the \fBSystem Local Machine Root\fP certificate store to confirm the provided certificate. .sp When \fBcrl_file\fP is set with \fI\%mongoc_ssl_opt_t\fP, the driver will import the revocation list to the \fBSystem Local Machine Root\fP certificate store. .sp Setting \fBtlsDisableOCSPEndpointCheck\fP has no effect. .sp The Online Certificate Status Protocol (OCSP) (see \fI\%RFC 6960\fP) is partially supported with the following notes: .INDENT 0.0 .IP \(bu 2 The Must\-Staple extension (see \fI\%RFC 7633\fP) is ignored. 
Connection may continue if a Must\-Staple certificate is presented with no stapled response (unless the client receives a revoked response from an OCSP responder). .IP \(bu 2 When a \fBcrl_file\fP is set with \fI\%mongoc_ssl_opt_t\fP, and the \fBcrl_file\fP revokes the server\(aqs certificate, the OCSP response takes precedence. E.g. if the server presents a certificate with a valid stapled OCSP response, the certificate is considered valid even if the \fBcrl_file\fP marks it as revoked. .IP \(bu 2 Connection will continue if a Must\-Staple certificate is presented without a stapled response and the OCSP responder is down. .UNINDENT .SS Native TLS Support on macOS / Darwin (Secure Transport) .sp The MongoDB C Driver supports the Darwin (OS X, macOS, iOS, etc.) native TLS library (Secure Transport), and its native crypto library (Common Crypto, or CC). .sp When compiled against Secure Transport, the \fBca_dir\fP and \fBcrl_file\fP options of a \fI\%mongoc_ssl_opt_t\fP are not supported. An error is issued if either is used. .sp When \fBtlsCAFile\fP is set, the driver will only allow server certificates issued by the authority (or authorities) provided. When no \fBtlsCAFile\fP is set, the driver will use the Certificate Authorities in the currently unlocked keychains. .sp Setting \fBtlsDisableOCSPEndpointCheck\fP and \fBtlsDisableCertificateRevocationCheck\fP has no effect. .sp The Online Certificate Status Protocol (OCSP) (see \fI\%RFC 6960\fP) is partially supported with the following notes: .INDENT 0.0 .IP \(bu 2 The Must\-Staple extension (see \fI\%RFC 7633\fP) is ignored. Connection may continue if a Must\-Staple certificate is presented with no stapled response (unless the client receives a revoked response from an OCSP responder). .IP \(bu 2 Connection will continue if a Must\-Staple certificate is presented without a stapled response and the OCSP responder is down. 
.UNINDENT .SH COMMON TASKS .sp Drivers for some other languages provide helper functions to perform certain common tasks. In the C Driver we must explicitly build commands to send to the server. .SS Setup .sp First we\(aqll write some code to insert sample data: .sp doc\-common\-insert.c .INDENT 0.0 .INDENT 3.5 .sp .EX /* Don\(aqt try to compile this file on its own. It\(aqs meant to be #included by example code */ /* Insert some sample data */ bool insert_data (mongoc_collection_t *collection) { mongoc_bulk_operation_t *bulk; enum N { ndocs = 4 }; bson_t *docs[ndocs]; bson_error_t error; int i = 0; bool ret; bulk = mongoc_collection_create_bulk_operation_with_opts (collection, NULL); docs[0] = BCON_NEW (\(dqx\(dq, BCON_DOUBLE (1.0), \(dqtags\(dq, \(dq[\(dq, \(dqdog\(dq, \(dqcat\(dq, \(dq]\(dq); docs[1] = BCON_NEW (\(dqx\(dq, BCON_DOUBLE (2.0), \(dqtags\(dq, \(dq[\(dq, \(dqcat\(dq, \(dq]\(dq); docs[2] = BCON_NEW (\(dqx\(dq, BCON_DOUBLE (2.0), \(dqtags\(dq, \(dq[\(dq, \(dqmouse\(dq, \(dqcat\(dq, \(dqdog\(dq, \(dq]\(dq); docs[3] = BCON_NEW (\(dqx\(dq, BCON_DOUBLE (3.0), \(dqtags\(dq, \(dq[\(dq, \(dq]\(dq); for (i = 0; i < ndocs; i++) { mongoc_bulk_operation_insert (bulk, docs[i]); bson_destroy (docs[i]); docs[i] = NULL; } ret = mongoc_bulk_operation_execute (bulk, NULL, &error); if (!ret) { fprintf (stderr, \(dqError inserting data: %s\en\(dq, error.message); } mongoc_bulk_operation_destroy (bulk); return ret; } /* A helper which we\(aqll use a lot later on */ void print_res (const bson_t *reply) { char *str; BSON_ASSERT (reply); str = bson_as_canonical_extended_json (reply, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); } .EE .UNINDENT .UNINDENT .SS \(dqexplain\(dq Command .sp This is how to use the \fBexplain\fP command in MongoDB 3.2+: .sp explain.c .INDENT 0.0 .INDENT 3.5 .sp .EX bool explain (mongoc_collection_t *collection) { bson_t *command; bson_t reply; bson_error_t error; bool res; command = BCON_NEW (\(dqexplain\(dq, \(dq{\(dq, \(dqfind\(dq, 
BCON_UTF8 (COLLECTION_NAME), \(dqfilter\(dq, \(dq{\(dq, \(dqx\(dq, BCON_INT32 (1), \(dq}\(dq, \(dq}\(dq); res = mongoc_collection_command_simple (collection, command, NULL, &reply, &error); if (!res) { fprintf (stderr, \(dqError with explain: %s\en\(dq, error.message); goto cleanup; } /* Do something with the reply */ print_res (&reply); cleanup: bson_destroy (&reply); bson_destroy (command); return res; } .EE .UNINDENT .UNINDENT .SS Running the Examples .sp common\-operations.c .INDENT 0.0 .INDENT 3.5 .sp .EX /* * Copyright 2016 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the \(dqLicense\(dq); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE\-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an \(dqAS IS\(dq BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ #include <mongoc/mongoc.h> #include <stdio.h> const char *COLLECTION_NAME = \(dqthings\(dq; #include \(dq../doc\-common\-insert.c\(dq #include \(dqexplain.c\(dq int main (int argc, char *argv[]) { mongoc_database_t *database = NULL; mongoc_client_t *client = NULL; mongoc_collection_t *collection = NULL; mongoc_uri_t *uri = NULL; bson_error_t error; char *host_and_port; int res = 0; if (argc < 2 || argc > 3) { fprintf (stderr, \(dqusage: %s MONGOD\-1\-CONNECTION\-STRING \(dq \(dq[MONGOD\-2\-HOST\-NAME:MONGOD\-2\-PORT]\en\(dq, argv[0]); fprintf (stderr, \(dqMONGOD\-1\-CONNECTION\-STRING can be \(dq \(dqof the following forms:\en\(dq); fprintf (stderr, \(dqlocalhost\et\et\et\etlocal machine\en\(dq); fprintf (stderr, \(dqlocalhost:27018\et\et\et\etlocal machine on port 27018\en\(dq); fprintf (stderr, \(dqmongodb://user:pass@localhost:27017\et\(dq \(dqlocal machine on port 27017, and authenticate with username \(dq \(dquser and password pass\en\(dq); return EXIT_FAILURE; } mongoc_init (); if (strncmp (argv[1], \(dqmongodb://\(dq, 10) == 0) { host_and_port = bson_strdup (argv[1]); } else { host_and_port = bson_strdup_printf (\(dqmongodb://%s\(dq, argv[1]); } uri = mongoc_uri_new_with_error (host_and_port, &error); if (!uri) { fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, host_and_port, error.message); res = EXIT_FAILURE; goto cleanup; } client = mongoc_client_new_from_uri (uri); if (!client) { res = EXIT_FAILURE; goto cleanup; } mongoc_client_set_error_api (client, 2); database = mongoc_client_get_database (client, \(dqtest\(dq); collection = mongoc_database_get_collection (database, COLLECTION_NAME); printf (\(dqInserting data\en\(dq); if (!insert_data (collection)) { res = EXIT_FAILURE; goto cleanup; } printf (\(dqexplain\en\(dq); if (!explain (collection)) { res = EXIT_FAILURE; goto cleanup; } cleanup: if (collection) { mongoc_collection_destroy (collection); } if (database) { mongoc_database_destroy (database); } if (client) { mongoc_client_destroy 
(client); } if (uri) { mongoc_uri_destroy (uri); } bson_free (host_and_port); mongoc_cleanup (); return res; } .EE .UNINDENT .UNINDENT .sp First launch two separate instances of mongod (must be done from separate shells): .INDENT 0.0 .INDENT 3.5 .sp .EX $ mongod .EE .UNINDENT .UNINDENT .INDENT 0.0 .INDENT 3.5 .sp .EX $ mkdir /tmp/db2 $ mongod \-\-dbpath /tmp/db2 \-\-port 27018 # second instance .EE .UNINDENT .UNINDENT .sp Now compile and run the example program: .INDENT 0.0 .INDENT 3.5 .sp .EX $ cd examples/common_operations/$ gcc \-Wall \-o example common\-operations.c $(pkg\-config \-\-cflags \-\-libs libmongoc\-1.0)$ ./example localhost:27017 localhost:27018 Inserting data explain { \(dqexecutionStats\(dq : { \(dqallPlansExecution\(dq : [], \(dqexecutionStages\(dq : { \(dqadvanced\(dq : 19, \(dqdirection\(dq : \(dqforward\(dq , \(dqdocsExamined\(dq : 76, \(dqexecutionTimeMillisEstimate\(dq : 0, \(dqfilter\(dq : { \(dqx\(dq : { \(dq$eq\(dq : 1 } }, \(dqinvalidates\(dq : 0, \(dqisEOF\(dq : 1, \(dqnReturned\(dq : 19, \(dqneedTime\(dq : 58, \(dqneedYield\(dq : 0, \(dqrestoreState\(dq : 0, \(dqsaveState\(dq : 0, \(dqstage\(dq : \(dqCOLLSCAN\(dq , \(dqworks\(dq : 78 }, \(dqexecutionSuccess\(dq : true, \(dqexecutionTimeMillis\(dq : 0, \(dqnReturned\(dq : 19, \(dqtotalDocsExamined\(dq : 76, \(dqtotalKeysExamined\(dq : 0 }, \(dqok\(dq : 1, \(dqqueryPlanner\(dq : { \(dqindexFilterSet\(dq : false, \(dqnamespace\(dq : \(dqtest.things\(dq, \(dqparsedQuery\(dq : { \(dqx\(dq : { \(dq$eq\(dq : 1 } }, \(dqplannerVersion\(dq : 1, \(dqrejectedPlans\(dq : [], \(dqwinningPlan\(dq : { \(dqdirection\(dq : \(dqforward\(dq , \(dqfilter\(dq : { \(dqx\(dq : { \(dq$eq\(dq : 1 } }, \(dqstage\(dq : \(dqCOLLSCAN\(dq } }, \(dqserverInfo\(dq : { \(dqgitVersion\(dq : \(dq05552b562c7a0b3143a729aaa0838e558dc49b25\(dq , \(dqhost\(dq : \(dqMacBook\-Pro\-57.local\(dq, \(dqport\(dq : 27017, \(dqversion\(dq : \(dq3.2.6\(dq } } .EE .UNINDENT .UNINDENT .SH ADVANCED CONNECTIONS .sp The following guide 
contains information specific to certain types of MongoDB configurations. .sp For an example of connecting to a simple standalone server, see the \fI\%Tutorial\fP\&. To establish a connection with authentication options enabled, see the \fI\%Authentication\fP page. To see an example of a connection with data compression, see the \fI\%Data Compression\fP page. .SS Connecting to a Replica Set .sp Connecting to a \fI\%replica set\fP is much like connecting to a standalone MongoDB server. Simply specify the replica set name using the \fB?replicaSet=myreplset\fP URI option. .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> #include <stdio.h> int main (int argc, char *argv[]) { mongoc_client_t *client; mongoc_init (); /* Create our MongoDB Client */ client = mongoc_client_new ( \(dqmongodb://host01:27017,host02:27017,host03:27017/?replicaSet=myreplset\(dq); /* Do some work */ /* TODO */ /* Clean up */ mongoc_client_destroy (client); mongoc_cleanup (); return 0; } .EE .UNINDENT .UNINDENT .sp \fBTIP:\fP .INDENT 0.0 .INDENT 3.5 Multiple hostnames can be specified in the MongoDB connection string URI, with a comma separating hosts in the seed list. .sp It is recommended to use a seed list of members of the replica set to allow the driver to connect to any node. .UNINDENT .UNINDENT .SS Connecting to a Sharded Cluster .sp To connect to a \fI\%sharded cluster\fP, specify the \fBmongos\fP nodes the client should connect to. The C Driver will automatically detect that it has connected to a \fBmongos\fP sharding server. .sp If more than one hostname is specified, a seed list will be created to attempt failover between the \fBmongos\fP instances. .sp \fBWARNING:\fP .INDENT 0.0 .INDENT 3.5 Specifying the \fBreplicaSet\fP parameter when connecting to a \fBmongos\fP sharding server is invalid. 
.UNINDENT .UNINDENT .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> #include <stdio.h> int main (int argc, char *argv[]) { mongoc_client_t *client; mongoc_init (); /* Create our MongoDB Client */ client = mongoc_client_new (\(dqmongodb://myshard01:27017/\(dq); /* Do something with client ... */ /* Free the client */ mongoc_client_destroy (client); mongoc_cleanup (); return 0; } .EE .UNINDENT .UNINDENT .SS Connecting to an IPv6 Address .sp The MongoDB C Driver will automatically resolve IPv6 addresses from host names. However, to specify an IPv6 address directly, wrap the address in \fB[]\fP\&. .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_uri_t *uri = mongoc_uri_new (\(dqmongodb://[::1]:27017\(dq); .EE .UNINDENT .UNINDENT .SS Connecting with IPv4 and IPv6 .sp If connecting to a hostname that has both IPv4 and IPv6 DNS records, the behavior follows \fI\%RFC\-6555\fP\&. A connection to the IPv6 address is attempted first. If IPv6 fails, then a connection is attempted to the IPv4 address. If the connection attempt to IPv6 does not complete within 250ms, then IPv4 is tried in parallel. Whichever connection succeeds first cancels the other. The successful DNS result is cached for 10 minutes. .sp As a consequence, attempts to connect to a mongod only listening on IPv4 may be delayed if there are both A (IPv4) and AAAA (IPv6) DNS records associated with the host. .sp To avoid a delay, configure hostnames to match the MongoDB configuration. That is, only create an A record if the mongod is only listening on IPv4. .SS Connecting to a UNIX Domain Socket .sp On UNIX\-like systems, the C Driver can connect directly to a MongoDB server using a UNIX domain socket. Pass the URL\-encoded path to the socket, which \fImust\fP be suffixed with \fB\&.sock\fP\&. 
For example, to connect to a domain socket at \fB/tmp/mongodb\-27017.sock\fP: .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_uri_t *uri = mongoc_uri_new (\(dqmongodb://%2Ftmp%2Fmongodb\-27017.sock\(dq); .EE .UNINDENT .UNINDENT .sp Include username and password like so: .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_uri_t *uri = mongoc_uri_new (\(dqmongodb://user:pass@%2Ftmp%2Fmongodb\-27017.sock\(dq); .EE .UNINDENT .UNINDENT .SS Connecting to a server over TLS .sp These are instructions for configuring TLS/SSL connections. .sp To run a server locally (on port 27017, for example): .INDENT 0.0 .INDENT 3.5 .sp .EX $ mongod \-\-port 27017 \-\-tlsMode requireTLS \-\-tlsCertificateKeyFile server.pem \-\-tlsCAFile ca.pem .EE .UNINDENT .UNINDENT .sp Add \fB/?tls=true\fP to the end of a client URI. .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_client_t *client = NULL; client = mongoc_client_new (\(dqmongodb://localhost:27017/?tls=true\(dq); .EE .UNINDENT .UNINDENT .sp MongoDB requires client certificates by default, unless the \fB\-\-tlsAllowConnectionsWithoutCertificates\fP option is provided. The C Driver can be configured to present a client certificate using the URI option \fBtlsCertificateKeyFile\fP, which may be referenced through the constant \fBMONGOC_URI_TLSCERTIFICATEKEYFILE\fP\&. .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_client_t *client = NULL; mongoc_uri_t *uri = mongoc_uri_new (\(dqmongodb://localhost:27017/?tls=true\(dq); mongoc_uri_set_option_as_utf8 (uri, MONGOC_URI_TLSCERTIFICATEKEYFILE, \(dqclient.pem\(dq); client = mongoc_client_new_from_uri (uri); .EE .UNINDENT .UNINDENT .sp The client certificate provided by \fBtlsCertificateKeyFile\fP must be issued by one of the Certificate Authorities the server trusts: those listed in \fB\-\-tlsCAFile\fP, or, when that option is omitted, CAs in the server\(aqs native certificate store. .sp See \fI\%Configuring TLS\fP for more information on the various TLS\-related options. 
.SS Compressing data to and from MongoDB .sp This content has been relocated to the \fI\%Data Compression\fP page. .SS Additional Connection Options .sp The full list of connection options can be found in the \fI\%mongoc_uri_t\fP docs. .sp Certain socket/connection related options are not configurable: .TS center; |l|l|l|. _ T{ Option T} T{ Description T} T{ Value T} _ T{ SO_KEEPALIVE T} T{ TCP Keep Alive T} T{ Enabled T} _ T{ TCP_KEEPIDLE T} T{ How long a connection needs to remain idle before TCP starts sending keepalive probes T} T{ 120 seconds T} _ T{ TCP_KEEPINTVL T} T{ The time in seconds between TCP probes T} T{ 10 seconds T} _ T{ TCP_KEEPCNT T} T{ How many probes to send, without acknowledgement, before dropping the connection T} T{ 9 probes T} _ T{ TCP_NODELAY T} T{ Send packets as soon as possible or buffer small packets (Nagle algorithm) T} T{ Enabled (no buffering) T} _ .TE .SH CONNECTION POOLING .sp The MongoDB C driver has two connection modes: single\-threaded and pooled. Single\-threaded mode is optimized for embedding the driver within languages like PHP. Multi\-threaded programs should use pooled mode: this mode minimizes the total connection count, and in pooled mode background threads monitor the MongoDB server topology, so the program need not block to scan it. .SS Single Mode .sp In single mode, your program creates a \fI\%mongoc_client_t\fP directly: .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_client_t *client = mongoc_client_new ( \(dqmongodb://hostA,hostB/?replicaSet=my_rs\(dq); .EE .UNINDENT .UNINDENT .sp The client connects on demand when your program first uses it for a MongoDB operation. Using a non\-blocking socket per server, it begins a check on each server concurrently, and uses the asynchronous \fBpoll\fP or \fBselect\fP function to receive events from the sockets, until all have responded or timed out. 
Put another way, in single\-threaded mode the C Driver fans out to begin all checks concurrently, then fans in once all checks have completed or timed out. Once the scan completes, the client executes your program\(aqs operation and returns. .sp In single mode, the client re\-scans the server topology roughly once per minute. If more than a minute has elapsed since the previous scan, the next operation on the client will block while the client completes its scan. This interval is configurable with \fBheartbeatFrequencyMS\fP in the connection string. (See \fI\%mongoc_uri_t\fP\&.) .sp A single client opens one connection per server in your topology: these connections are used both for scanning the topology and performing normal operations. .SS Pooled Mode .sp To activate pooled mode, create a \fI\%mongoc_client_pool_t\fP: .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_uri_t *uri = mongoc_uri_new ( \(dqmongodb://hostA,hostB/?replicaSet=my_rs\(dq); mongoc_client_pool_t *pool = mongoc_client_pool_new (uri); .EE .UNINDENT .UNINDENT .sp When your program first calls \fI\%mongoc_client_pool_pop()\fP, the pool launches monitoring threads in the background. Monitoring threads independently connect to all servers in the connection string. As monitoring threads receive hello responses from the servers, they update the shared view of the server topology. Additional monitoring threads and connections are created as new servers are discovered. Monitoring threads are terminated when servers are removed from the shared view of the server topology. .sp Each thread that executes MongoDB operations must check out a client from the pool: .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_client_t *client = mongoc_client_pool_pop (pool); /* use the client for operations ... */ mongoc_client_pool_push (pool, client); .EE .UNINDENT .UNINDENT .sp The \fI\%mongoc_client_t\fP object is not thread\-safe, only the \fI\%mongoc_client_pool_t\fP is. 
.sp When the driver is in pooled mode, your program\(aqs operations are unblocked as soon as monitoring discovers a usable server. For example, if a thread in your program is waiting to execute an \(dqinsert\(dq on the primary, it is unblocked as soon as the primary is discovered, rather than waiting for all secondaries to be checked as well. .sp The pool opens one connection per server for monitoring, and each client opens its own connection to each server it uses for application operations. Background monitoring threads re\-scan servers independently roughly every 10 seconds. This interval is configurable with \fBheartbeatFrequencyMS\fP in the connection string. (See \fI\%mongoc_uri_t\fP\&.) .sp The connection string can also specify \fBwaitQueueTimeoutMS\fP to limit the time that \fI\%mongoc_client_pool_pop()\fP will wait for a client from the pool. (See \fI\%mongoc_uri_t\fP\&.) If \fBwaitQueueTimeoutMS\fP is specified, then it is necessary to confirm that a client was actually returned: .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_uri_t *uri = mongoc_uri_new ( \(dqmongodb://hostA,hostB/?replicaSet=my_rs&waitQueueTimeoutMS=1000\(dq); mongoc_client_pool_t *pool = mongoc_client_pool_new (uri); mongoc_client_t *client = mongoc_client_pool_pop (pool); if (client) { /* use the client for operations ... */ mongoc_client_pool_push (pool, client); } else { /* take appropriate action for a timeout */ } .EE .UNINDENT .UNINDENT .sp See \fI\%Connection Pool Options\fP to configure pool size and behavior, and see \fI\%mongoc_client_pool_t\fP for an extended example of a multi\-threaded program that uses the driver in pooled mode. .SH DATA COMPRESSION .sp The following guide explains how data compression support works between the MongoDB server and client. It also shows an example of how to connect to a server with data compression. 
.SS Compressing data to and from MongoDB .sp MongoDB 3.4 added Snappy compression support, while zlib compression was added in 3.6, and zstd compression in 4.2. To enable compression support, the client must be configured with the compressors to use: .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_client_t *client = NULL; client = mongoc_client_new (\(dqmongodb://localhost:27017/?compressors=snappy,zlib,zstd\(dq); .EE .UNINDENT .UNINDENT .sp The \fBcompressors\fP option specifies the priority order of compressors the client wants to use. Messages are compressed if the client and server share any compressors in common. .sp Note that the compressor used by the server might not be the same compressor as the client used. For example, if the client uses the connection string \fBcompressors=zlib,snappy\fP the client will use \fBzlib\fP compression to send data (if possible), but the server might still reply using \fBsnappy\fP, depending on how the server was configured. .sp The driver must be built with zlib and/or snappy and/or zstd support to enable compression support; any unknown (or not compiled in) compressor value will be ignored. .SH CURSORS .SS Handling Cursor Failures .sp Cursors exist on a MongoDB server. However, the \fBmongoc_cursor_t\fP structure gives the local process a handle to the cursor. It is possible for errors to occur on the server while iterating a cursor on the client. Even a network partition may occur. This means that applications should be robust in handling cursor failures. .sp While iterating cursors, you should check to see if an error has occurred. See the following example for how to robustly check for errors. 
.INDENT 0.0 .INDENT 3.5 .sp .EX static void print_all_documents (mongoc_collection_t *collection) { mongoc_cursor_t *cursor; const bson_t *doc; bson_error_t error; bson_t query = BSON_INITIALIZER; char *str; cursor = mongoc_collection_find_with_opts (collection, &query, NULL, NULL); while (mongoc_cursor_next (cursor, &doc)) { str = bson_as_canonical_extended_json (doc, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); } if (mongoc_cursor_error (cursor, &error)) { fprintf (stderr, \(dqFailed to iterate all documents: %s\en\(dq, error.message); } mongoc_cursor_destroy (cursor); } .EE .UNINDENT .UNINDENT .SS Destroying Server\-Side Cursors .sp The MongoDB C driver will automatically destroy a server\-side cursor when \fI\%mongoc_cursor_destroy()\fP is called. Failure to call this function when done with a cursor will leak memory client side as well as consume extra memory server side. If the cursor was configured to never timeout, it will become a memory leak on the server. .SS Tailable Cursors .sp Tailable cursors are cursors that remain open even after they\(aqve returned a final result. This way, if more documents are added to a collection (i.e., to the cursor\(aqs result set), then you can continue to call \fI\%mongoc_cursor_next()\fP to retrieve those additional results. .sp Here\(aqs a complete test case that demonstrates the use of tailable cursors. .sp \fBNOTE:\fP .INDENT 0.0 .INDENT 3.5 Tailable cursors are for capped collections only. .UNINDENT .UNINDENT .sp An example to tail the oplog from a replica set. 
.sp mongoc\-tail.c .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> #include <stdio.h> #include <stdlib.h> #include <time.h> #ifdef _WIN32 #define sleep(_n) Sleep ((_n) * 1000) #endif static void print_bson (const bson_t *b) { char *str; str = bson_as_canonical_extended_json (b, NULL); fprintf (stdout, \(dq%s\en\(dq, str); bson_free (str); } static mongoc_cursor_t * query_collection (mongoc_collection_t *collection, uint32_t last_time) { mongoc_cursor_t *cursor; bson_t query; bson_t gt; bson_t opts; BSON_ASSERT (collection); bson_init (&query); BSON_APPEND_DOCUMENT_BEGIN (&query, \(dqts\(dq, &gt); BSON_APPEND_TIMESTAMP (&gt, \(dq$gt\(dq, last_time, 0); bson_append_document_end (&query, &gt); bson_init (&opts); BSON_APPEND_BOOL (&opts, \(dqtailable\(dq, true); BSON_APPEND_BOOL (&opts, \(dqawaitData\(dq, true); cursor = mongoc_collection_find_with_opts (collection, &query, &opts, NULL); bson_destroy (&query); bson_destroy (&opts); return cursor; } static void tail_collection (mongoc_collection_t *collection) { mongoc_cursor_t *cursor; uint32_t last_time; const bson_t *doc; bson_error_t error; bson_iter_t iter; BSON_ASSERT (collection); last_time = (uint32_t) time (NULL); while (true) { cursor = query_collection (collection, last_time); while (!mongoc_cursor_error (cursor, &error) && mongoc_cursor_more (cursor)) { if (mongoc_cursor_next (cursor, &doc)) { if (bson_iter_init_find (&iter, doc, \(dqts\(dq) && BSON_ITER_HOLDS_TIMESTAMP (&iter)) { bson_iter_timestamp (&iter, &last_time, NULL); } print_bson (doc); } } if (mongoc_cursor_error (cursor, &error)) { if (error.domain == MONGOC_ERROR_SERVER) { fprintf (stderr, \(dq%s\en\(dq, error.message); exit (1); } } mongoc_cursor_destroy (cursor); sleep (1); } } int main (int argc, char *argv[]) { mongoc_collection_t *collection; mongoc_client_t *client; mongoc_uri_t *uri; bson_error_t error; if (argc != 2) { fprintf (stderr, \(dqusage: %s MONGO_URI\en\(dq, argv[0]); return EXIT_FAILURE; } mongoc_init (); uri = mongoc_uri_new_with_error (argv[1], &error); if (!uri) {
fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, argv[1], error.message); return EXIT_FAILURE; } client = mongoc_client_new_from_uri (uri); if (!client) { return EXIT_FAILURE; } mongoc_client_set_error_api (client, 2); collection = mongoc_client_get_collection (client, \(dqlocal\(dq, \(dqoplog.rs\(dq); tail_collection (collection); mongoc_collection_destroy (collection); mongoc_uri_destroy (uri); mongoc_client_destroy (client); return EXIT_SUCCESS; } .EE .UNINDENT .UNINDENT .sp Let\(aqs compile and run this example against a replica set to see updates as they are made. .INDENT 0.0 .INDENT 3.5 .sp .EX $ gcc \-Wall \-o mongoc\-tail mongoc\-tail.c $(pkg\-config \-\-cflags \-\-libs libmongoc\-1.0) $ ./mongoc\-tail mongodb://example.com/?replicaSet=myReplSet { \(dqh\(dq : \-8458503739429355503, \(dqns\(dq : \(dqtest.test\(dq, \(dqo\(dq : { \(dq_id\(dq : { \(dq$oid\(dq : \(dq5372ab0a25164be923d10d50\(dq } }, \(dqop\(dq : \(dqi\(dq, \(dqts\(dq : { \(dq$timestamp\(dq : { \(dqi\(dq : 1, \(dqt\(dq : 1400023818 } }, \(dqv\(dq : 2 } .EE .UNINDENT .UNINDENT .sp The line of output is a sample from performing \fBdb.test.insert({})\fP from the mongo shell on the replica set. .sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 .nf \fI\%mongoc_cursor_set_max_await_time_ms()\fP\&. .fi .sp .UNINDENT .UNINDENT .SH BULK WRITE OPERATIONS .sp This tutorial explains how to take advantage of MongoDB C driver bulk write operation features. Executing write operations in batches reduces the number of network round trips, increasing write throughput. .SS Bulk Insert .sp First we need to fetch a bulk operation handle from the \fI\%mongoc_collection_t\fP\&. .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_bulk_operation_t *bulk = mongoc_collection_create_bulk_operation_with_opts (collection, NULL); .EE .UNINDENT .UNINDENT .sp We can now start inserting documents to the bulk operation. These will be buffered until we execute the operation. 
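.sp For example, a single document can be queued on the bulk handle like this (a minimal sketch; \fBbulk\fP is the handle created above, and the \(dqhello\(dq document is purely illustrative):

```c
/* Queue one insert on the bulk handle; nothing is sent to the server yet. */
bson_t *doc = BCON_NEW ("hello", BCON_UTF8 ("world"));
mongoc_bulk_operation_insert (bulk, doc);
/* The bulk operation buffers its own copy, so the document can be freed now. */
bson_destroy (doc);
```

Each queued operation is held client side until the bulk operation is executed.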
.sp The bulk operation will coalesce insertions as a single batch for each consecutive call to \fI\%mongoc_bulk_operation_insert()\fP\&. This creates a pipelined effect when possible. .sp To execute the bulk operation and receive the result, we call \fI\%mongoc_bulk_operation_execute()\fP\&. .sp bulk1.c .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> #include <stdio.h> #include <stdlib.h> static void bulk1 (mongoc_collection_t *collection) { mongoc_bulk_operation_t *bulk; bson_error_t error; bson_t *doc; bson_t reply; char *str; bool ret; int i; bulk = mongoc_collection_create_bulk_operation_with_opts (collection, NULL); for (i = 0; i < 10000; i++) { doc = BCON_NEW (\(dqi\(dq, BCON_INT32 (i)); mongoc_bulk_operation_insert (bulk, doc); bson_destroy (doc); } ret = mongoc_bulk_operation_execute (bulk, &reply, &error); str = bson_as_canonical_extended_json (&reply, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); if (!ret) { fprintf (stderr, \(dqError: %s\en\(dq, error.message); } bson_destroy (&reply); mongoc_bulk_operation_destroy (bulk); } int main (void) { mongoc_client_t *client; mongoc_collection_t *collection; const char *uri_string = \(dqmongodb://localhost/?appname=bulk1\-example\(dq; mongoc_uri_t *uri; bson_error_t error; mongoc_init (); uri = mongoc_uri_new_with_error (uri_string, &error); if (!uri) { fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, uri_string, error.message); return EXIT_FAILURE; } client = mongoc_client_new_from_uri (uri); if (!client) { return EXIT_FAILURE; } mongoc_client_set_error_api (client, 2); collection = mongoc_client_get_collection (client, \(dqtest\(dq, \(dqtest\(dq); bulk1 (collection); mongoc_uri_destroy (uri); mongoc_collection_destroy (collection); mongoc_client_destroy (client); mongoc_cleanup (); return EXIT_SUCCESS; } .EE .UNINDENT .UNINDENT .sp Example \fBreply\fP document: .INDENT 0.0 .INDENT 3.5 .sp .EX {\(dqnInserted\(dq : 10000, \(dqnMatched\(dq : 0, \(dqnModified\(dq : 0, \(dqnRemoved\(dq : 0,
\(dqnUpserted\(dq : 0, \(dqwriteErrors\(dq : [], \(dqwriteConcernErrors\(dq : [] } .EE .UNINDENT .UNINDENT .SS Mixed Bulk Write Operations .sp The MongoDB C driver also supports executing mixed bulk write operations. A batch of insert, update, and remove operations can be executed together using the bulk write operations API. .SS Ordered Bulk Write Operations .sp Ordered bulk write operations are batched and sent to the server in the order provided for serial execution. The \fBreply\fP document describes the type and count of operations performed. .sp bulk2.c .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> #include <stdio.h> #include <stdlib.h> static void bulk2 (mongoc_collection_t *collection) { mongoc_bulk_operation_t *bulk; bson_error_t error; bson_t *query; bson_t *doc; bson_t *opts; bson_t reply; char *str; bool ret; int i; bulk = mongoc_collection_create_bulk_operation_with_opts (collection, NULL); /* Remove everything */ query = bson_new (); mongoc_bulk_operation_remove (bulk, query); bson_destroy (query); /* Add a few documents */ for (i = 1; i < 4; i++) { doc = BCON_NEW (\(dq_id\(dq, BCON_INT32 (i)); mongoc_bulk_operation_insert (bulk, doc); bson_destroy (doc); } /* {_id: 1} => {$set: {foo: \(dqbar\(dq}} */ query = BCON_NEW (\(dq_id\(dq, BCON_INT32 (1)); doc = BCON_NEW (\(dq$set\(dq, \(dq{\(dq, \(dqfoo\(dq, BCON_UTF8 (\(dqbar\(dq), \(dq}\(dq); mongoc_bulk_operation_update_many_with_opts (bulk, query, doc, NULL, &error); bson_destroy (query); bson_destroy (doc); /* {_id: 4} => {\(aq$inc\(aq: {\(aqj\(aq: 1}} (upsert) */ opts = BCON_NEW (\(dqupsert\(dq, BCON_BOOL (true)); query = BCON_NEW (\(dq_id\(dq, BCON_INT32 (4)); doc = BCON_NEW (\(dq$inc\(dq, \(dq{\(dq, \(dqj\(dq, BCON_INT32 (1), \(dq}\(dq); mongoc_bulk_operation_update_many_with_opts (bulk, query, doc, opts, &error); bson_destroy (query); bson_destroy (doc); bson_destroy (opts); /* replace {j:1} with {j:2} */ query = BCON_NEW (\(dqj\(dq, BCON_INT32 (1)); doc = BCON_NEW (\(dqj\(dq, BCON_INT32 (2));
mongoc_bulk_operation_replace_one_with_opts (bulk, query, doc, NULL, &error); bson_destroy (query); bson_destroy (doc); ret = mongoc_bulk_operation_execute (bulk, &reply, &error); str = bson_as_canonical_extended_json (&reply, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); if (!ret) { printf (\(dqError: %s\en\(dq, error.message); } bson_destroy (&reply); mongoc_bulk_operation_destroy (bulk); } int main (void) { mongoc_client_t *client; mongoc_collection_t *collection; const char *uri_string = \(dqmongodb://localhost/?appname=bulk2\-example\(dq; mongoc_uri_t *uri; bson_error_t error; mongoc_init (); uri = mongoc_uri_new_with_error (uri_string, &error); if (!uri) { fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, uri_string, error.message); return EXIT_FAILURE; } client = mongoc_client_new_from_uri (uri); if (!client) { return EXIT_FAILURE; } mongoc_client_set_error_api (client, 2); collection = mongoc_client_get_collection (client, \(dqtest\(dq, \(dqtest\(dq); bulk2 (collection); mongoc_uri_destroy (uri); mongoc_collection_destroy (collection); mongoc_client_destroy (client); mongoc_cleanup (); return EXIT_SUCCESS; } .EE .UNINDENT .UNINDENT .sp Example \fBreply\fP document: .INDENT 0.0 .INDENT 3.5 .sp .EX { \(dqnInserted\(dq : 3, \(dqnMatched\(dq : 2, \(dqnModified\(dq : 2, \(dqnRemoved\(dq : 10000, \(dqnUpserted\(dq : 1, \(dqupserted\(dq : [{\(dqindex\(dq : 5, \(dq_id\(dq : 4}], \(dqwriteErrors\(dq : [], \(dqwriteConcernErrors\(dq : [] } .EE .UNINDENT .UNINDENT .sp The \fBindex\fP field in the \fBupserted\fP array is the 0\-based index of the upsert operation; in this example, the sixth operation of the overall bulk operation was an upsert, so its index is 5. .SS Unordered Bulk Write Operations .sp Unordered bulk write operations are batched and sent to the server in \fIarbitrary order\fP, where they may be executed in parallel. Any errors that occur are reported after all operations are attempted.
.sp In the next example the first and third operations fail due to the unique constraint on \fB_id\fP\&. Since we are doing unordered execution, the second and fourth operations succeed. .sp bulk3.c .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> #include <stdio.h> #include <stdlib.h> static void bulk3 (mongoc_collection_t *collection) { bson_t opts = BSON_INITIALIZER; mongoc_bulk_operation_t *bulk; bson_error_t error; bson_t *query; bson_t *doc; bson_t reply; char *str; bool ret; /* false indicates unordered */ BSON_APPEND_BOOL (&opts, \(dqordered\(dq, false); bulk = mongoc_collection_create_bulk_operation_with_opts (collection, &opts); bson_destroy (&opts); /* Add a document */ doc = BCON_NEW (\(dq_id\(dq, BCON_INT32 (1)); mongoc_bulk_operation_insert (bulk, doc); bson_destroy (doc); /* remove {_id: 2} */ query = BCON_NEW (\(dq_id\(dq, BCON_INT32 (2)); mongoc_bulk_operation_remove_one (bulk, query); bson_destroy (query); /* insert {_id: 3} */ doc = BCON_NEW (\(dq_id\(dq, BCON_INT32 (3)); mongoc_bulk_operation_insert (bulk, doc); bson_destroy (doc); /* replace {_id:4} {\(aqi\(aq: 1} */ query = BCON_NEW (\(dq_id\(dq, BCON_INT32 (4)); doc = BCON_NEW (\(dqi\(dq, BCON_INT32 (1)); mongoc_bulk_operation_replace_one (bulk, query, doc, false); bson_destroy (query); bson_destroy (doc); ret = mongoc_bulk_operation_execute (bulk, &reply, &error); str = bson_as_canonical_extended_json (&reply, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); if (!ret) { printf (\(dqError: %s\en\(dq, error.message); } bson_destroy (&reply); mongoc_bulk_operation_destroy (bulk); } int main (void) { mongoc_client_t *client; mongoc_collection_t *collection; const char *uri_string = \(dqmongodb://localhost/?appname=bulk3\-example\(dq; mongoc_uri_t *uri; bson_error_t error; mongoc_init (); uri = mongoc_uri_new_with_error (uri_string, &error); if (!uri) { fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, uri_string, error.message); return EXIT_FAILURE; } client =
mongoc_client_new_from_uri (uri); if (!client) { return EXIT_FAILURE; } mongoc_client_set_error_api (client, 2); collection = mongoc_client_get_collection (client, \(dqtest\(dq, \(dqtest\(dq); bulk3 (collection); mongoc_uri_destroy (uri); mongoc_collection_destroy (collection); mongoc_client_destroy (client); mongoc_cleanup (); return EXIT_SUCCESS; } .EE .UNINDENT .UNINDENT .sp Example \fBreply\fP document: .INDENT 0.0 .INDENT 3.5 .sp .EX { \(dqnInserted\(dq : 0, \(dqnMatched\(dq : 1, \(dqnModified\(dq : 1, \(dqnRemoved\(dq : 1, \(dqnUpserted\(dq : 0, \(dqwriteErrors\(dq : [ { \(dqindex\(dq : 0, \(dqcode\(dq : 11000, \(dqerrmsg\(dq : \(dqE11000 duplicate key error index: test.test.$_id_ dup key: { : 1 }\(dq }, { \(dqindex\(dq : 2, \(dqcode\(dq : 11000, \(dqerrmsg\(dq : \(dqE11000 duplicate key error index: test.test.$_id_ dup key: { : 3 }\(dq } ], \(dqwriteConcernErrors\(dq : [] } Error: E11000 duplicate key error index: test.test.$_id_ dup key: { : 1 } .EE .UNINDENT .UNINDENT .sp The \fI\%bson_error_t\fP domain is \fBMONGOC_ERROR_COMMAND\fP and its code is 11000. .SS Bulk Operation Bypassing Document Validation .sp This feature is only available when using MongoDB 3.2 and later. .sp By default, bulk operations are validated against the schema, if any is defined. In certain cases, however, it may be necessary to bypass document validation.
.sp bulk5.c .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> #include <stdio.h> #include <stdlib.h> static void bulk5_fail (mongoc_collection_t *collection) { mongoc_bulk_operation_t *bulk; bson_error_t error; bson_t *doc; bson_t reply; char *str; bool ret; bulk = mongoc_collection_create_bulk_operation_with_opts (collection, NULL); /* Two inserts */ doc = BCON_NEW (\(dq_id\(dq, BCON_INT32 (31)); mongoc_bulk_operation_insert (bulk, doc); bson_destroy (doc); doc = BCON_NEW (\(dq_id\(dq, BCON_INT32 (32)); mongoc_bulk_operation_insert (bulk, doc); bson_destroy (doc); /* The above documents do not comply with the schema validation rules * we created previously, so this will result in an error */ ret = mongoc_bulk_operation_execute (bulk, &reply, &error); str = bson_as_canonical_extended_json (&reply, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); if (!ret) { printf (\(dqError: %s\en\(dq, error.message); } bson_destroy (&reply); mongoc_bulk_operation_destroy (bulk); } static void bulk5_success (mongoc_collection_t *collection) { mongoc_bulk_operation_t *bulk; bson_error_t error; bson_t *doc; bson_t reply; char *str; bool ret; bulk = mongoc_collection_create_bulk_operation_with_opts (collection, NULL); /* Allow this document to bypass document validation.
* NOTE: When authentication is enabled, the authenticated user must have * either the \(dqdbadmin\(dq or \(dqrestore\(dq roles to bypass document validation */ mongoc_bulk_operation_set_bypass_document_validation (bulk, true); /* Two inserts */ doc = BCON_NEW (\(dq_id\(dq, BCON_INT32 (31)); mongoc_bulk_operation_insert (bulk, doc); bson_destroy (doc); doc = BCON_NEW (\(dq_id\(dq, BCON_INT32 (32)); mongoc_bulk_operation_insert (bulk, doc); bson_destroy (doc); ret = mongoc_bulk_operation_execute (bulk, &reply, &error); str = bson_as_canonical_extended_json (&reply, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); if (!ret) { printf (\(dqError: %s\en\(dq, error.message); } bson_destroy (&reply); mongoc_bulk_operation_destroy (bulk); } int main (void) { bson_t *options; bson_error_t error; mongoc_client_t *client; mongoc_collection_t *collection; mongoc_database_t *database; const char *uri_string = \(dqmongodb://localhost/?appname=bulk5\-example\(dq; mongoc_uri_t *uri; mongoc_init (); uri = mongoc_uri_new_with_error (uri_string, &error); if (!uri) { fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, uri_string, error.message); return EXIT_FAILURE; } client = mongoc_client_new_from_uri (uri); if (!client) { return EXIT_FAILURE; } mongoc_client_set_error_api (client, 2); database = mongoc_client_get_database (client, \(dqtestasdf\(dq); /* Create schema validator */ options = BCON_NEW (\(dqvalidator\(dq, \(dq{\(dq, \(dqnumber\(dq, \(dq{\(dq, \(dq$gte\(dq, BCON_INT32 (5), \(dq}\(dq, \(dq}\(dq); collection = mongoc_database_create_collection (database, \(dqcollname\(dq, options, &error); if (collection) { bulk5_fail (collection); bulk5_success (collection); mongoc_collection_destroy (collection); } else { fprintf (stderr, \(dqCouldn\(aqt create collection: \(aq%s\(aq\en\(dq, error.message); } bson_destroy (options); mongoc_uri_destroy (uri); mongoc_database_destroy (database); mongoc_client_destroy (client); mongoc_cleanup (); return
EXIT_SUCCESS; } .EE .UNINDENT .UNINDENT .sp Running the above example will result in: .INDENT 0.0 .INDENT 3.5 .sp .EX { \(dqnInserted\(dq : 0, \(dqnMatched\(dq : 0, \(dqnModified\(dq : 0, \(dqnRemoved\(dq : 0, \(dqnUpserted\(dq : 0, \(dqwriteErrors\(dq : [ { \(dqindex\(dq : 0, \(dqcode\(dq : 121, \(dqerrmsg\(dq : \(dqDocument failed validation\(dq } ] } Error: Document failed validation { \(dqnInserted\(dq : 2, \(dqnMatched\(dq : 0, \(dqnModified\(dq : 0, \(dqnRemoved\(dq : 0, \(dqnUpserted\(dq : 0, \(dqwriteErrors\(dq : [] } .EE .UNINDENT .UNINDENT .sp The \fI\%bson_error_t\fP domain is \fBMONGOC_ERROR_COMMAND\fP\&. .SS Bulk Operation Write Concerns .sp By default bulk operations are executed with the \fI\%write_concern\fP of the collection they are executed against. A custom write concern can be passed to the \fI\%mongoc_collection_create_bulk_operation_with_opts()\fP method. Write concern errors (e.g. wtimeout) will be reported after all operations are attempted, regardless of execution order. 
.sp bulk4.c .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> #include <stdio.h> #include <stdlib.h> static void bulk4 (mongoc_collection_t *collection) { bson_t opts = BSON_INITIALIZER; mongoc_write_concern_t *wc; mongoc_bulk_operation_t *bulk; bson_error_t error; bson_t *doc; bson_t reply; char *str; bool ret; wc = mongoc_write_concern_new (); mongoc_write_concern_set_w (wc, 4); mongoc_write_concern_set_wtimeout_int64 (wc, 100); /* milliseconds */ mongoc_write_concern_append (wc, &opts); bulk = mongoc_collection_create_bulk_operation_with_opts (collection, &opts); /* Two inserts */ doc = BCON_NEW (\(dq_id\(dq, BCON_INT32 (10)); mongoc_bulk_operation_insert (bulk, doc); bson_destroy (doc); doc = BCON_NEW (\(dq_id\(dq, BCON_INT32 (11)); mongoc_bulk_operation_insert (bulk, doc); bson_destroy (doc); ret = mongoc_bulk_operation_execute (bulk, &reply, &error); str = bson_as_canonical_extended_json (&reply, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); if (!ret) { printf (\(dqError: %s\en\(dq, error.message); } bson_destroy (&reply); mongoc_bulk_operation_destroy (bulk); mongoc_write_concern_destroy (wc); bson_destroy (&opts); } int main (void) { mongoc_client_t *client; mongoc_collection_t *collection; const char *uri_string = \(dqmongodb://localhost/?appname=bulk4\-example\(dq; mongoc_uri_t *uri; bson_error_t error; mongoc_init (); uri = mongoc_uri_new_with_error (uri_string, &error); if (!uri) { fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, uri_string, error.message); return EXIT_FAILURE; } client = mongoc_client_new_from_uri (uri); if (!client) { return EXIT_FAILURE; } mongoc_client_set_error_api (client, 2); collection = mongoc_client_get_collection (client, \(dqtest\(dq, \(dqtest\(dq); bulk4 (collection); mongoc_uri_destroy (uri); mongoc_collection_destroy (collection); mongoc_client_destroy (client); mongoc_cleanup (); return EXIT_SUCCESS; } .EE .UNINDENT .UNINDENT .sp Example \fBreply\fP document and error message: .INDENT 0.0 .INDENT 3.5 .sp .EX {
\(dqnInserted\(dq : 2, \(dqnMatched\(dq : 0, \(dqnModified\(dq : 0, \(dqnRemoved\(dq : 0, \(dqnUpserted\(dq : 0, \(dqwriteErrors\(dq : [], \(dqwriteConcernErrors\(dq : [ { \(dqcode\(dq : 64, \(dqerrmsg\(dq : \(dqwaiting for replication timed out\(dq } ] } Error: waiting for replication timed out .EE .UNINDENT .UNINDENT .sp The \fI\%bson_error_t\fP domain is \fBMONGOC_ERROR_WRITE_CONCERN\fP if there are write concern errors and no write errors. Write errors indicate failed operations, so they take precedence over write concern errors, which mean merely that the write concern is not satisfied \fIyet\fP\&. .SS Setting Collation Order .sp This feature is only available when using MongoDB 3.4 and later. .sp bulk\-collation.c .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> #include <stdio.h> #include <stdlib.h> static void bulk_collation (mongoc_collection_t *collection) { mongoc_bulk_operation_t *bulk; bson_t *opts; bson_t *doc; bson_t *selector; bson_t *update; bson_error_t error; bson_t reply; char *str; uint32_t ret; /* insert {_id: \(dqone\(dq} and {_id: \(dqOne\(dq} */ bulk = mongoc_collection_create_bulk_operation_with_opts (collection, NULL); doc = BCON_NEW (\(dq_id\(dq, BCON_UTF8 (\(dqone\(dq)); mongoc_bulk_operation_insert (bulk, doc); bson_destroy (doc); doc = BCON_NEW (\(dq_id\(dq, BCON_UTF8 (\(dqOne\(dq)); mongoc_bulk_operation_insert (bulk, doc); bson_destroy (doc); /* \(dqOne\(dq normally sorts before \(dqone\(dq; make \(dqone\(dq come first */ opts = BCON_NEW (\(dqcollation\(dq, \(dq{\(dq, \(dqlocale\(dq, BCON_UTF8 (\(dqen_US\(dq), \(dqcaseFirst\(dq, BCON_UTF8 (\(dqlower\(dq), \(dq}\(dq); /* set x=1 on the document with _id \(dqOne\(dq, which now sorts after \(dqone\(dq */ update = BCON_NEW (\(dq$set\(dq, \(dq{\(dq, \(dqx\(dq, BCON_INT64 (1), \(dq}\(dq); selector = BCON_NEW (\(dq_id\(dq, \(dq{\(dq, \(dq$gt\(dq, BCON_UTF8 (\(dqone\(dq), \(dq}\(dq); mongoc_bulk_operation_update_one_with_opts (bulk, selector, update, opts, &error); ret = mongoc_bulk_operation_execute (bulk, &reply, &error); str
= bson_as_canonical_extended_json (&reply, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); if (!ret) { printf (\(dqError: %s\en\(dq, error.message); } bson_destroy (&reply); bson_destroy (update); bson_destroy (selector); bson_destroy (opts); mongoc_bulk_operation_destroy (bulk); } int main (void) { mongoc_client_t *client; mongoc_collection_t *collection; const char *uri_string = \(dqmongodb://localhost/?appname=bulk\-collation\(dq; mongoc_uri_t *uri; bson_error_t error; mongoc_init (); uri = mongoc_uri_new_with_error (uri_string, &error); if (!uri) { fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, uri_string, error.message); return EXIT_FAILURE; } client = mongoc_client_new_from_uri (uri); if (!client) { return EXIT_FAILURE; } mongoc_client_set_error_api (client, 2); collection = mongoc_client_get_collection (client, \(dqdb\(dq, \(dqcollection\(dq); bulk_collation (collection); mongoc_uri_destroy (uri); mongoc_collection_destroy (collection); mongoc_client_destroy (client); mongoc_cleanup (); return EXIT_SUCCESS; } .EE .UNINDENT .UNINDENT .sp Running the above example will result in: .INDENT 0.0 .INDENT 3.5 .sp .EX { \(dqnInserted\(dq : 2, \(dqnMatched\(dq : 1, \(dqnModified\(dq : 1, \(dqnRemoved\(dq : 0, \(dqnUpserted\(dq : 0, \(dqwriteErrors\(dq : [ ] } .EE .UNINDENT .UNINDENT .SS Unacknowledged Bulk Writes .sp Set \(dqw\(dq to zero for an unacknowledged write. The driver sends unacknowledged writes using the legacy opcodes \fBOP_INSERT\fP, \fBOP_UPDATE\fP, and \fBOP_DELETE\fP\&. 
.sp bulk6.c .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> #include <stdio.h> #include <stdlib.h> static void bulk6 (mongoc_collection_t *collection) { bson_t opts = BSON_INITIALIZER; mongoc_write_concern_t *wc; mongoc_bulk_operation_t *bulk; bson_error_t error; bson_t *doc; bson_t *selector; bson_t reply; char *str; bool ret; wc = mongoc_write_concern_new (); mongoc_write_concern_set_w (wc, 0); mongoc_write_concern_append (wc, &opts); bulk = mongoc_collection_create_bulk_operation_with_opts (collection, &opts); doc = BCON_NEW (\(dq_id\(dq, BCON_INT32 (10)); mongoc_bulk_operation_insert (bulk, doc); bson_destroy (doc); selector = BCON_NEW (\(dq_id\(dq, BCON_INT32 (11)); mongoc_bulk_operation_remove_one (bulk, selector); bson_destroy (selector); ret = mongoc_bulk_operation_execute (bulk, &reply, &error); str = bson_as_canonical_extended_json (&reply, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); if (!ret) { printf (\(dqError: %s\en\(dq, error.message); } bson_destroy (&reply); mongoc_bulk_operation_destroy (bulk); mongoc_write_concern_destroy (wc); bson_destroy (&opts); } int main (void) { mongoc_client_t *client; mongoc_collection_t *collection; const char *uri_string = \(dqmongodb://localhost/?appname=bulk6\-example\(dq; mongoc_uri_t *uri; bson_error_t error; mongoc_init (); uri = mongoc_uri_new_with_error (uri_string, &error); if (!uri) { fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, uri_string, error.message); return EXIT_FAILURE; } client = mongoc_client_new_from_uri (uri); if (!client) { return EXIT_FAILURE; } mongoc_client_set_error_api (client, 2); collection = mongoc_client_get_collection (client, \(dqtest\(dq, \(dqtest\(dq); bulk6 (collection); mongoc_uri_destroy (uri); mongoc_collection_destroy (collection); mongoc_client_destroy (client); mongoc_cleanup (); return EXIT_SUCCESS; } .EE .UNINDENT .UNINDENT .sp The \fBreply\fP document is empty: .INDENT 0.0 .INDENT 3.5 .sp .EX { } .EE .UNINDENT .UNINDENT .SS Further Reading .sp See the \fI\%Driver
Bulk API Spec\fP, which describes bulk write operations for all MongoDB drivers. .SH AGGREGATION FRAMEWORK EXAMPLES .sp This document provides a number of practical examples that display the capabilities of the aggregation framework. .sp The \fI\%Aggregations using the Zip Codes Data Set\fP examples use a publicly available data set of all zipcodes and populations in the United States. These data are available at: \fI\%zips.json\fP\&. .SS Requirements .sp Let\(aqs check that everything is installed. .sp Use the following command to load the zips.json data set into a mongod instance: .INDENT 0.0 .INDENT 3.5 .sp .EX $ mongoimport \-\-drop \-d test \-c zipcodes zips.json .EE .UNINDENT .UNINDENT .sp Let\(aqs use the MongoDB shell to verify that everything was imported successfully. .INDENT 0.0 .INDENT 3.5 .sp .EX $ mongo test connecting to: test > db.zipcodes.count() 29467 > db.zipcodes.findOne() { \(dq_id\(dq : \(dq35004\(dq, \(dqcity\(dq : \(dqACMAR\(dq, \(dqloc\(dq : [ \-86.51557, 33.584132 ], \(dqpop\(dq : 6055, \(dqstate\(dq : \(dqAL\(dq } .EE .UNINDENT .UNINDENT .SS Aggregations using the Zip Codes Data Set .sp Each document in this collection has the following form: .INDENT 0.0 .INDENT 3.5 .sp .EX { \(dq_id\(dq : \(dq35004\(dq, \(dqcity\(dq : \(dqAcmar\(dq, \(dqstate\(dq : \(dqAL\(dq, \(dqpop\(dq : 6055, \(dqloc\(dq : [\-86.51557, 33.584132] } .EE .UNINDENT .UNINDENT .sp In these documents: .INDENT 0.0 .IP \(bu 2 The \fB_id\fP field holds the zipcode as a string. .IP \(bu 2 The \fBcity\fP field holds the city name. .IP \(bu 2 The \fBstate\fP field holds the two\-letter state abbreviation. .IP \(bu 2 The \fBpop\fP field holds the population. .IP \(bu 2 The \fBloc\fP field holds the location as a \fB[longitude, latitude]\fP array.
.UNINDENT .SS States with Populations Over 10 Million .sp To get all states with a population greater than 10 million, use the following aggregation pipeline: .sp aggregation1.c .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> #include <stdio.h> #include <stdlib.h> static void print_pipeline (mongoc_collection_t *collection) { mongoc_cursor_t *cursor; bson_error_t error; const bson_t *doc; bson_t *pipeline; char *str; pipeline = BCON_NEW (\(dqpipeline\(dq, \(dq[\(dq, \(dq{\(dq, \(dq$group\(dq, \(dq{\(dq, \(dq_id\(dq, \(dq$state\(dq, \(dqtotal_pop\(dq, \(dq{\(dq, \(dq$sum\(dq, \(dq$pop\(dq, \(dq}\(dq, \(dq}\(dq, \(dq}\(dq, \(dq{\(dq, \(dq$match\(dq, \(dq{\(dq, \(dqtotal_pop\(dq, \(dq{\(dq, \(dq$gte\(dq, BCON_INT32 (10000000), \(dq}\(dq, \(dq}\(dq, \(dq}\(dq, \(dq]\(dq); cursor = mongoc_collection_aggregate (collection, MONGOC_QUERY_NONE, pipeline, NULL, NULL); while (mongoc_cursor_next (cursor, &doc)) { str = bson_as_canonical_extended_json (doc, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); } if (mongoc_cursor_error (cursor, &error)) { fprintf (stderr, \(dqCursor Failure: %s\en\(dq, error.message); } mongoc_cursor_destroy (cursor); bson_destroy (pipeline); } int main (void) { mongoc_client_t *client; mongoc_collection_t *collection; const char *uri_string = \(dqmongodb://localhost:27017/?appname=aggregation\-example\(dq; mongoc_uri_t *uri; bson_error_t error; mongoc_init (); uri = mongoc_uri_new_with_error (uri_string, &error); if (!uri) { fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, uri_string, error.message); return EXIT_FAILURE; } client = mongoc_client_new_from_uri (uri); if (!client) { return EXIT_FAILURE; } mongoc_client_set_error_api (client, 2); collection = mongoc_client_get_collection (client, \(dqtest\(dq, \(dqzipcodes\(dq); print_pipeline (collection); mongoc_uri_destroy (uri); mongoc_collection_destroy (collection); mongoc_client_destroy (client); mongoc_cleanup (); return EXIT_SUCCESS; } .EE .UNINDENT .UNINDENT .sp You should see a result like
the following: .INDENT 0.0 .INDENT 3.5 .sp .EX { \(dq_id\(dq : \(dqPA\(dq, \(dqtotal_pop\(dq : 11881643 } { \(dq_id\(dq : \(dqOH\(dq, \(dqtotal_pop\(dq : 10847115 } { \(dq_id\(dq : \(dqNY\(dq, \(dqtotal_pop\(dq : 17990455 } { \(dq_id\(dq : \(dqFL\(dq, \(dqtotal_pop\(dq : 12937284 } { \(dq_id\(dq : \(dqTX\(dq, \(dqtotal_pop\(dq : 16986510 } { \(dq_id\(dq : \(dqIL\(dq, \(dqtotal_pop\(dq : 11430472 } { \(dq_id\(dq : \(dqCA\(dq, \(dqtotal_pop\(dq : 29760021 } .EE .UNINDENT .UNINDENT .sp The above aggregation pipeline is built from two pipeline operators: \fB$group\fP and \fB$match\fP\&. .sp The \fB$group\fP pipeline operator requires an \fB_id\fP field, which specifies the grouping key; the remaining fields specify how to generate the composite value and must use one of the group aggregation functions: \fB$addToSet\fP, \fB$first\fP, \fB$last\fP, \fB$max\fP, \fB$min\fP, \fB$avg\fP, \fB$push\fP, \fB$sum\fP\&. The \fB$match\fP pipeline operator syntax is the same as the read operation query syntax. .sp The \fB$group\fP stage reads all documents and creates a separate document for each state, for example: .INDENT 0.0 .INDENT 3.5 .sp .EX { \(dq_id\(dq : \(dqWA\(dq, \(dqtotal_pop\(dq : 4866692 } .EE .UNINDENT .UNINDENT .sp The \fBtotal_pop\fP field uses the \fB$sum\fP aggregation function to sum the values of all \fBpop\fP fields in the source documents. .sp Documents created by \fB$group\fP are piped to the \fB$match\fP pipeline operator, which returns the documents whose \fBtotal_pop\fP value is greater than or equal to 10 million.
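.sp A large \fB$group\fP stage can exceed the server\(aqs in\-memory limit for aggregation. The following minimal sketch (reusing the \fBcollection\fP and \fBpipeline\fP variables from aggregation1.c) forwards the server\(aqs \fBallowDiskUse\fP aggregate option through the \fBopts\fP parameter of \fBmongoc_collection_aggregate()\fP:

```c
/* Allow the server to spill large aggregation stages to temporary files.
 * "allowDiskUse" is a server-side aggregate command option passed via opts. */
bson_t *opts = BCON_NEW ("allowDiskUse", BCON_BOOL (true));
cursor = mongoc_collection_aggregate (
   collection, MONGOC_QUERY_NONE, pipeline, opts, NULL);
bson_destroy (opts);
```

The cursor is then iterated exactly as in the example above.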
.SS Average City Population by State .sp To get the first three states with the greatest average population per city, use the following aggregation: .INDENT 0.0 .INDENT 3.5 .sp .EX pipeline = BCON_NEW (\(dqpipeline\(dq, \(dq[\(dq, \(dq{\(dq, \(dq$group\(dq, \(dq{\(dq, \(dq_id\(dq, \(dq{\(dq, \(dqstate\(dq, \(dq$state\(dq, \(dqcity\(dq, \(dq$city\(dq, \(dq}\(dq, \(dqpop\(dq, \(dq{\(dq, \(dq$sum\(dq, \(dq$pop\(dq, \(dq}\(dq, \(dq}\(dq, \(dq}\(dq, \(dq{\(dq, \(dq$group\(dq, \(dq{\(dq, \(dq_id\(dq, \(dq$_id.state\(dq, \(dqavg_city_pop\(dq, \(dq{\(dq, \(dq$avg\(dq, \(dq$pop\(dq, \(dq}\(dq, \(dq}\(dq, \(dq}\(dq, \(dq{\(dq, \(dq$sort\(dq, \(dq{\(dq, \(dqavg_city_pop\(dq, BCON_INT32 (\-1), \(dq}\(dq, \(dq}\(dq, \(dq{\(dq, \(dq$limit\(dq, BCON_INT32 (3), \(dq}\(dq, \(dq]\(dq); .EE .UNINDENT .UNINDENT .sp This aggregation pipeline produces: .INDENT 0.0 .INDENT 3.5 .sp .EX { \(dq_id\(dq : \(dqDC\(dq, \(dqavg_city_pop\(dq : 303450.0 } { \(dq_id\(dq : \(dqFL\(dq, \(dqavg_city_pop\(dq : 27942.29805615551 } { \(dq_id\(dq : \(dqCA\(dq, \(dqavg_city_pop\(dq : 27735.341099720412 } .EE .UNINDENT .UNINDENT .sp The above aggregation pipeline is built from three pipeline operators: \fB$group\fP, \fB$sort\fP, and \fB$limit\fP\&. .sp The first \fB$group\fP operator creates the following documents: .INDENT 0.0 .INDENT 3.5 .sp .EX { \(dq_id\(dq : { \(dqstate\(dq : \(dqWY\(dq, \(dqcity\(dq : \(dqSmoot\(dq }, \(dqpop\(dq : 414 } .EE .UNINDENT .UNINDENT .sp Note that the \fB$group\fP operator cannot use nested documents except in the \fB_id\fP field. .sp The second \fB$group\fP uses these documents to create the following documents: .INDENT 0.0 .INDENT 3.5 .sp .EX { \(dq_id\(dq : \(dqFL\(dq, \(dqavg_city_pop\(dq : 27942.29805615551 } .EE .UNINDENT .UNINDENT .sp These documents are sorted by the \fBavg_city_pop\fP field in descending order. Finally, the \fB$limit\fP pipeline operator returns the first 3 documents from the sorted set.
.SH "DISTINCT" AND "MAPREDUCE" .sp This document provides some practical, simple examples that demonstrate the \fBdistinct\fP and \fBmapReduce\fP commands. .SS Setup .sp First we\(aqll write some code to insert sample data: .sp doc\-common\-insert.c .INDENT 0.0 .INDENT 3.5 .sp .EX /* Don\(aqt try to compile this file on its own. It\(aqs meant to be #included by example code */ /* Insert some sample data */ bool insert_data (mongoc_collection_t *collection) { mongoc_bulk_operation_t *bulk; enum N { ndocs = 4 }; bson_t *docs[ndocs]; bson_error_t error; int i = 0; bool ret; bulk = mongoc_collection_create_bulk_operation_with_opts (collection, NULL); docs[0] = BCON_NEW (\(dqx\(dq, BCON_DOUBLE (1.0), \(dqtags\(dq, \(dq[\(dq, \(dqdog\(dq, \(dqcat\(dq, \(dq]\(dq); docs[1] = BCON_NEW (\(dqx\(dq, BCON_DOUBLE (2.0), \(dqtags\(dq, \(dq[\(dq, \(dqcat\(dq, \(dq]\(dq); docs[2] = BCON_NEW (\(dqx\(dq, BCON_DOUBLE (2.0), \(dqtags\(dq, \(dq[\(dq, \(dqmouse\(dq, \(dqcat\(dq, \(dqdog\(dq, \(dq]\(dq); docs[3] = BCON_NEW (\(dqx\(dq, BCON_DOUBLE (3.0), \(dqtags\(dq, \(dq[\(dq, \(dq]\(dq); for (i = 0; i < ndocs; i++) { mongoc_bulk_operation_insert (bulk, docs[i]); bson_destroy (docs[i]); docs[i] = NULL; } ret = mongoc_bulk_operation_execute (bulk, NULL, &error); if (!ret) { fprintf (stderr, \(dqError inserting data: %s\en\(dq, error.message); } mongoc_bulk_operation_destroy (bulk); return ret; } /* A helper which we\(aqll use a lot later on */ void print_res (const bson_t *reply) { char *str; BSON_ASSERT (reply); str = bson_as_canonical_extended_json (reply, NULL); printf (\(dq%s\en\(dq, str); bson_free (str); } .EE .UNINDENT .UNINDENT .SS \(dqdistinct\(dq command .sp This is how to use the \fBdistinct\fP command to get the distinct values of \fBx\fP that are greater than \fB1\fP: .sp distinct.c .INDENT 0.0 .INDENT 3.5 .sp .EX bool distinct (mongoc_database_t *database) { bson_t *command; bson_t reply; bson_error_t error; bool res; bson_iter_t iter; bson_iter_t array_iter; double val; 
command = BCON_NEW (\(dqdistinct\(dq, BCON_UTF8 (COLLECTION_NAME), \(dqkey\(dq, BCON_UTF8 (\(dqx\(dq), \(dqquery\(dq, \(dq{\(dq, \(dqx\(dq, \(dq{\(dq, \(dq$gt\(dq, BCON_DOUBLE (1.0), \(dq}\(dq, \(dq}\(dq); res = mongoc_database_command_simple (database, command, NULL, &reply, &error); if (!res) { fprintf (stderr, \(dqError with distinct: %s\en\(dq, error.message); goto cleanup; } /* Do something with reply (in this case iterate through the values) */ if (!(bson_iter_init_find (&iter, &reply, \(dqvalues\(dq) && BSON_ITER_HOLDS_ARRAY (&iter) && bson_iter_recurse (&iter, &array_iter))) { fprintf (stderr, \(dqCouldn\(aqt extract \e\(dqvalues\e\(dq field from response\en\(dq); goto cleanup; } while (bson_iter_next (&array_iter)) { if (BSON_ITER_HOLDS_DOUBLE (&array_iter)) { val = bson_iter_double (&array_iter); printf (\(dqNext double: %f\en\(dq, val); } } cleanup: /* cleanup */ bson_destroy (command); bson_destroy (&reply); return res; } .EE .UNINDENT .UNINDENT .SS \(dqmapReduce\(dq \- basic example .sp A simple example using the map reduce framework. It simply adds up the number of occurrences of each \(dqtag\(dq. .sp First define the \fBmap\fP and \fBreduce\fP functions: .sp constants.c .INDENT 0.0 .INDENT 3.5 .sp .EX const char *const COLLECTION_NAME = \(dqthings\(dq; /* Our map function just emits a single (key, 1) pair for each tag in the array: */ const char *const MAPPER = \(dqfunction () {\(dq \(dqthis.tags.forEach(function(z) {\(dq \(dqemit(z, 1);\(dq \(dq});\(dq \(dq}\(dq; /* The reduce function sums over all of the emitted values for a given key: */ const char *const REDUCER = \(dqfunction (key, values) {\(dq \(dqvar total = 0;\(dq \(dqfor (var i = 0; i < values.length; i++) {\(dq \(dqtotal += values[i];\(dq \(dq}\(dq \(dqreturn total;\(dq \(dq}\(dq; /* Note We can\(aqt just return values.length as the reduce function might be called iteratively on the results of other reduce steps. */ .EE .UNINDENT .UNINDENT .sp Run the \fBmapReduce\fP command. 
Use the generic command helpers (e.g. \fI\%mongoc_database_command_simple()\fP). Do not use the read command helpers (e.g. \fI\%mongoc_database_read_command_with_opts()\fP), because they are considered retryable read operations. If retryable reads are enabled, those operations will retry once on a retryable error, which is undesirable behavior for \fBmapReduce\fP\&. .sp map\-reduce\-basic.c .INDENT 0.0 .INDENT 3.5 .sp .EX bool map_reduce_basic (mongoc_database_t *database) { bson_t reply; bool res = false; bson_error_t error; mongoc_cursor_t *cursor = NULL; bool query_done = false; const char *out_collection_name = \(dqoutCollection\(dq; mongoc_collection_t *out_collection = NULL; /* Empty find query */ bson_t find_query = BSON_INITIALIZER; /* Construct the mapReduce command */ /* Other arguments can also be specified here, like \(dqquery\(dq or \(dqlimit\(dq and so on */ bson_t *const command = BCON_NEW (\(dqmapReduce\(dq, BCON_UTF8 (COLLECTION_NAME), \(dqmap\(dq, BCON_CODE (MAPPER), \(dqreduce\(dq, BCON_CODE (REDUCER), \(dqout\(dq, BCON_UTF8 (out_collection_name)); res = mongoc_database_command_simple (database, command, NULL, &reply, &error); if (!res) { fprintf (stderr, \(dqMapReduce failed: %s\en\(dq, error.message); goto cleanup; } /* Do something with the reply (it doesn\(aqt contain the mapReduce results) */ print_res (&reply); /* Now we\(aqll query outCollection to see what the results are */ out_collection = mongoc_database_get_collection (database, out_collection_name); cursor = mongoc_collection_find_with_opts (out_collection, &find_query, NULL, NULL); query_done = true; /* Do something with the results */ const bson_t *doc = NULL; while (mongoc_cursor_next (cursor, &doc)) { print_res (doc); } if (mongoc_cursor_error (cursor, &error)) { fprintf (stderr, \(dqERROR: %s\en\(dq, error.message); res = false; goto cleanup; } cleanup: /* cleanup */ if (query_done) { mongoc_cursor_destroy (cursor); mongoc_collection_destroy (out_collection); } bson_destroy (&reply); 
bson_destroy (command); return res; } .EE .UNINDENT .UNINDENT .SS \(dqmapReduce\(dq \- more complicated example .sp You must have a replica set running for this example. .sp In this example we contact a secondary in the replica set and do an \(dqinline\(dq map reduce, so the results are returned immediately: .sp map\-reduce\-advanced.c .INDENT 0.0 .INDENT 3.5 .sp .EX bool map_reduce_advanced (mongoc_database_t *database) { bson_t *command; bson_error_t error; bool res = true; mongoc_cursor_t *cursor; mongoc_read_prefs_t *read_pref; const bson_t *doc; /* Construct the mapReduce command */ /* Other arguments can also be specified here, like \(dqquery\(dq or \(dqlimit\(dq and so on */ /* Read the results inline from a secondary replica */ command = BCON_NEW (\(dqmapReduce\(dq, BCON_UTF8 (COLLECTION_NAME), \(dqmap\(dq, BCON_CODE (MAPPER), \(dqreduce\(dq, BCON_CODE (REDUCER), \(dqout\(dq, \(dq{\(dq, \(dqinline\(dq, \(dq1\(dq, \(dq}\(dq); read_pref = mongoc_read_prefs_new (MONGOC_READ_SECONDARY); cursor = mongoc_database_command (database, MONGOC_QUERY_NONE, 0, 0, 0, command, NULL, read_pref); /* Do something with the results */ while (mongoc_cursor_next (cursor, &doc)) { print_res (doc); } if (mongoc_cursor_error (cursor, &error)) { fprintf (stderr, \(dqERROR: %s\en\(dq, error.message); res = false; } mongoc_cursor_destroy (cursor); mongoc_read_prefs_destroy (read_pref); bson_destroy (command); return res; } .EE .UNINDENT .UNINDENT .SS Running the Examples .sp Here\(aqs how to run the example code: .sp basic\-aggregation.c .INDENT 0.0 .INDENT 3.5 .sp .EX /* * Copyright 2016 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the \(dqLicense\(dq); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE\-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an \(dqAS IS\(dq BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include <mongoc/mongoc.h> #include <stdio.h> #include \(dqconstants.c\(dq #include \(dq../doc\-common\-insert.c\(dq #include \(dqdistinct.c\(dq #include \(dqmap\-reduce\-basic.c\(dq #include \(dqmap\-reduce\-advanced.c\(dq int main (int argc, char *argv[]) { mongoc_database_t *database = NULL; mongoc_client_t *client = NULL; mongoc_collection_t *collection = NULL; mongoc_uri_t *uri = NULL; bson_error_t error; char *host_and_port = NULL; int exit_code = EXIT_FAILURE; if (argc != 2) { fprintf (stderr, \(dqusage: %s CONNECTION\-STRING\en\(dq, argv[0]); fprintf (stderr, \(dqthe connection string can be of the following forms:\en\(dq); fprintf (stderr, \(dqlocalhost\et\et\et\etlocal machine\en\(dq); fprintf (stderr, \(dqlocalhost:27018\et\et\et\etlocal machine on port 27018\en\(dq); fprintf (stderr, \(dqmongodb://user:pass@localhost:27017\et\(dq \(dqlocal machine on port 27017, and authenticate with username \(dq \(dquser and password pass\en\(dq); return exit_code; } mongoc_init (); if (strncmp (argv[1], \(dqmongodb://\(dq, 10) == 0) { host_and_port = bson_strdup (argv[1]); } else { host_and_port = bson_strdup_printf (\(dqmongodb://%s\(dq, argv[1]); } uri = mongoc_uri_new_with_error (host_and_port, &error); if (!uri) { fprintf (stderr, \(dqfailed to parse URI: %s\en\(dq \(dqerror message: %s\en\(dq, host_and_port, error.message); goto cleanup; } client = mongoc_client_new_from_uri (uri); if (!client) { goto cleanup; } mongoc_client_set_error_api (client, 2); database = mongoc_client_get_database (client, \(dqtest\(dq); collection = mongoc_database_get_collection (database, 
COLLECTION_NAME); printf (\(dqInserting data\en\(dq); if (!insert_data (collection)) { goto cleanup; } printf (\(dqdistinct\en\(dq); if (!distinct (database)) { goto cleanup; } printf (\(dqmap reduce\en\(dq); if (!map_reduce_basic (database)) { goto cleanup; } printf (\(dqmore complicated map reduce\en\(dq); if (!map_reduce_advanced (database)) { goto cleanup; } exit_code = EXIT_SUCCESS; cleanup: if (collection) { mongoc_collection_destroy (collection); } if (database) { mongoc_database_destroy (database); } if (client) { mongoc_client_destroy (client); } if (uri) { mongoc_uri_destroy (uri); } if (host_and_port) { bson_free (host_and_port); } mongoc_cleanup (); return exit_code; } .EE .UNINDENT .UNINDENT .sp If you want to try the advanced map reduce example with a secondary, start a replica set (instructions for how to do this can be found \fI\%here\fP). .sp Otherwise, just start an instance of MongoDB: .INDENT 0.0 .INDENT 3.5 .sp .EX $ mongod .EE .UNINDENT .UNINDENT .sp Now compile and run the example program: .INDENT 0.0 .INDENT 3.5 .sp .EX $ cd examples/basic_aggregation/ $ gcc \-Wall \-o agg\-example basic\-aggregation.c $(pkg\-config \-\-cflags \-\-libs libmongoc\-1.0) $ ./agg\-example localhost Inserting data distinct Next double: 2.000000 Next double: 3.000000 map reduce { \(dqresult\(dq : \(dqoutCollection\(dq, \(dqtimeMillis\(dq : 155, \(dqcounts\(dq : { \(dqinput\(dq : 84, \(dqemit\(dq : 126, \(dqreduce\(dq : 3, \(dqoutput\(dq : 3 }, \(dqok\(dq : 1 } { \(dq_id\(dq : \(dqcat\(dq, \(dqvalue\(dq : 63 } { \(dq_id\(dq : \(dqdog\(dq, \(dqvalue\(dq : 42 } { \(dq_id\(dq : \(dqmouse\(dq, \(dqvalue\(dq : 21 } more complicated map reduce { \(dqresults\(dq : [ { \(dq_id\(dq : \(dqcat\(dq, \(dqvalue\(dq : 63 }, { \(dq_id\(dq : \(dqdog\(dq, \(dqvalue\(dq : 42 }, { \(dq_id\(dq : \(dqmouse\(dq, \(dqvalue\(dq : 21 } ], \(dqtimeMillis\(dq : 14, \(dqcounts\(dq : { \(dqinput\(dq : 84, \(dqemit\(dq : 126, \(dqreduce\(dq : 3, \(dqoutput\(dq : 3 }, \(dqok\(dq : 1 } .EE 
.UNINDENT .UNINDENT .SH USING LIBMONGOC IN A MICROSOFT VISUAL STUDIO PROJECT .sp \fI\%Download and install libmongoc on your system\fP, then open Visual Studio, select \(dqFile→New→Project...\(dq, and create a new Win32 Console Application. [image] .sp Remember to switch the platform from 32\-bit to 64\-bit: [image] .sp Right\-click on your console application in the Solution Explorer and select \(dqProperties\(dq. Choose to edit properties for \(dqAll Configurations\(dq, expand the \(dqC/C++\(dq options and choose \(dqGeneral\(dq. Add these paths to \(dqAdditional Include Directories\(dq: .INDENT 0.0 .INDENT 3.5 .sp .EX C:\emongo\-c\-driver\einclude\elibbson\-1.0 C:\emongo\-c\-driver\einclude\elibmongoc\-1.0 .EE .UNINDENT .UNINDENT [image] .sp (If you chose a different \fB$PREFIX\fP \fI\%when you installed mongo\-c\-driver\fP, your include paths will be different.) .sp Also in the Properties dialog, expand the \(dqLinker\(dq options, choose \(dqInput\(dq, and add these libraries to \(dqAdditional Dependencies\(dq: .INDENT 0.0 .INDENT 3.5 .sp .EX C:\emongo\-c\-driver\elib\ebson\-1.0.lib C:\emongo\-c\-driver\elib\emongoc\-1.0.lib .EE .UNINDENT .UNINDENT [image] .sp Adding these libraries as dependencies provides linker symbols to build your application, but to actually run it, libbson\(aqs and libmongoc\(aqs DLLs must be in your executable path. Select \(dqDebugging\(dq in the Properties dialog, and set the \(dqEnvironment\(dq option to: .INDENT 0.0 .INDENT 3.5 .sp .EX PATH=c:/mongo\-c\-driver/bin .EE .UNINDENT .UNINDENT [image] .sp Finally, include \(dqmongoc/mongoc.h\(dq in your project\(aqs \(dqstdafx.h\(dq: .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> .EE .UNINDENT .UNINDENT .SS Static linking .sp Following the instructions above, you have dynamically linked your application to the libbson and libmongoc DLLs. This is usually the right choice. 
If you want to link statically instead, update your \(dqAdditional Dependencies\(dq list by removing \fBbson\-1.0.lib\fP and \fBmongoc\-1.0.lib\fP and replacing them with these libraries: .INDENT 0.0 .INDENT 3.5 .sp .EX C:\emongo\-c\-driver\elib\ebson\-static\-1.0.lib C:\emongo\-c\-driver\elib\emongoc\-static\-1.0.lib ws2_32.lib Secur32.lib Crypt32.lib BCrypt.lib .EE .UNINDENT .UNINDENT [image] .sp (To explain the purpose of each library: \fBbson\-static\-1.0.lib\fP and \fBmongoc\-static\-1.0.lib\fP are static archives of the driver code. The socket library \fBws2_32\fP is required by libbson, which uses the socket routine \fBgethostname\fP to help guarantee ObjectId uniqueness. The \fBBCrypt\fP library is used by libmongoc for TLS connections to MongoDB, and \fBSecur32\fP and \fBCrypt32\fP are required for enterprise authentication methods like Kerberos.) .sp Finally, define two preprocessor symbols before including \fBmongoc/mongoc.h\fP in your \fBstdafx.h\fP: .INDENT 0.0 .INDENT 3.5 .sp .EX #define BSON_STATIC #define MONGOC_STATIC #include <mongoc/mongoc.h> .EE .UNINDENT .UNINDENT .sp Making these changes to your project is only required for static linking; for most people, the dynamic\-linking instructions above are preferred. .SS Next Steps .sp Now you can build and debug applications in Visual Studio that use libbson and libmongoc. Proceed to \fI\%Making a Connection\fP in the tutorial to learn how to connect to MongoDB and perform operations. .SH MANAGE COLLECTION INDEXES .sp To create indexes on a MongoDB collection, use \fI\%mongoc_collection_create_indexes_with_opts()\fP: .INDENT 0.0 .INDENT 3.5 .sp .EX // \(gakeys\(ga represents an ascending index on field \(gax\(ga. 
bson_t *keys = BCON_NEW (\(dqx\(dq, BCON_INT32 (1)); mongoc_index_model_t *im = mongoc_index_model_new (keys, NULL /* opts */); if (mongoc_collection_create_indexes_with_opts (coll, &im, 1, NULL /* opts */, NULL /* reply */, &error)) { printf (\(dqSuccessfully created index\en\(dq); } else { bson_destroy (keys); HANDLE_ERROR (\(dqFailed to create index: %s\(dq, error.message); } bson_destroy (keys); .EE .UNINDENT .UNINDENT .sp To list indexes, use \fI\%mongoc_collection_find_indexes_with_opts()\fP: .INDENT 0.0 .INDENT 3.5 .sp .EX mongoc_cursor_t *cursor = mongoc_collection_find_indexes_with_opts (coll, NULL /* opts */); printf (\(dqListing indexes:\en\(dq); const bson_t *got; while (mongoc_cursor_next (cursor, &got)) { char *got_str = bson_as_canonical_extended_json (got, NULL); printf (\(dq %s\en\(dq, got_str); bson_free (got_str); } if (mongoc_cursor_error (cursor, &error)) { mongoc_cursor_destroy (cursor); HANDLE_ERROR (\(dqFailed to list indexes: %s\(dq, error.message); } mongoc_cursor_destroy (cursor); .EE .UNINDENT .UNINDENT .sp To drop an index, use \fI\%mongoc_collection_drop_index_with_opts()\fP\&. The index name may be obtained from the \fBkeys\fP document with \fI\%mongoc_collection_keys_to_index_string()\fP: .INDENT 0.0 .INDENT 3.5 .sp .EX bson_t *keys = BCON_NEW (\(dqx\(dq, BCON_INT32 (1)); char *index_name = mongoc_collection_keys_to_index_string (keys); if (mongoc_collection_drop_index_with_opts (coll, index_name, NULL /* opts */, &error)) { printf (\(dqSuccessfully dropped index\en\(dq); } else { bson_free (index_name); bson_destroy (keys); HANDLE_ERROR (\(dqFailed to drop index: %s\(dq, error.message); } bson_free (index_name); bson_destroy (keys); .EE .UNINDENT .UNINDENT .sp For a full example, see \fI\%example\-manage\-collection\-indexes.c\fP\&. .SS Manage Atlas Search Indexes .sp To create an Atlas Search Index, use the \fBcreateSearchIndexes\fP command: .INDENT 0.0 .INDENT 3.5 .sp .EX bson_t cmd; // Create command. 
{ char *cmd_str = bson_strdup_printf ( BSON_STR ({ \(dqcreateSearchIndexes\(dq : \(dq%s\(dq, \(dqindexes\(dq : [ {\(dqdefinition\(dq : {\(dqmappings\(dq : {\(dqdynamic\(dq : false}}, \(dqname\(dq : \(dqtest\-index\(dq} ] }), collname); ASSERT (bson_init_from_json (&cmd, cmd_str, \-1, &error)); bson_free (cmd_str); } if (!mongoc_collection_command_simple (coll, &cmd, NULL /* read_prefs */, NULL /* reply */, &error)) { bson_destroy (&cmd); HANDLE_ERROR (\(dqFailed to run createSearchIndexes: %s\(dq, error.message); } printf (\(dqCreated index: \e\(dqtest\-index\e\(dq\en\(dq); bson_destroy (&cmd); .EE .UNINDENT .UNINDENT .sp To list Atlas Search Indexes, use the \fB$listSearchIndexes\fP aggregation stage: .INDENT 0.0 .INDENT 3.5 .sp .EX const char *pipeline_str = BSON_STR ({\(dqpipeline\(dq : [ {\(dq$listSearchIndexes\(dq : {}} ]}); bson_t pipeline; ASSERT (bson_init_from_json (&pipeline, pipeline_str, \-1, &error)); mongoc_cursor_t *cursor = mongoc_collection_aggregate (coll, MONGOC_QUERY_NONE, &pipeline, NULL /* opts */, NULL /* read_prefs */); printf (\(dqListing indexes:\en\(dq); const bson_t *got; while (mongoc_cursor_next (cursor, &got)) { char *got_str = bson_as_canonical_extended_json (got, NULL); printf (\(dq %s\en\(dq, got_str); bson_free (got_str); } if (mongoc_cursor_error (cursor, &error)) { bson_destroy (&pipeline); mongoc_cursor_destroy (cursor); HANDLE_ERROR (\(dqFailed to run $listSearchIndexes: %s\(dq, error.message); } bson_destroy (&pipeline); mongoc_cursor_destroy (cursor); .EE .UNINDENT .UNINDENT .sp To update an Atlas Search Index, use the \fBupdateSearchIndex\fP command: .INDENT 0.0 .INDENT 3.5 .sp .EX bson_t cmd; // Create command. 
{ char *cmd_str = bson_strdup_printf ( BSON_STR ( {\(dqupdateSearchIndex\(dq : \(dq%s\(dq, \(dqdefinition\(dq : {\(dqmappings\(dq : {\(dqdynamic\(dq : true}}, \(dqname\(dq : \(dqtest\-index\(dq}), collname); ASSERT (bson_init_from_json (&cmd, cmd_str, \-1, &error)); bson_free (cmd_str); } if (!mongoc_collection_command_simple (coll, &cmd, NULL /* read_prefs */, NULL /* reply */, &error)) { bson_destroy (&cmd); HANDLE_ERROR (\(dqFailed to run updateSearchIndex: %s\(dq, error.message); } printf (\(dqUpdated index: \e\(dqtest\-index\e\(dq\en\(dq); bson_destroy (&cmd); .EE .UNINDENT .UNINDENT .sp To drop an Atlas Search Index, use the \fBdropSearchIndex\fP command: .INDENT 0.0 .INDENT 3.5 .sp .EX bson_t cmd; // Create command. { char *cmd_str = bson_strdup_printf (BSON_STR ({\(dqdropSearchIndex\(dq : \(dq%s\(dq, \(dqname\(dq : \(dqtest\-index\(dq}), collname); ASSERT (bson_init_from_json (&cmd, cmd_str, \-1, &error)); bson_free (cmd_str); } if (!mongoc_collection_command_simple (coll, &cmd, NULL /* read_prefs */, NULL /* reply */, &error)) { bson_destroy (&cmd); HANDLE_ERROR (\(dqFailed to run dropSearchIndex: %s\(dq, error.message); } printf (\(dqDropped index: \e\(dqtest\-index\e\(dq\en\(dq); bson_destroy (&cmd); .EE .UNINDENT .UNINDENT .sp For a full example, see \fI\%example\-manage\-search\-indexes.c\fP\&. .SH AIDS FOR DEBUGGING .SS GDB .sp This repository contains a \fB\&.gdbinit\fP file that contains helper functions to aid debugging of data structures. GDB will load this file \fI\%automatically\fP if you have added the directory which contains the \fI\&.gdbinit\fP file to GDB\(aqs \fI\%auto\-load safe\-path\fP, \fIand\fP you start GDB from the directory which holds the \fI\&.gdbinit\fP file. .sp You can see the safe\-path with \fBshow auto\-load safe\-path\fP on a GDB prompt. 
You can configure it by setting it in \fB~/.gdbinit\fP with: .INDENT 0.0 .INDENT 3.5 .sp .EX add\-auto\-load\-safe\-path /path/to/mongo\-c\-driver .EE .UNINDENT .UNINDENT .sp If you haven\(aqt added the path to your auto\-load safe\-path, or start GDB in another directory, load the file with: .INDENT 0.0 .INDENT 3.5 .sp .EX source path/to/mongo\-c\-driver/.gdbinit .EE .UNINDENT .UNINDENT .sp The \fB\&.gdbinit\fP file defines the \fBprintbson\fP function, which shows the contents of a \fBbson_t *\fP variable. If you have a local \fBbson_t\fP, then you must prefix the variable with a \fI&\fP\&. .sp An example GDB session looks like: .INDENT 0.0 .INDENT 3.5 .sp .EX (gdb) printbson bson ALLOC [0x555556cd7310 + 0] (len=475) { \(aqbool\(aq : true, \(aqint32\(aq : NumberInt(\(dq42\(dq), \(aqint64\(aq : NumberLong(\(dq3000000042\(dq), \(aqstring\(aq : \(dqStŕìñg\(dq, \(aqobjectId\(aq : ObjectID(\(dq5A1442F3122D331C3C6757E1\(dq), \(aqutcDateTime\(aq : UTCDateTime(1511277299031), \(aqarrayOfInts\(aq : [ \(aq0\(aq : NumberInt(\(dq1\(dq), \(aq1\(aq : NumberInt(\(dq2\(dq) ], \(aqembeddedDocument\(aq : { \(aqarrayOfStrings\(aq : [ \(aq0\(aq : \(dqone\(dq, \(aq1\(aq : \(dqtwo\(dq ], \(aqdouble\(aq : 2.718280, \(aqnotherDoc\(aq : { \(aqtrue\(aq : NumberInt(\(dq1\(dq), \(aqfalse\(aq : false } }, \(aqbinary\(aq : Binary(\(dq02\(dq, \(dq3031343532333637\(dq), \(aqregex\(aq : Regex(\(dq@[a\-z]+@\(dq, \(dqim\(dq), \(aqnull\(aq : null, \(aqjs\(aq : JavaScript(\(dqprint foo\(dq), \(aqjsws\(aq : JavaScript(\(dqprint foo\(dq) with scope: { \(aqf\(aq : NumberInt(\(dq42\(dq), \(aqa\(aq : [ \(aq0\(aq : 3.141593, \(aq1\(aq : 2.718282 ] }, \(aqtimestamp\(aq : Timestamp(4294967295, 4294967295), \(aqdouble\(aq : 3.141593 } .EE .UNINDENT .UNINDENT .SS LLDB .sp The mongo\-c\-driver repository contains a script \fBlldb_bson.py\fP that can be imported into an LLDB session to allow rich inspection of BSON values. 
.sp \fBNOTE:\fP .INDENT 0.0 .INDENT 3.5 The \fBlldb_bson.py\fP module requires an LLDB with Python 3.8 or newer. .UNINDENT .UNINDENT .sp To activate the script, import it from the LLDB command line: .INDENT 0.0 .INDENT 3.5 .sp .EX (lldb) command script import /path/to/mongo\-c\-driver/lldb_bson.py .EE .UNINDENT .UNINDENT .sp Upon success, the message \fBlldb_bson is ready\fP will be printed to the LLDB console. .sp The import of this script can be made automatic by adding the command to an \fB\&.lldbinit\fP file. For example, create a file \fB~/.lldbinit\fP containing: .INDENT 0.0 .INDENT 3.5 .sp .EX command script import /path/to/mongo\-c\-driver/lldb_bson.py .EE .UNINDENT .UNINDENT .sp The docstring at the top of the \fBlldb_bson.py\fP file contains more information on the capabilities of the module. .SS Debug assertions .sp To enable runtime debug assertions, configure with \fB\-DENABLE_DEBUG_ASSERTIONS=ON\fP\&. .SH IN-USE ENCRYPTION .sp In\-Use Encryption consists of two features: .SS Client\-Side Field Level Encryption .sp New in MongoDB 4.2, Client\-Side Field Level Encryption (also referred to as CSFLE) allows administrators and developers to encrypt specific data fields in addition to other MongoDB encryption features. .sp With CSFLE, developers can encrypt fields client side without any server\-side configuration or directives. CSFLE supports workloads where applications must guarantee that unauthorized parties, including server administrators, cannot read the encrypted data. .sp Automatic encryption, where sensitive fields in commands are encrypted automatically, requires an Enterprise\-only dependency for Query Analysis. See \fI\%In\-Use Encryption\fP for more information. 
.sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 .nf The MongoDB Manual for \fI\%Client\-Side Field Level Encryption\fP .fi .sp .UNINDENT .UNINDENT .SS Automatic Client\-Side Field Level Encryption .sp Automatic encryption is enabled by calling \fI\%mongoc_client_enable_auto_encryption()\fP on a \fI\%mongoc_client_t\fP\&. The following examples show how to set up automatic encryption using \fI\%mongoc_client_encryption_t\fP to create a new encryption data key. .sp \fBNOTE:\fP .INDENT 0.0 .INDENT 3.5 Automatic encryption requires MongoDB 4.2 enterprise or a MongoDB 4.2 Atlas cluster. The community version of the server supports automatic decryption as well as \fI\%Explicit Encryption\fP\&. .UNINDENT .UNINDENT .SS Providing Local Automatic Encryption Rules .sp The following example shows how to specify automatic encryption rules using a schema map set with \fI\%mongoc_auto_encryption_opts_set_schema_map()\fP\&. The automatic encryption rules are expressed using a strict subset of the JSON Schema syntax. .sp Supplying a schema map provides more security than relying on JSON Schemas obtained from the server. It protects against a malicious server advertising a false JSON Schema, which could trick the client into sending unencrypted data that should be encrypted. .sp JSON Schemas supplied in the schema map only apply to configuring automatic encryption. Other validation rules in the JSON schema will not be enforced by the driver and will result in an error: .sp client\-side\-encryption\-schema\-map.c .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> #include <stdio.h> #include <stdlib.h> #include \(dqclient\-side\-encryption\-helpers.h\(dq /* Helper method to create a new data key in the key vault and a schema to use that * key, and write the schema to a file for later use. 
*/ static bool create_schema_file (bson_t *kms_providers, const char *keyvault_db, const char *keyvault_coll, mongoc_client_t *keyvault_client, bson_error_t *error) { mongoc_client_encryption_t *client_encryption = NULL; mongoc_client_encryption_opts_t *client_encryption_opts = NULL; mongoc_client_encryption_datakey_opts_t *datakey_opts = NULL; bson_value_t datakey_id = {0}; char *keyaltnames[] = {\(dqmongoc_encryption_example_1\(dq}; bson_t *schema = NULL; char *schema_string = NULL; size_t schema_string_len; FILE *outfile = NULL; bool ret = false; client_encryption_opts = mongoc_client_encryption_opts_new (); mongoc_client_encryption_opts_set_kms_providers (client_encryption_opts, kms_providers); mongoc_client_encryption_opts_set_keyvault_namespace (client_encryption_opts, keyvault_db, keyvault_coll); mongoc_client_encryption_opts_set_keyvault_client (client_encryption_opts, keyvault_client); client_encryption = mongoc_client_encryption_new (client_encryption_opts, error); if (!client_encryption) { goto fail; } /* Create a new data key and json schema for the encryptedField. * https://dochub.mongodb.org/core/client\-side\-field\-level\-encryption\-automatic\-encryption\-rules */ datakey_opts = mongoc_client_encryption_datakey_opts_new (); mongoc_client_encryption_datakey_opts_set_keyaltnames (datakey_opts, keyaltnames, 1); if (!mongoc_client_encryption_create_datakey (client_encryption, \(dqlocal\(dq, datakey_opts, &datakey_id, error)) { goto fail; } /* Create a schema describing that \(dqencryptedField\(dq is a string encrypted * with the newly created data key using deterministic encryption. 
*/ schema = BCON_NEW ( \(dqproperties\(dq, \(dq{\(dq, \(dqencryptedField\(dq, \(dq{\(dq, \(dqencrypt\(dq, \(dq{\(dq, \(dqkeyId\(dq, \(dq[\(dq, BCON_BIN (datakey_id.value.v_binary.subtype, datakey_id.value.v_binary.data, datakey_id.value.v_binary.data_len), \(dq]\(dq, \(dqbsonType\(dq, \(dqstring\(dq, \(dqalgorithm\(dq, MONGOC_AEAD_AES_256_CBC_HMAC_SHA_512_DETERMINISTIC, \(dq}\(dq, \(dq}\(dq, \(dq}\(dq, \(dqbsonType\(dq, \(dqobject\(dq); /* Use canonical JSON so that other drivers and tools will be * able to parse the MongoDB extended JSON file. */ schema_string = bson_as_canonical_extended_json (schema, &schema_string_len); outfile = fopen (\(dqjsonSchema.json\(dq, \(dqw\(dq); if (0 == fwrite (schema_string, sizeof (char), schema_string_len, outfile)) { fprintf (stderr, \(dqfailed to write to file\en\(dq); goto fail; } ret = true; fail: mongoc_client_encryption_destroy (client_encryption); mongoc_client_encryption_datakey_opts_destroy (datakey_opts); mongoc_client_encryption_opts_destroy (client_encryption_opts); bson_free (schema_string); bson_destroy (schema); bson_value_destroy (&datakey_id); if (outfile) { fclose (outfile); } return ret; } /* This example demonstrates how to use automatic encryption with a client\-side * schema map using the enterprise version of MongoDB */ int main (void) { /* The collection used to store the encryption data keys. */ #define KEYVAULT_DB \(dqencryption\(dq #define KEYVAULT_COLL \(dq__libmongocTestKeyVault\(dq /* The collection used to store the encrypted documents in this example. 
*/ #define ENCRYPTED_DB \(dqtest\(dq #define ENCRYPTED_COLL \(dqcoll\(dq int exit_status = EXIT_FAILURE; bool ret; uint8_t *local_masterkey = NULL; uint32_t local_masterkey_len; bson_t *kms_providers = NULL; bson_error_t error = {0}; bson_t *index_keys = NULL; bson_t *index_opts = NULL; mongoc_index_model_t *index_model = NULL; bson_json_reader_t *reader = NULL; bson_t schema = BSON_INITIALIZER; bson_t *schema_map = NULL; /* The MongoClient used to access the key vault (keyvault_namespace). */ mongoc_client_t *keyvault_client = NULL; mongoc_collection_t *keyvault_coll = NULL; mongoc_auto_encryption_opts_t *auto_encryption_opts = NULL; mongoc_client_t *client = NULL; mongoc_collection_t *coll = NULL; bson_t *to_insert = NULL; mongoc_client_t *unencrypted_client = NULL; mongoc_collection_t *unencrypted_coll = NULL; mongoc_init (); /* Configure the master key. This must be the same master key that was used * to create the encryption key. */ local_masterkey = hex_to_bin (getenv (\(dqLOCAL_MASTERKEY\(dq), &local_masterkey_len); if (!local_masterkey || local_masterkey_len != 96) { fprintf (stderr, \(dqSpecify LOCAL_MASTERKEY environment variable as a \(dq \(dqsecure random 96 byte hex value.\en\(dq); goto fail; } kms_providers = BCON_NEW (\(dqlocal\(dq, \(dq{\(dq, \(dqkey\(dq, BCON_BIN (0, local_masterkey, local_masterkey_len), \(dq}\(dq); /* Set up the key vault for this example. */ keyvault_client = mongoc_client_new (\(dqmongodb://localhost/?appname=client\-side\-encryption\-keyvault\(dq); BSON_ASSERT (keyvault_client); keyvault_coll = mongoc_client_get_collection (keyvault_client, KEYVAULT_DB, KEYVAULT_COLL); mongoc_collection_drop (keyvault_coll, NULL); /* Create a unique index to ensure that two data keys cannot share the same * keyAltName. This is recommended practice for the key vault. 
*/ index_keys = BCON_NEW (\(dqkeyAltNames\(dq, BCON_INT32 (1)); index_opts = BCON_NEW (\(dqunique\(dq, BCON_BOOL (true), \(dqpartialFilterExpression\(dq, \(dq{\(dq, \(dqkeyAltNames\(dq, \(dq{\(dq, \(dq$exists\(dq, BCON_BOOL (true), \(dq}\(dq, \(dq}\(dq); index_model = mongoc_index_model_new (index_keys, index_opts); ret = mongoc_collection_create_indexes_with_opts ( keyvault_coll, &index_model, 1, NULL /* opts */, NULL /* reply */, &error); if (!ret) { goto fail; } /* Create a new data key and a schema using it for encryption. Save the * schema to the file jsonSchema.json */ ret = create_schema_file (kms_providers, KEYVAULT_DB, KEYVAULT_COLL, keyvault_client, &error); if (!ret) { goto fail; } /* Load the JSON Schema and construct the local schema_map option. */ reader = bson_json_reader_new_from_file (\(dqjsonSchema.json\(dq, &error); if (!reader) { goto fail; } bson_json_reader_read (reader, &schema, &error); /* Construct the schema map, mapping the namespace of the collection to the * schema describing encryption. */ schema_map = BCON_NEW (ENCRYPTED_DB \(dq.\(dq ENCRYPTED_COLL, BCON_DOCUMENT (&schema)); auto_encryption_opts = mongoc_auto_encryption_opts_new (); mongoc_auto_encryption_opts_set_keyvault_client (auto_encryption_opts, keyvault_client); mongoc_auto_encryption_opts_set_keyvault_namespace (auto_encryption_opts, KEYVAULT_DB, KEYVAULT_COLL); mongoc_auto_encryption_opts_set_kms_providers (auto_encryption_opts, kms_providers); mongoc_auto_encryption_opts_set_schema_map (auto_encryption_opts, schema_map); client = mongoc_client_new (\(dqmongodb://localhost/?appname=client\-side\-encryption\(dq); BSON_ASSERT (client); /* Enable automatic encryption. It will determine that encryption is * necessary from the schema map instead of relying on the server to provide * a schema. 
*/ ret = mongoc_client_enable_auto_encryption (client, auto_encryption_opts, &error); if (!ret) { goto fail; } coll = mongoc_client_get_collection (client, ENCRYPTED_DB, ENCRYPTED_COLL); /* Clear old data */ mongoc_collection_drop (coll, NULL); to_insert = BCON_NEW (\(dqencryptedField\(dq, \(dq123456789\(dq); ret = mongoc_collection_insert_one (coll, to_insert, NULL /* opts */, NULL /* reply */, &error); if (!ret) { goto fail; } printf (\(dqdecrypted document: \(dq); if (!print_one_document (coll, &error)) { goto fail; } printf (\(dq\en\(dq); unencrypted_client = mongoc_client_new (\(dqmongodb://localhost/?appname=client\-side\-encryption\-unencrypted\(dq); BSON_ASSERT (unencrypted_client); unencrypted_coll = mongoc_client_get_collection (unencrypted_client, ENCRYPTED_DB, ENCRYPTED_COLL); printf (\(dqencrypted document: \(dq); if (!print_one_document (unencrypted_coll, &error)) { goto fail; } printf (\(dq\en\(dq); exit_status = EXIT_SUCCESS; fail: if (error.code) { fprintf (stderr, \(dqerror: %s\en\(dq, error.message); } bson_free (local_masterkey); bson_destroy (kms_providers); mongoc_collection_destroy (keyvault_coll); mongoc_index_model_destroy (index_model); bson_destroy (index_opts); bson_destroy (index_keys); bson_json_reader_destroy (reader); mongoc_auto_encryption_opts_destroy (auto_encryption_opts); mongoc_collection_destroy (coll); mongoc_client_destroy (client); bson_destroy (to_insert); mongoc_collection_destroy (unencrypted_coll); mongoc_client_destroy (unencrypted_client); mongoc_client_destroy (keyvault_client); bson_destroy (&schema); bson_destroy (schema_map); mongoc_cleanup (); return exit_status; } .EE .UNINDENT .UNINDENT .SS Server\-Side Field Level Encryption Enforcement .sp The MongoDB 4.2 server supports using schema validation to enforce encryption of specific fields in a collection. This schema validation will prevent an application from inserting unencrypted values for any fields marked with the \(dqencrypt\(dq JSON schema keyword. 
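.sp
For reference, the validator installed by the example below has roughly the following shape (a sketch, not literal output; the \(dqkeyId\(dq array holds the binary UUID of the data key created at runtime):
.INDENT 0.0
.INDENT 3.5
.sp
.EX
{
   \(dq$jsonSchema\(dq: {
      \(dqbsonType\(dq: \(dqobject\(dq,
      \(dqproperties\(dq: {
         \(dqencryptedField\(dq: {
            \(dqencrypt\(dq: {
               \(dqkeyId\(dq: [<binary UUID of the data key>],
               \(dqbsonType\(dq: \(dqstring\(dq,
               \(dqalgorithm\(dq: \(dqAEAD_AES_256_CBC_HMAC_SHA_512\-Deterministic\(dq
            }
         }
      }
   }
}
.EE
.UNINDENT
.UNINDENT
.sp
With this validator in place, inserting a plaintext value for \(dqencryptedField\(dq through a client without automatic encryption fails server\-side document validation.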
.sp The following example shows how to set up automatic encryption using \fI\%mongoc_client_encryption_t\fP to create a new encryption data key and create a collection with the necessary JSON Schema: .sp client\-side\-encryption\-server\-schema.c .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> #include <stdio.h> #include <stdlib.h> #include \(dqclient\-side\-encryption\-helpers.h\(dq /* Helper method to create and return a JSON schema to use for encryption. The caller will use the returned schema for server\-side encryption validation. */ static bson_t * create_schema (bson_t *kms_providers, const char *keyvault_db, const char *keyvault_coll, mongoc_client_t *keyvault_client, bson_error_t *error) { mongoc_client_encryption_t *client_encryption = NULL; mongoc_client_encryption_opts_t *client_encryption_opts = NULL; mongoc_client_encryption_datakey_opts_t *datakey_opts = NULL; bson_value_t datakey_id = {0}; char *keyaltnames[] = {\(dqmongoc_encryption_example_2\(dq}; bson_t *schema = NULL; client_encryption_opts = mongoc_client_encryption_opts_new (); mongoc_client_encryption_opts_set_kms_providers (client_encryption_opts, kms_providers); mongoc_client_encryption_opts_set_keyvault_namespace (client_encryption_opts, keyvault_db, keyvault_coll); mongoc_client_encryption_opts_set_keyvault_client (client_encryption_opts, keyvault_client); client_encryption = mongoc_client_encryption_new (client_encryption_opts, error); if (!client_encryption) { goto fail; } /* Create a new data key and JSON schema for the encryptedField.
* https://dochub.mongodb.org/core/client\-side\-field\-level\-encryption\-automatic\-encryption\-rules */ datakey_opts = mongoc_client_encryption_datakey_opts_new (); mongoc_client_encryption_datakey_opts_set_keyaltnames (datakey_opts, keyaltnames, 1); if (!mongoc_client_encryption_create_datakey (client_encryption, \(dqlocal\(dq, datakey_opts, &datakey_id, error)) { goto fail; } /* Create a schema describing that \(dqencryptedField\(dq is a string encrypted * with the newly created data key using deterministic encryption. */ schema = BCON_NEW ( \(dqproperties\(dq, \(dq{\(dq, \(dqencryptedField\(dq, \(dq{\(dq, \(dqencrypt\(dq, \(dq{\(dq, \(dqkeyId\(dq, \(dq[\(dq, BCON_BIN (datakey_id.value.v_binary.subtype, datakey_id.value.v_binary.data, datakey_id.value.v_binary.data_len), \(dq]\(dq, \(dqbsonType\(dq, \(dqstring\(dq, \(dqalgorithm\(dq, MONGOC_AEAD_AES_256_CBC_HMAC_SHA_512_DETERMINISTIC, \(dq}\(dq, \(dq}\(dq, \(dq}\(dq, \(dqbsonType\(dq, \(dqobject\(dq); fail: mongoc_client_encryption_destroy (client_encryption); mongoc_client_encryption_datakey_opts_destroy (datakey_opts); mongoc_client_encryption_opts_destroy (client_encryption_opts); bson_value_destroy (&datakey_id); return schema; } /* This example demonstrates how to use automatic encryption with a server\-side * schema using the enterprise version of MongoDB */ int main (void) { /* The collection used to store the encryption data keys. */ #define KEYVAULT_DB \(dqencryption\(dq #define KEYVAULT_COLL \(dq__libmongocTestKeyVault\(dq /* The collection used to store the encrypted documents in this example. 
*/ #define ENCRYPTED_DB \(dqtest\(dq #define ENCRYPTED_COLL \(dqcoll\(dq int exit_status = EXIT_FAILURE; bool ret; uint8_t *local_masterkey = NULL; uint32_t local_masterkey_len; bson_t *kms_providers = NULL; bson_error_t error = {0}; bson_t *index_keys = NULL; bson_t *index_opts = NULL; mongoc_index_model_t *index_model = NULL; bson_json_reader_t *reader = NULL; bson_t *schema = NULL; /* The MongoClient used to access the key vault (keyvault_namespace). */ mongoc_client_t *keyvault_client = NULL; mongoc_collection_t *keyvault_coll = NULL; mongoc_auto_encryption_opts_t *auto_encryption_opts = NULL; mongoc_client_t *client = NULL; mongoc_collection_t *coll = NULL; bson_t *to_insert = NULL; mongoc_client_t *unencrypted_client = NULL; mongoc_collection_t *unencrypted_coll = NULL; bson_t *create_cmd = NULL; bson_t *create_cmd_opts = NULL; mongoc_write_concern_t *wc = NULL; mongoc_init (); /* Configure the master key. This must be the same master key that was used * to create * the encryption key. */ local_masterkey = hex_to_bin (getenv (\(dqLOCAL_MASTERKEY\(dq), &local_masterkey_len); if (!local_masterkey || local_masterkey_len != 96) { fprintf (stderr, \(dqSpecify LOCAL_MASTERKEY environment variable as a \(dq \(dqsecure random 96 byte hex value.\en\(dq); goto fail; } kms_providers = BCON_NEW (\(dqlocal\(dq, \(dq{\(dq, \(dqkey\(dq, BCON_BIN (0, local_masterkey, local_masterkey_len), \(dq}\(dq); /* Set up the key vault for this example. */ keyvault_client = mongoc_client_new (\(dqmongodb://localhost/?appname=client\-side\-encryption\-keyvault\(dq); BSON_ASSERT (keyvault_client); keyvault_coll = mongoc_client_get_collection (keyvault_client, KEYVAULT_DB, KEYVAULT_COLL); mongoc_collection_drop (keyvault_coll, NULL); /* Create a unique index to ensure that two data keys cannot share the same * keyAltName. This is recommended practice for the key vault. 
*/ index_keys = BCON_NEW (\(dqkeyAltNames\(dq, BCON_INT32 (1)); index_opts = BCON_NEW (\(dqunique\(dq, BCON_BOOL (true), \(dqpartialFilterExpression\(dq, \(dq{\(dq, \(dqkeyAltNames\(dq, \(dq{\(dq, \(dq$exists\(dq, BCON_BOOL (true), \(dq}\(dq, \(dq}\(dq); index_model = mongoc_index_model_new (index_keys, index_opts); ret = mongoc_collection_create_indexes_with_opts ( keyvault_coll, &index_model, 1, NULL /* opts */, NULL /* reply */, &error); if (!ret) { goto fail; } auto_encryption_opts = mongoc_auto_encryption_opts_new (); mongoc_auto_encryption_opts_set_keyvault_client (auto_encryption_opts, keyvault_client); mongoc_auto_encryption_opts_set_keyvault_namespace (auto_encryption_opts, KEYVAULT_DB, KEYVAULT_COLL); mongoc_auto_encryption_opts_set_kms_providers (auto_encryption_opts, kms_providers); schema = create_schema (kms_providers, KEYVAULT_DB, KEYVAULT_COLL, keyvault_client, &error); if (!schema) { goto fail; } client = mongoc_client_new (\(dqmongodb://localhost/?appname=client\-side\-encryption\(dq); BSON_ASSERT (client); ret = mongoc_client_enable_auto_encryption (client, auto_encryption_opts, &error); if (!ret) { goto fail; } coll = mongoc_client_get_collection (client, ENCRYPTED_DB, ENCRYPTED_COLL); /* Clear old data */ mongoc_collection_drop (coll, NULL); /* Create the collection with the encryption JSON Schema. 
*/ create_cmd = BCON_NEW (\(dqcreate\(dq, ENCRYPTED_COLL, \(dqvalidator\(dq, \(dq{\(dq, \(dq$jsonSchema\(dq, BCON_DOCUMENT (schema), \(dq}\(dq); wc = mongoc_write_concern_new (); mongoc_write_concern_set_wmajority (wc, 0); create_cmd_opts = bson_new (); mongoc_write_concern_append (wc, create_cmd_opts); ret = mongoc_client_command_with_opts ( client, ENCRYPTED_DB, create_cmd, NULL /* read prefs */, create_cmd_opts, NULL /* reply */, &error); if (!ret) { goto fail; } to_insert = BCON_NEW (\(dqencryptedField\(dq, \(dq123456789\(dq); ret = mongoc_collection_insert_one (coll, to_insert, NULL /* opts */, NULL /* reply */, &error); if (!ret) { goto fail; } printf (\(dqdecrypted document: \(dq); if (!print_one_document (coll, &error)) { goto fail; } printf (\(dq\en\(dq); unencrypted_client = mongoc_client_new (\(dqmongodb://localhost/?appname=client\-side\-encryption\-unencrypted\(dq); BSON_ASSERT (unencrypted_client); unencrypted_coll = mongoc_client_get_collection (unencrypted_client, ENCRYPTED_DB, ENCRYPTED_COLL); printf (\(dqencrypted document: \(dq); if (!print_one_document (unencrypted_coll, &error)) { goto fail; } printf (\(dq\en\(dq); /* Expect a server\-side error if inserting with the unencrypted collection. 
*/ ret = mongoc_collection_insert_one (unencrypted_coll, to_insert, NULL /* opts */, NULL /* reply */, &error); if (!ret) { printf (\(dqinsert with unencrypted collection failed: %s\en\(dq, error.message); memset (&error, 0, sizeof (error)); } exit_status = EXIT_SUCCESS; fail: if (error.code) { fprintf (stderr, \(dqerror: %s\en\(dq, error.message); } bson_free (local_masterkey); bson_destroy (kms_providers); mongoc_collection_destroy (keyvault_coll); mongoc_index_model_destroy (index_model); bson_destroy (index_opts); bson_destroy (index_keys); bson_json_reader_destroy (reader); mongoc_auto_encryption_opts_destroy (auto_encryption_opts); mongoc_collection_destroy (coll); mongoc_client_destroy (client); bson_destroy (to_insert); mongoc_collection_destroy (unencrypted_coll); mongoc_client_destroy (unencrypted_client); mongoc_client_destroy (keyvault_client); bson_destroy (schema); bson_destroy (create_cmd); bson_destroy (create_cmd_opts); mongoc_write_concern_destroy (wc); mongoc_cleanup (); return exit_status; } .EE .UNINDENT .UNINDENT .SS Explicit Encryption .sp Explicit encryption is a MongoDB community feature and does not use \fI\%Query Analysis\fP (\fBmongocryptd\fP or \fBcrypt_shared\fP). Explicit encryption is provided by the \fI\%mongoc_client_encryption_t\fP class, for example: .sp client\-side\-encryption\-explicit.c .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> #include <stdio.h> #include <stdlib.h> #include \(dqclient\-side\-encryption\-helpers.h\(dq /* This example demonstrates how to use explicit encryption and decryption using * the community version of MongoDB */ int main (void) { /* The collection used to store the encryption data keys. */ #define KEYVAULT_DB \(dqencryption\(dq #define KEYVAULT_COLL \(dq__libmongocTestKeyVault\(dq /* The collection used to store the encrypted documents in this example.
*/ #define ENCRYPTED_DB \(dqtest\(dq #define ENCRYPTED_COLL \(dqcoll\(dq int exit_status = EXIT_FAILURE; bool ret; uint8_t *local_masterkey = NULL; uint32_t local_masterkey_len; bson_t *kms_providers = NULL; bson_error_t error = {0}; bson_t *index_keys = NULL; bson_t *index_opts = NULL; mongoc_index_model_t *index_model = NULL; bson_t *schema = NULL; mongoc_client_t *client = NULL; mongoc_collection_t *coll = NULL; mongoc_collection_t *keyvault_coll = NULL; bson_t *to_insert = NULL; bson_t *create_cmd = NULL; bson_t *create_cmd_opts = NULL; mongoc_write_concern_t *wc = NULL; mongoc_client_encryption_t *client_encryption = NULL; mongoc_client_encryption_opts_t *client_encryption_opts = NULL; mongoc_client_encryption_datakey_opts_t *datakey_opts = NULL; char *keyaltnames[] = {\(dqmongoc_encryption_example_3\(dq}; bson_value_t datakey_id = {0}; bson_value_t encrypted_field = {0}; bson_value_t to_encrypt = {0}; mongoc_client_encryption_encrypt_opts_t *encrypt_opts = NULL; bson_value_t decrypted = {0}; mongoc_init (); /* Configure the master key. This must be the same master key that was used * to create the encryption key. */ local_masterkey = hex_to_bin (getenv (\(dqLOCAL_MASTERKEY\(dq), &local_masterkey_len); if (!local_masterkey || local_masterkey_len != 96) { fprintf (stderr, \(dqSpecify LOCAL_MASTERKEY environment variable as a \(dq \(dqsecure random 96 byte hex value.\en\(dq); goto fail; } kms_providers = BCON_NEW (\(dqlocal\(dq, \(dq{\(dq, \(dqkey\(dq, BCON_BIN (0, local_masterkey, local_masterkey_len), \(dq}\(dq); /* The mongoc_client_t used to read/write application data. */ client = mongoc_client_new (\(dqmongodb://localhost/?appname=client\-side\-encryption\(dq); coll = mongoc_client_get_collection (client, ENCRYPTED_DB, ENCRYPTED_COLL); /* Clear old data */ mongoc_collection_drop (coll, NULL); /* Set up the key vault for this example. 
*/ keyvault_coll = mongoc_client_get_collection (client, KEYVAULT_DB, KEYVAULT_COLL); mongoc_collection_drop (keyvault_coll, NULL); /* Create a unique index to ensure that two data keys cannot share the same * keyAltName. This is recommended practice for the key vault. */ index_keys = BCON_NEW (\(dqkeyAltNames\(dq, BCON_INT32 (1)); index_opts = BCON_NEW (\(dqunique\(dq, BCON_BOOL (true), \(dqpartialFilterExpression\(dq, \(dq{\(dq, \(dqkeyAltNames\(dq, \(dq{\(dq, \(dq$exists\(dq, BCON_BOOL (true), \(dq}\(dq, \(dq}\(dq); index_model = mongoc_index_model_new (index_keys, index_opts); ret = mongoc_collection_create_indexes_with_opts ( keyvault_coll, &index_model, 1, NULL /* opts */, NULL /* reply */, &error); if (!ret) { goto fail; } client_encryption_opts = mongoc_client_encryption_opts_new (); mongoc_client_encryption_opts_set_kms_providers (client_encryption_opts, kms_providers); mongoc_client_encryption_opts_set_keyvault_namespace (client_encryption_opts, KEYVAULT_DB, KEYVAULT_COLL); /* Set a mongoc_client_t to use for reading/writing to the key vault. This * can be the same mongoc_client_t used by the main application. */ mongoc_client_encryption_opts_set_keyvault_client (client_encryption_opts, client); client_encryption = mongoc_client_encryption_new (client_encryption_opts, &error); if (!client_encryption) { goto fail; } /* Create a new data key for the encryptedField. 
* https://dochub.mongodb.org/core/client\-side\-field\-level\-encryption\-automatic\-encryption\-rules */ datakey_opts = mongoc_client_encryption_datakey_opts_new (); mongoc_client_encryption_datakey_opts_set_keyaltnames (datakey_opts, keyaltnames, 1); if (!mongoc_client_encryption_create_datakey (client_encryption, \(dqlocal\(dq, datakey_opts, &datakey_id, &error)) { goto fail; } /* Explicitly encrypt a field */ encrypt_opts = mongoc_client_encryption_encrypt_opts_new (); mongoc_client_encryption_encrypt_opts_set_algorithm (encrypt_opts, MONGOC_AEAD_AES_256_CBC_HMAC_SHA_512_DETERMINISTIC); mongoc_client_encryption_encrypt_opts_set_keyid (encrypt_opts, &datakey_id); to_encrypt.value_type = BSON_TYPE_UTF8; to_encrypt.value.v_utf8.str = \(dq123456789\(dq; const size_t len = strlen (to_encrypt.value.v_utf8.str); BSON_ASSERT (bson_in_range_unsigned (uint32_t, len)); to_encrypt.value.v_utf8.len = (uint32_t) len; ret = mongoc_client_encryption_encrypt (client_encryption, &to_encrypt, encrypt_opts, &encrypted_field, &error); if (!ret) { goto fail; } to_insert = bson_new (); BSON_APPEND_VALUE (to_insert, \(dqencryptedField\(dq, &encrypted_field); ret = mongoc_collection_insert_one (coll, to_insert, NULL /* opts */, NULL /* reply */, &error); if (!ret) { goto fail; } printf (\(dqencrypted document: \(dq); if (!print_one_document (coll, &error)) { goto fail; } printf (\(dq\en\(dq); /* Explicitly decrypt a field */ ret = mongoc_client_encryption_decrypt (client_encryption, &encrypted_field, &decrypted, &error); if (!ret) { goto fail; } printf (\(dqdecrypted value: %s\en\(dq, decrypted.value.v_utf8.str); exit_status = EXIT_SUCCESS; fail: if (error.code) { fprintf (stderr, \(dqerror: %s\en\(dq, error.message); } bson_free (local_masterkey); bson_destroy (kms_providers); mongoc_collection_destroy (keyvault_coll); mongoc_index_model_destroy (index_model); bson_destroy (index_opts); bson_destroy (index_keys); mongoc_collection_destroy (coll); mongoc_client_destroy (client); 
bson_destroy (to_insert); bson_destroy (schema); bson_destroy (create_cmd); bson_destroy (create_cmd_opts); mongoc_write_concern_destroy (wc); mongoc_client_encryption_destroy (client_encryption); mongoc_client_encryption_datakey_opts_destroy (datakey_opts); mongoc_client_encryption_opts_destroy (client_encryption_opts); bson_value_destroy (&encrypted_field); mongoc_client_encryption_encrypt_opts_destroy (encrypt_opts); bson_value_destroy (&decrypted); bson_value_destroy (&datakey_id); mongoc_cleanup (); return exit_status; } .EE .UNINDENT .UNINDENT .SS Explicit Encryption with Automatic Decryption .sp Although automatic encryption requires MongoDB 4.2 Enterprise or a MongoDB 4.2 Atlas cluster, automatic decryption is supported for all users. To configure automatic decryption without automatic encryption, set bypass auto encryption to \fBtrue\fP in \fI\%mongoc_auto_encryption_opts_t\fP via \fI\%mongoc_auto_encryption_opts_set_bypass_auto_encryption()\fP: .sp client\-side\-encryption\-auto\-decryption.c .INDENT 0.0 .INDENT 3.5 .sp .EX #include <mongoc/mongoc.h> #include <stdio.h> #include <stdlib.h> #include \(dqclient\-side\-encryption\-helpers.h\(dq /* This example demonstrates how to set up automatic decryption without * automatic encryption using the community version of MongoDB */ int main (void) { /* The collection used to store the encryption data keys. */ #define KEYVAULT_DB \(dqencryption\(dq #define KEYVAULT_COLL \(dq__libmongocTestKeyVault\(dq /* The collection used to store the encrypted documents in this example.
*/ #define ENCRYPTED_DB \(dqtest\(dq #define ENCRYPTED_COLL \(dqcoll\(dq int exit_status = EXIT_FAILURE; bool ret; uint8_t *local_masterkey = NULL; uint32_t local_masterkey_len; bson_t *kms_providers = NULL; bson_error_t error = {0}; bson_t *index_keys = NULL; bson_t *index_opts = NULL; mongoc_index_model_t *index_model = NULL; bson_t *schema = NULL; mongoc_client_t *client = NULL; mongoc_collection_t *coll = NULL; mongoc_collection_t *keyvault_coll = NULL; bson_t *to_insert = NULL; bson_t *create_cmd = NULL; bson_t *create_cmd_opts = NULL; mongoc_write_concern_t *wc = NULL; mongoc_client_encryption_t *client_encryption = NULL; mongoc_client_encryption_opts_t *client_encryption_opts = NULL; mongoc_client_encryption_datakey_opts_t *datakey_opts = NULL; char *keyaltnames[] = {\(dqmongoc_encryption_example_4\(dq}; bson_value_t datakey_id = {0}; bson_value_t encrypted_field = {0}; bson_value_t to_encrypt = {0}; mongoc_client_encryption_encrypt_opts_t *encrypt_opts = NULL; bson_value_t decrypted = {0}; mongoc_auto_encryption_opts_t *auto_encryption_opts = NULL; mongoc_client_t *unencrypted_client = NULL; mongoc_collection_t *unencrypted_coll = NULL; mongoc_init (); /* Configure the master key. This must be the same master key that was used * to create the encryption key. 
*/ local_masterkey = hex_to_bin (getenv (\(dqLOCAL_MASTERKEY\(dq), &local_masterkey_len); if (!local_masterkey || local_masterkey_len != 96) { fprintf (stderr, \(dqSpecify LOCAL_MASTERKEY environment variable as a \(dq \(dqsecure random 96 byte hex value.\en\(dq); goto fail; } kms_providers = BCON_NEW (\(dqlocal\(dq, \(dq{\(dq, \(dqkey\(dq, BCON_BIN (0, local_masterkey, local_masterkey_len), \(dq}\(dq); client = mongoc_client_new (\(dqmongodb://localhost/?appname=client\-side\-encryption\(dq); auto_encryption_opts = mongoc_auto_encryption_opts_new (); mongoc_auto_encryption_opts_set_keyvault_namespace (auto_encryption_opts, KEYVAULT_DB, KEYVAULT_COLL); mongoc_auto_encryption_opts_set_kms_providers (auto_encryption_opts, kms_providers); /* Setting bypass_auto_encryption to true disables automatic encryption but * keeps the automatic decryption behavior. bypass_auto_encryption will also * disable spawning mongocryptd */ mongoc_auto_encryption_opts_set_bypass_auto_encryption (auto_encryption_opts, true); /* Once bypass_auto_encryption is set, community users can enable auto * encryption on the client. This will, in fact, only perform automatic * decryption. */ ret = mongoc_client_enable_auto_encryption (client, auto_encryption_opts, &error); if (!ret) { goto fail; } /* Now that automatic decryption is on, we can test it by inserting a * document with an explicitly encrypted value into the collection. When we * look up the document later, it should be automatically decrypted for us. */ coll = mongoc_client_get_collection (client, ENCRYPTED_DB, ENCRYPTED_COLL); /* Clear old data */ mongoc_collection_drop (coll, NULL); /* Set up the key vault for this example. */ keyvault_coll = mongoc_client_get_collection (client, KEYVAULT_DB, KEYVAULT_COLL); mongoc_collection_drop (keyvault_coll, NULL); /* Create a unique index to ensure that two data keys cannot share the same * keyAltName. This is recommended practice for the key vault. 
*/ index_keys = BCON_NEW (\(dqkeyAltNames\(dq, BCON_INT32 (1)); index_opts = BCON_NEW (\(dqunique\(dq, BCON_BOOL (true), \(dqpartialFilterExpression\(dq, \(dq{\(dq, \(dqkeyAltNames\(dq, \(dq{\(dq, \(dq$exists\(dq, BCON_BOOL (true), \(dq}\(dq, \(dq}\(dq); index_model = mongoc_index_model_new (index_keys, index_opts); ret = mongoc_collection_create_indexes_with_opts ( keyvault_coll, &index_model, 1, NULL /* opts */, NULL /* reply */, &error); if (!ret) { goto fail; } client_encryption_opts = mongoc_client_encryption_opts_new (); mongoc_client_encryption_opts_set_kms_providers (client_encryption_opts, kms_providers); mongoc_client_encryption_opts_set_keyvault_namespace (client_encryption_opts, KEYVAULT_DB, KEYVAULT_COLL); /* The key vault client is used for reading to/from the key vault. This can * be the same mongoc_client_t used by the application. */ mongoc_client_encryption_opts_set_keyvault_client (client_encryption_opts, client); client_encryption = mongoc_client_encryption_new (client_encryption_opts, &error); if (!client_encryption) { goto fail; } /* Create a new data key for the encryptedField. * https://dochub.mongodb.org/core/client\-side\-field\-level\-encryption\-automatic\-encryption\-rules */ datakey_opts = mongoc_client_encryption_datakey_opts_new (); mongoc_client_encryption_datakey_opts_set_keyaltnames (datakey_opts, keyaltnames, 1); ret = mongoc_client_encryption_create_datakey (client_encryption, \(dqlocal\(dq, datakey_opts, &datakey_id, &error); if (!ret) { goto fail; } /* Explicitly encrypt a field. 
*/ encrypt_opts = mongoc_client_encryption_encrypt_opts_new (); mongoc_client_encryption_encrypt_opts_set_algorithm (encrypt_opts, MONGOC_AEAD_AES_256_CBC_HMAC_SHA_512_DETERMINISTIC); mongoc_client_encryption_encrypt_opts_set_keyaltname (encrypt_opts, \(dqmongoc_encryption_example_4\(dq); to_encrypt.value_type = BSON_TYPE_UTF8; to_encrypt.value.v_utf8.str = \(dq123456789\(dq; const size_t len = strlen (to_encrypt.value.v_utf8.str); BSON_ASSERT (bson_in_range_unsigned (uint32_t, len)); to_encrypt.value.v_utf8.len = (uint32_t) len; ret = mongoc_client_encryption_encrypt (client_encryption, &to_encrypt, encrypt_opts, &encrypted_field, &error); if (!ret) { goto fail; } to_insert = bson_new (); BSON_APPEND_VALUE (to_insert, \(dqencryptedField\(dq, &encrypted_field); ret = mongoc_collection_insert_one (coll, to_insert, NULL /* opts */, NULL /* reply */, &error); if (!ret) { goto fail; } /* When we retrieve the document, any encrypted fields will get automatically * decrypted by the driver. */ printf (\(dqdecrypted document: \(dq); if (!print_one_document (coll, &error)) { goto fail; } printf (\(dq\en\(dq); unencrypted_client = mongoc_client_new (\(dqmongodb://localhost/?appname=client\-side\-encryption\(dq); unencrypted_coll = mongoc_client_get_collection (unencrypted_client, ENCRYPTED_DB, ENCRYPTED_COLL); printf (\(dqencrypted document: \(dq); if (!print_one_document (unencrypted_coll, &error)) { goto fail; } printf (\(dq\en\(dq); exit_status = EXIT_SUCCESS; fail: if (error.code) { fprintf (stderr, \(dqerror: %s\en\(dq, error.message); } bson_free (local_masterkey); bson_destroy (kms_providers); mongoc_collection_destroy (keyvault_coll); mongoc_index_model_destroy (index_model); bson_destroy (index_opts); bson_destroy (index_keys); mongoc_collection_destroy (coll); mongoc_client_destroy (client); bson_destroy (to_insert); bson_destroy (schema); bson_destroy (create_cmd); bson_destroy (create_cmd_opts); mongoc_write_concern_destroy (wc); mongoc_client_encryption_destroy 
(client_encryption); mongoc_client_encryption_datakey_opts_destroy (datakey_opts); mongoc_client_encryption_opts_destroy (client_encryption_opts); bson_value_destroy (&encrypted_field); mongoc_client_encryption_encrypt_opts_destroy (encrypt_opts); bson_value_destroy (&decrypted); bson_value_destroy (&datakey_id); mongoc_collection_destroy (unencrypted_coll); mongoc_client_destroy (unencrypted_client); mongoc_auto_encryption_opts_destroy (auto_encryption_opts); mongoc_cleanup (); return exit_status; } .EE .UNINDENT .UNINDENT .SS Queryable Encryption .sp Using Queryable Encryption requires MongoDB Server 7.0 or higher. .sp See the MongoDB Manual for \fI\%Queryable Encryption\fP for more information about the feature. .sp API related to the \(dqrangePreview\(dq algorithm is still experimental and subject to breaking changes! .SS Queryable Encryption in older MongoDB Server versions .sp MongoDB Server 6.0 introduced Queryable Encryption as a Public Technical Preview. MongoDB Server 7.0 includes backward\-incompatible changes to the Queryable Encryption protocol. .sp These backward\-incompatible changes are applied to the client protocol in libmongocrypt 1.8.0. libmongoc 1.24.0 requires libmongocrypt 1.8.0 or newer, and therefore no longer supports Queryable Encryption with MongoDB Server versions older than 7.0: using Queryable Encryption with libmongoc 1.24.0 and higher requires MongoDB Server 7.0 or newer. .sp Using Queryable Encryption with libmongocrypt older than 1.8.0 on MongoDB Server 7.0 or newer, or with libmongocrypt 1.8.0 or newer on MongoDB Server older than 6.0, will result in a server error because the protocol versions are incompatible. .sp \fBSEE ALSO:\fP .INDENT 0.0 .INDENT 3.5 .nf The MongoDB Manual for \fI\%Queryable Encryption\fP .fi .sp .UNINDENT .UNINDENT .SS Installation .sp Using In\-Use Encryption in the C driver requires the libmongocrypt dependency. See the MongoDB Manual for \fI\%libmongocrypt installation instructions\fP\&.
.sp Once libmongocrypt is installed, configure the C driver with \fB\-DENABLE_CLIENT_SIDE_ENCRYPTION=ON\fP to ensure that In\-Use Encryption support is enabled. .INDENT 0.0 .INDENT 3.5 .sp .EX $ cd mongo\-c\-driver $ mkdir cmake\-build && cd cmake\-build $ cmake \-DENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF \-DENABLE_CLIENT_SIDE_ENCRYPTION=ON .. $ cmake \-\-build . \-\-target install .EE .UNINDENT .UNINDENT .SS API .sp \fI\%mongoc_client_encryption_t\fP is used for explicit encryption and key management. \fI\%mongoc_client_enable_auto_encryption()\fP and \fI\%mongoc_client_pool_enable_auto_encryption()\fP are used to enable automatic encryption. .sp The Queryable Encryption and CSFLE features share much of the same API, with some exceptions: .INDENT 0.0 .IP \(bu 2 The supported algorithms documented in \fI\%mongoc_client_encryption_encrypt_opts_set_algorithm()\fP do not apply to both features. .IP \(bu 2 \fI\%mongoc_auto_encryption_opts_set_encrypted_fields_map()\fP only applies to Queryable Encryption. .IP \(bu 2 \fI\%mongoc_auto_encryption_opts_set_schema_map()\fP only applies to CSFLE. .UNINDENT .SS Query Analysis .sp To support the automatic encryption feature, one of the following dependencies is required: .INDENT 0.0 .IP \(bu 2 The \fBmongocryptd\fP executable. See the MongoDB Manual documentation: \fI\%Install and Configure mongocryptd\fP .IP \(bu 2 The \fBcrypt_shared\fP library. See the MongoDB Manual documentation: \fI\%Automatic Encryption Shared Library\fP .UNINDENT .sp A \fI\%mongoc_client_t\fP or \fI\%mongoc_client_pool_t\fP configured with auto encryption will automatically try to load the \fBcrypt_shared\fP library. If loading the \fBcrypt_shared\fP library fails, the \fI\%mongoc_client_t\fP or \fI\%mongoc_client_pool_t\fP will try to spawn the \fBmongocryptd\fP process from the application\(aqs \fBPATH\fP\&. To configure use of \fBcrypt_shared\fP and \fBmongocryptd\fP see \fI\%mongoc_auto_encryption_opts_set_extra()\fP\&.
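.sp
As a minimal sketch, to load \fBcrypt_shared\fP from a known location and fail rather than fall back to spawning \fBmongocryptd\fP, pass the \fBcryptSharedLibPath\fP and \fBcryptSharedLibRequired\fP extra options (the library path shown is a placeholder; substitute the real path on your system):
.INDENT 0.0
.INDENT 3.5
.sp
.EX
/* Build the extra options document and attach it to the auto
 * encryption options before enabling auto encryption. */
bson_t *extra = BCON_NEW (\(dqcryptSharedLibPath\(dq,
                          \(dq/path/to/mongo_crypt_v1.so\(dq,
                          \(dqcryptSharedLibRequired\(dq,
                          BCON_BOOL (true));
mongoc_auto_encryption_opts_set_extra (auto_encryption_opts, extra);
bson_destroy (extra);
.EE
.UNINDENT
.UNINDENT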
.SH AUTHOR MongoDB, Inc .SH COPYRIGHT 2017-present, MongoDB, Inc .\" Generated by docutils manpage writer. .