general options:
      --help        print usage
      --version     print the tool version and exit
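Either flag can be run on its own as a quick sanity check; the tool name mongodump is assumed here from the dump-related options later in this list:

    # print usage and the installed tool version
    mongodump --help
    mongodump --version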
verbosity options:
  -v, --verbose=<level>    more detailed log output (include multiple times for more verbosity, e.g. -vvvvv, or specify a numeric value, e.g. --verbose=N)
      --quiet              hide all log output
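For example, repeating -v raises the log level while --quiet suppresses output entirely; the database name is a placeholder:

    # dump with debug-level logging
    mongodump -vvv --db mydb
    # dump with no log output
    mongodump --quiet --db mydb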
connection options:
  -h, --host=<hostname>    mongodb host to connect to (setname/host1,host2 for replica sets)
      --port=<port>        server port (can also use --host hostname:port)
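A sketch of connecting to a standalone server and to a replica set; the hostnames, port, and set name are placeholders:

    # standalone server on a non-default port
    mongodump --host db1.example.net --port 27018
    # replica set, using the setname/host1,host2 form
    mongodump --host rs0/db1.example.net:27017,db2.example.net:27017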
ssl options:
      --ssl                              connect to a mongod or mongos that has ssl enabled
      --sslCAFile=<filename>             the .pem file containing the root certificate chain from the certificate authority
      --sslPEMKeyFile=<filename>         the .pem file containing the certificate and key
      --sslPEMKeyPassword=<password>     the password to decrypt the sslPEMKeyFile, if necessary
      --sslCRLFile=<filename>            the .pem file containing the certificate revocation list
      --sslAllowInvalidCertificates      bypass validation of server certificates
      --sslAllowInvalidHostnames         bypass validation of the server hostname
      --sslFIPSMode                      use FIPS mode of the installed OpenSSL library
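A hedged example of an SSL connection validated against a private certificate authority; the hostname and certificate paths are placeholders, and the client key is assumed to be unencrypted (otherwise add --sslPEMKeyPassword):

    # dump over SSL, presenting a client certificate
    mongodump --ssl --host db1.example.net \
        --sslCAFile /etc/ssl/mongo-ca.pem \
        --sslPEMKeyFile /etc/ssl/mongo-client.pem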
authentication options:
  -u, --username=<username>                     username for authentication
  -p, --password=<password>                     password for authentication
      --authenticationDatabase=<database-name>  database that holds the user's credentials
      --authenticationMechanism=<mechanism>     authentication mechanism to use
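A typical authenticated invocation might look like the following; the username, password, and database names are placeholders:

    # authenticate as a user defined in the admin database
    mongodump --username backupUser --password 'secretPassword' \
        --authenticationDatabase admin --db reporting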
namespace options:
  -d, --db=<database-name>            database to use
  -c, --collection=<collection-name>  collection to use
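For example, limiting the dump to one database or to a single collection within it (the names are placeholders):

    # dump one database
    mongodump -d blog
    # dump one collection from that database
    mongodump -d blog -c posts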
query options:
  -q, --query=                          query filter, as a JSON string, e.g., '{x:{$gt:1}}'
      --queryFile=                      path to a file containing a query filter (JSON)
      --readPreference=<string>|<json>  specify either a preference name or a preference JSON object
      --forceTableScan                  force a table scan
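A sketch of a filtered dump; the collection and filter are illustrative, and depending on the tool version the filter may need to be strict extended JSON with quoted keys:

    # dump only documents where x is greater than 1
    mongodump -d blog -c posts -q '{x: {$gt: 1}}'
    # the same filter read from a file
    mongodump -d blog -c posts --queryFile filter.json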
output options:
  -o, --out=<directory-path>      output directory, or '-' for stdout (defaults to 'dump')
      --gzip                      compress archive or collection output with Gzip
      --repair                    try to recover documents from damaged data files (not supported by all storage engines)
      --oplog                     use oplog for taking a point-in-time snapshot
      --archive=<file-path>       dump as an archive to the specified path; if the flag is specified without a value, the archive is written to stdout
      --dumpDbUsersAndRoles       dump user and role definitions for the specified database
      --excludeCollection=<collection-name>  collection to exclude from the dump (may be specified multiple times to exclude additional collections)
      --excludeCollectionsWithPrefix=<collection-prefix>  exclude all collections from the dump that have the given prefix (may be specified multiple times to exclude additional prefixes)
  -j, --numParallelCollections=   number of collections to dump in parallel (default: 4)
      --viewsAsCollections        dump views as normal collections with their produced data, omitting standard collections
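Putting a few of the output options together; the database name, archive path, and collection prefix are placeholders:

    # dump one database to a gzipped archive file
    mongodump -d store --gzip --archive=store.archive.gz
    # dump the database to the default 'dump' directory, skipping temporary collections
    mongodump -d store --excludeCollectionsWithPrefix=tmp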
namespace options:
  -d, --db=<database-name>                   database to use when restoring from a BSON file
  -c, --collection=<collection-name>         collection to use when restoring from a BSON file
      --excludeCollection=<collection-name>  DEPRECATED; collection to skip over during restore (may be specified multiple times to exclude additional collections)
      --excludeCollectionsWithPrefix=<collection-prefix>  DEPRECATED; collections to skip over during restore that have the given prefix (may be specified multiple times to exclude additional prefixes)
      --nsExclude=<namespace-pattern>        exclude matching namespaces
      --nsInclude=<namespace-pattern>        include matching namespaces
      --nsFrom=<namespace-pattern>           rename matching namespaces, must have matching nsTo
      --nsTo=<namespace-pattern>             rename matched namespaces, must have matching nsFrom
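A sketch of namespace filtering and renaming during restore; the tool name mongorestore is assumed from the restore-related options, and the database names and dump directory are placeholders:

    # restore only the 'store' database, renaming it to 'store_test'
    mongorestore --nsInclude 'store.*' \
        --nsFrom 'store.*' --nsTo 'store_test.*' dump/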
input options:
      --objcheck                        validate all objects before inserting
      --oplogReplay                     replay oplog for point-in-time restore
      --oplogLimit=<seconds>[:ordinal]  only include oplog entries before the provided Timestamp
      --oplogFile=<filename>            oplog file to use for replay of oplog
      --archive=<filename>              restore dump from the specified archive file; if the flag is specified without a value, the archive is read from stdin
      --restoreDbUsersAndRoles          restore user and role definitions for the given database
      --dir=<directory-name>            input directory, use '-' for stdin
      --gzip                            decompress gzipped input
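For example, reading a gzipped archive produced above, or streaming a dump from one server into another over a pipe (hostnames and the archive path are placeholders):

    # restore from an archive file created with --gzip --archive
    mongorestore --gzip --archive=store.archive.gz
    # stream a dump between servers without touching disk
    mongodump --host source.example.net --archive | mongorestore --host target.example.net --archive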
restore options:
      --drop                               drop each collection before import
      --dryRun                             view summary without importing anything; recommended with verbosity
      --writeConcern=<write-concern>       write concern options, e.g. --writeConcern majority, --writeConcern '{w: 3, wtimeout: 500, fsync: true, j: true}' (default: majority)
      --noIndexRestore                     don't restore indexes
      --noOptionsRestore                   don't restore collection options
      --keepIndexVersion                   don't update index version
      --maintainInsertionOrder             preserve order of documents during restoration
  -j, --numParallelCollections=            number of collections to restore in parallel (default: 4)
      --numInsertionWorkersPerCollection=  number of insert operations to run concurrently per collection (default: 1)
      --stopOnError                        stop restoring if an error is encountered on insert (off by default)
      --bypassDocumentValidation           bypass document validation
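A combined restore sketch; the dump directory and the parallelism value are illustrative:

    # drop existing collections, stop at the first insert error,
    # and restore four collections at a time
    mongorestore --drop --stopOnError --numParallelCollections=4 dump/
    # preview what would be restored without writing anything
    mongorestore --dryRun -vv dump/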