datalad(1) datalad datalad(1)

NAME

datalad - comprehensive data management solution

SYNOPSIS

datalad [-l LEVEL] [--pbs-runner {condor}] [-C PATH] [--version] [--dbg] [--idbg] [-c KEY=VALUE] [-f {default,json,json_pp,tailored,'<template>'}] [--report-status {success,failure,ok,notneeded,impossible,error}] [--report-type {dataset,file}] [--on-failure {ignore,continue,stop}] [--proc-pre <PROCEDURE NAME> [ARGS ...]] [--proc-post <PROCEDURE NAME> [ARGS ...]] [--cmd] [-h] COMMAND ...

DESCRIPTION

DataLad provides a unified data distribution system built on Git and git-annex. The DataLad command-line tools allow one to manipulate (obtain, create, update, publish, etc.) datasets and provide a comprehensive toolbox for the joint management of data and code. Compared to Git/git-annex alone, DataLad primarily extends their functionality to work transparently and simultaneously with multiple inter-related repositories.

OPTIONS

{create, install, get, add, publish, uninstall, drop, remove, update, create-sibling, create-sibling-github, unlock, save, search, metadata, aggregate-metadata, extract-metadata, wtf, test, ls, clean, add-archive-content, download-url, run, rerun, run-procedure, no-annex, addurls, check-dates, add-readme, export-archive, export-to-figshare, annotate-paths, clone, create-test-dataset, diff, siblings, sshrun, subdatasets}

-l LEVEL, --log-level LEVEL
set the logging verbosity level. Choose among critical, error, warning, info, debug. An integer below 10 can also be given to obtain even more detailed debugging output
--pbs-runner {condor}
execute the command by scheduling it via an available PBS system. The configuration file is consulted for settings
-C PATH
run as if datalad was started in <path> instead of the current working directory. When multiple -C options are given, each subsequent non-absolute -C <path> is interpreted relative to the preceding -C <path>. This option affects the interpretation of path names, in that they are made relative to the working directory resulting from the -C options
--version
show the program's version
--dbg
enter the Python debugger when an uncaught exception occurs
--idbg
enter the IPython debugger when an uncaught exception occurs
-c KEY=VALUE
set a configuration variable. Overrides any configuration read from a file, but is potentially overridden itself by configuration variables in the process environment.
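For illustration (the configuration key and dataset path below are examples, and running this requires an installed datalad), a -c override applies to a single invocation only:

```shell
# Raise log verbosity for this one invocation; the -c override wins over
# file-based configuration, but environment variables can still trump it.
datalad -c datalad.log.level=debug get ~/myds/data.dat
```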
-f {default, json, json_pp, tailored,'<template>'}, --output-format {default, json, json_pp, tailored,'<template>'}
select the format for returned command results. 'default' gives one line per result, reporting action, status, path, and an optional message; 'json' renders a JSON object with all properties for each result (one per line); 'json_pp' pretty-prints JSON spanning multiple lines; 'tailored' enables a command-specific rendering style that is typically tailored to human consumption (no result output otherwise); '<template>' reports any value(s) of any result properties in any format indicated by the template (e.g. '{path}'; compare with the JSON output for all key-value choices). The template syntax follows the Python "format() language". It is possible to report individual dictionary values, e.g. '{metadata[name]}'. If a 2nd-level key contains a colon, e.g. 'music:Genre', the ':' must be substituted by '#' in the template, like so: '{metadata[music#Genre]}'.
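As a sketch (paths and the metadata key are hypothetical, and the commands assume an installed datalad), the template renderer can extract individual result properties:

```shell
# Report only the 'path' property of each result, one per line:
datalad -f '{path}' get ~/myds/data.dat

# Access a nested dictionary value; a colon in the key becomes '#':
datalad -f '{metadata[music#Genre]}' metadata ~/myds/song.mp3
```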
--report-status {success, failure, ok, notneeded, impossible, error}
constrain command result report to records matching the given status. 'success' is a synonym for 'ok' OR 'notneeded', 'failure' stands for 'impossible' OR 'error'.
--report-type {dataset, file}
constrain command result report to records matching the given type. Can be given more than once to match multiple types.
--on-failure {ignore, continue, stop}
when an operation fails: 'ignore' continues with the remaining operations; the error is logged but does not lead to a non-zero exit code of the command. 'continue' works like 'ignore', but an error causes a non-zero exit code. 'stop' halts on the first failure and yields a non-zero exit code. A failure is any result with status 'impossible' or 'error'.
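A sketch of how the resulting exit codes differ in a script (file names are hypothetical; requires an installed datalad):

```shell
# 'ignore': failures are only logged; the command still exits 0.
datalad --on-failure ignore get a.dat missing.dat

# 'continue': all operations are still attempted, but any failure
# yields a non-zero exit code that a script can act on.
datalad --on-failure continue get a.dat missing.dat || echo "some results failed"

# 'stop': processing halts at the first 'impossible'/'error' result.
datalad --on-failure stop get a.dat missing.dat
```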
--proc-pre <PROCEDURE NAME> [ARGS ...]
Dataset procedure to run before the main command (see run-procedure command for details). This option can be given more than once to run multiple procedures in the order in which they were given. It is important to specify the target dataset via the --dataset argument of the main command.
--proc-post <PROCEDURE NAME> [ARGS ...]
Like --proc-pre, but procedures are executed after the main command has finished.
--cmd
syntactical helper that can be used to end the list of global command-line options before the subcommand name. Options like --proc-pre can take an arbitrary number of arguments and may need to be followed by a single --cmd so that the subcommand can be identified.
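For example (the procedure name, its arguments, and the dataset path are hypothetical), --cmd separates the variable-length --proc-pre argument list from the subcommand:

```shell
# Without --cmd, 'create' would be consumed as another argument to --proc-pre:
datalad --proc-pre cfg_myprocedure arg1 arg2 --cmd create ~/new_ds
```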
-h, --help, --help-np
show this help message. --help-np unconditionally disables the use of a pager for displaying the help message

"Be happy!"

AUTHORS

datalad is developed by The DataLad Team and Contributors <team@datalad.org>.
2019-02-08