datalad-crawl(1) | General Commands Manual | datalad-crawl(1)
SYNOPSIS
datalad-crawl [-h] [--is-pipeline] [-t] [-r] [-C CHDIR] [file]
DESCRIPTION
Crawl an online resource to create or update a dataset.
Examples:
$ datalad crawl # within a dataset having .datalad/crawl/crawl.cfg
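The example above relies on a crawling specification stored in the dataset at .datalad/crawl/crawl.cfg. As a rough illustration only, such a file is an INI-style configuration; the section name, template name, and keys below are assumptions for this sketch, not the authoritative schema — consult the datalad-crawler documentation for the keys your template actually accepts:

```ini
; Hypothetical .datalad/crawl/crawl.cfg -- section and key names are
; illustrative assumptions, not a verified schema.
[crawl:pipeline]
template = simple_with_archives
url = http://example.com/data/
```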
OPTIONS
file
    Configuration (or pipeline, if --is-pipeline is given) file defining the crawling, or a directory of a dataset on which to perform crawling using its standard crawling specification. Constraints: value must be a string. [Default: None]
-h, --help, --help-np
    Show this help message. --help-np forcefully disables the use of a pager for displaying the help message.
--is-pipeline
    Flag indicating that the provided file is a Python script which defines pipeline(). [Default: False]
-t, --is-template
    Flag indicating that the provided value is the name of the template to use. [Default: False]
-r, --recursive
    Flag to crawl subdatasets as well (for now, serially). [Default: False]
-C CHDIR, --chdir CHDIR
    Directory to chdir to before crawling. Constraints: value must be a string. [Default: None]
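When --is-pipeline is given, the file passed to datalad crawl must define a pipeline() function. The sketch below is a minimal, hypothetical pipeline file; it assumes (and this is an assumption, not documented here) that the crawler calls pipeline() and treats its return value as a list of nodes, each a callable that takes a data dict and yields output data dicts:

```python
# Hypothetical pipeline file for: datalad crawl --is-pipeline my_pipeline.py
# Assumption: the crawler runs pipeline() and feeds data dicts through the
# returned list of nodes; each node yields zero or more output dicts.

def print_url(data):
    """Example node: report the URL currently being processed."""
    print("crawling %s" % data.get("url", "<no url>"))
    yield data  # pass the data dict through unchanged


def pipeline():
    """Return the list of nodes for the crawler to run."""
    return [print_url]
```

The node here only logs and forwards its input; a real pipeline would chain nodes that fetch pages, extract links, and annex files.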
AUTHORS
datalad is developed by The DataLad Team and Contributors <team@datalad.org>.
2018-03-16 | datalad-crawl 0.9.3