
SKIPFISH(1) General Commands Manual SKIPFISH(1)

NAME

skipfish - active web application security reconnaissance tool

SYNOPSIS

skipfish [options] -W wordlist -o output-directory start-url [start-url2 ...]

DESCRIPTION

skipfish is an active web application security reconnaissance tool. It prepares an interactive sitemap for the targeted site by carrying out a recursive crawl and dictionary-based probes. The resulting map is then annotated with the output from a number of active (but hopefully non-disruptive) security checks. The final report generated by the tool is meant to serve as a foundation for professional web application security assessments.
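
For example, a minimal scan of a single target could be started as follows (the URL, wordlist, and output directory below are placeholders):

    skipfish -W new_words.wl -o output_dir http://example.com/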

OPTIONS

Authentication and access options:

-A user:pass
use specified HTTP authentication credentials
-F host=IP
pretend that 'host' resolves to 'IP'
-C name=val
append a custom cookie to all requests
-H name=val
append a custom HTTP header to all requests
-b (i|f|p)
use headers consistent with MSIE / Firefox / iPhone
-N
do not accept any new cookies
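
For example, a hypothetical invocation that authenticates with HTTP credentials, sends a fixed session cookie, and mimics Firefox (all credentials, cookie values, and paths are placeholders):

    skipfish -A admin:secret -C session=0123456789 -b f -W new_words.wl -o output_dir http://example.com/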

Crawl scope options:

-d max_depth
maximum crawl tree depth (default: 16)
-c max_child
maximum children to index per node (default: 512)
-x max_desc
maximum descendants to index per crawl tree branch (default: 8192)
-r r_limit
max total number of requests to send (default: 100000000)
-p crawl%
node and link crawl probability (default: 100%)
-q hex
repeat a scan with a particular random seed
-I string
only follow URLs matching 'string'
-X string
exclude URLs matching 'string'
-K string
do not fuzz query parameters or form fields named 'string'
-Z
do not descend into directories that return HTTP 500 code
-D domain
also crawl cross-site links to a specified domain
-B domain
trust, but do not crawl, content included from a third-party domain
-O
do not submit any forms
-P
do not parse HTML and other documents to find new links
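
For example, to keep a crawl shallow and confined to a single application path while skipping the logout link and leaving a CSRF token untouched (the match strings and URL are placeholders):

    skipfish -d 5 -I /app/ -X /app/logout -K csrf_token -W new_words.wl -o output_dir http://example.com/app/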

Reporting options:

-o dir
write output to specified directory (required)
-M
log warnings about mixed content or non-SSL password forms
-E
log all HTTP/1.0 / HTTP/1.1 caching intent mismatches
-U
log all external URLs and e-mails seen
-Q
completely suppress duplicate nodes in reports
-u
be quiet, do not display realtime scan statistics
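
For example, to log mixed-content, caching, and external-reference findings while suppressing the realtime statistics screen (paths are placeholders):

    skipfish -M -E -U -u -W new_words.wl -o output_dir http://example.com/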

Dictionary management options:

-S wordlist
load a specified read-only wordlist for brute-force tests
-W wordlist
load a specified read-write wordlist for any site-specific learned words. This option is required; the file may initially be empty, and newly learned words will be stored in it. Alternatively, use -W- to discard new words.
-L
do not auto-learn new keywords for the site
-Y
do not fuzz extensions during most directory brute-force steps
-R age
purge words that resulted in a hit more than 'age' scans ago
-T name=val
add new form auto-fill rule
-G max_guess
maximum number of keyword guesses to keep in the jar (default: 256)
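
For example, to brute-force from a read-only base dictionary, collect newly learned keywords in a separate file, and auto-fill a form field (the dictionary path and fill rule are placeholders; dictionary locations vary by installation):

    skipfish -S /usr/share/skipfish/dictionaries/medium.wl -W learned.wl -T login=testuser -o output_dir http://example.com/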

Performance settings:

-l max_req
max requests per second (0 = unlimited)
-g max_conn
maximum simultaneous TCP connections, global (default: 50)
-m host_conn
maximum simultaneous connections, per target IP (default: 10)
-f max_fail
maximum number of consecutive HTTP errors to accept (default: 100)
-t req_tmout
total request response timeout (default: 20 s)
-w rw_tmout
individual network I/O timeout (default: 10 s)
-i idle_tmout
timeout on idle HTTP connections (default: 10 s)
-s s_limit
response size limit (default: 200000 B)
-e
do not keep binary responses for reporting
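
For example, to throttle the scan for a fragile target (the limits shown are illustrative, not recommendations):

    skipfish -l 50 -g 10 -m 2 -t 30 -W new_words.wl -o output_dir http://example.com/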

Other settings:

-k duration
stop scanning after the given duration (format: h:m:s)
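
For example, to abort an unattended scan after eight hours (paths are placeholders):

    skipfish -k 8:00:00 -W new_words.wl -o output_dir http://example.com/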

AUTHOR

skipfish was written by Michal Zalewski <lcamtuf@google.com>, with contributions from Niels Heinen <heinenn@google.com>, Sebastian Roschke <s.roschke@googlemail.com>, and other parties.
This manual page was written by Thorsten Schifferdecker <tsd@debian.systs.org>, for the Debian project (and may be used by others).
March 23, 2010