PARAMSPIDER(1) | paramspider | PARAMSPIDER(1)
NAME
paramspider - Mining parameters from the dark corners of Web Archives
SYNOPSIS
paramspider [-h] [-d DOMAIN] [-l LIST] [-s] [--proxy PROXY] [-p PLACEHOLDER]
DESCRIPTION
paramspider fetches URLs related to a single domain or a list of domains from the Wayback Machine web archives. It filters out "boring" URLs so you can focus on the ones that matter most.
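For instance, URLs can be streamed to standard output with -s and post-processed with ordinary shell tools. A minimal sketch, assuming one URL per line, that keeps only URLs carrying query parameters:

$ paramspider -d example.com -s | grep '=' | sort -u > example.com-urls.txt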
OPTIONS
- -h, --help:
- Display command usage and options.
- -d DOMAIN, --domain DOMAIN:
- Domain name to fetch related URLs for.
- -l LIST, --list LIST:
- File containing a list of domain names.
- -s, --stream:
- Stream URLs on the terminal.
- --proxy PROXY:
- Set the proxy address for web requests.
- -p PLACEHOLDER, --placeholder PLACEHOLDER:
- Placeholder for parameter values (default: "FUZZ").
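The options above can be combined in a single invocation. An illustrative sketch, assuming domains.txt contains one domain per line and a local proxy listens on 127.0.0.1:8080:

$ paramspider -l domains.txt --proxy '127.0.0.1:8080' -p 'PAYLOAD'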
EXAMPLES
Common usage:
Discover URLs for a single domain:
$ paramspider -d example.com
Discover URLs for multiple domains from a file:
$ paramspider -l domains.txt
Stream URLs on the terminal for a domain:
$ paramspider -d example.com -s
Set a proxy for web requests:
$ paramspider -d example.com --proxy '127.0.0.1:7890'
Add a placeholder for URL parameter values (default: "FUZZ"):
$ paramspider -d example.com -p '"><h1>reflection</h1>'
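Streamed output can also feed ordinary shell pipelines. As a sketch, assuming -s prints one URL per line with query strings intact, the following lists how often each query parameter appears (parameter names keep their leading ? or &):

$ paramspider -d example.com -s | grep -oE '[?&][^=&]+=' | sort | uniq -c | sort -rn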
AUTHOR
Aquila Macedo <aquilamacedo@riseup.net>
COPYRIGHT
Expat
2024-03-27 | 1.0.1