URLEXTRACTOR(1) | Usage | URLEXTRACTOR(1)
NAME
urlextractor - Information gathering and website reconnaissance
SYNOPSIS
urlextractor [URL]
DESCRIPTION
urlextractor gathers information about the URL given on the command line
and prints it to standard output; an example invocation follows the list
below. It collects the following information:
- IP address and hosting information, such as city and country (using FreegeoIP)
- DNS servers (using dig)
- ASN, network range, and ISP name (using RISwhois)
- Load balancer test
- Whois lookup for the abuse contact e-mail (using SpamCop)
- PAC (Proxy Auto Configuration) file
- Compares page hashes to detect changes in the code
- robots.txt (recursively searched for hidden entries)
- Source code (searched for passwords and user names)
- External links (frames from other websites)
- Directory FUZZ (like DirBuster and Wfuzz, using the DirBuster directory list)
- URLvoid API - checks Google page rank, Alexa rank and possible blacklists
- Provides useful links on other websites to correlate with the IP/ASN
- Option to open ALL results in the browser at the end
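A typical invocation passes a single URL, as in the following sketch
(example.com is a placeholder host; substitute a target you are
authorized to scan):

  $ urlextractor http://www.example.com

The results for each category listed above are then printed to standard
output.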
FILES
At runtime, urlextractor checks whether the directory
$HOME/.urlextractor
exists; if it does not, the directory is created. This behaviour was
added on Debian systems to provide a better user experience.
- $HOME/.urlextractor/config
  The configuration file used to customize the default program settings.
  After the directory $HOME/.urlextractor is created, a default
  configuration file is copied from the package examples directory,
  /usr/share/doc/urlextractor/examples/config, so that urlextractor works
  out of the box. For more information about the configuration, see the
  example file; a shell sketch for restoring it follows this list.
- $HOME/.urlextractor/log.csv
  Stores the scanned sites for future reference.
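The following shell sketch shows one way to restore the default
configuration and to inspect the scan log. The file paths are taken from
this page; the assumption that log.csv uses comma-separated fields is
inferred only from its name:

  # Restore the default configuration shipped with the Debian package.
  mkdir -p $HOME/.urlextractor
  cp /usr/share/doc/urlextractor/examples/config $HOME/.urlextractor/config

  # Review previously scanned sites; -s, treats commas as field
  # separators (assumed from the .csv extension).
  column -s, -t < $HOME/.urlextractor/log.csv | less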
AUTHOR
Eduardo Schultze <eduardo.schultze@gmail.com> (2016).
NOTES
This manual page has been written by Josue Ortega <josue@debian.org> for the Debian project (and may be used by others).
LICENSE
The MIT License (MIT)
February 27, 2021 | Version 0.2.0