darc - Darkweb Crawler Project¶
darc is designed as a swiss army knife for darkweb crawling.
It integrates requests to collect HTTP request and response
information, such as cookies, header fields, etc. It also bundles
selenium to provide a fully rendered web page and a screenshot
of the rendered view.
There are two types of workers:

- crawler – runs darc.crawl.crawler() to provide a fresh view of a link and test its connectability
- loader – runs darc.crawl.loader() to provide an in-depth view of a link and provide more visual information
The general process for workers of the crawler type can be described as follows:

- process_crawler(): obtain URLs from the requests link database (c.f. load_requests()), and feed such URLs to crawler().
- crawler(): parse the URL using parse_link(), and check if the URL needs to be crawled (c.f. PROXY_WHITE_LIST, PROXY_BLACK_LIST, LINK_WHITE_LIST and LINK_BLACK_LIST); if so, crawl the URL with requests (a minimal sketch of this flow follows the list).

  If the URL is from a brand new host, darc will first try to fetch and save the robots.txt and sitemaps of the host (c.f. save_robots() and save_sitemap()), then extract the links from the sitemaps (c.f. read_sitemap()) and save them into the link database for future crawling (c.f. save_requests()). Also, if the submission API is provided, submit_new_host() will be called to submit the documents just fetched.

  If robots.txt is present and FORCE is False, darc will check if it is allowed to crawl the URL.

  Note
  The root path (e.g. / in https://www.example.com/) will always be crawled, ignoring robots.txt.

  At this point, darc will call the customised hook function from darc.sites to crawl and get the final response object. darc will save the session cookies and header information using save_headers().

  Note
  If requests.exceptions.InvalidSchema is raised, the link will be saved by save_invalid() and further processing is dropped.

  If the content type of the response document is not ignored (c.f. MIME_WHITE_LIST and MIME_BLACK_LIST), submit_requests() will be called to submit the document just fetched.

  If the response document is HTML (text/html or application/xhtml+xml), extract_links() will then be called to extract all possible links from the HTML document and save such links into the database (c.f. save_requests()).

  If the response status code is between 400 and 600, the URL will be saved back to the link database (c.f. save_requests()). If NOT, the URL will be saved into the selenium link database to proceed to the next steps (c.f. save_selenium()).
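The following minimal sketch illustrates the per-URL handling described above: honouring robots.txt and collecting headers and cookies with requests. It is illustrative only, not darc's actual implementation, and the URL is a placeholder.

# Illustrative sketch only, not darc's actual implementation.
from urllib.parse import urljoin
from urllib.robotparser import RobotFileParser

import requests

URL = 'https://www.example.com/some/page'  # placeholder

# Check robots.txt first (darc skips this check when FORCE is True).
robots = RobotFileParser(urljoin(URL, '/robots.txt'))
robots.read()

if robots.can_fetch('*', URL):
    session = requests.Session()
    response = session.get(URL)
    print(response.status_code)            # 4xx/5xx would be re-queued
    print(dict(response.headers))          # header fields to be saved
    print(session.cookies.get_dict())      # session cookies to be saved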
The general process for workers of the loader type can be described as follows:

- process_loader(): in the meanwhile, darc will obtain URLs from the selenium link database (c.f. load_selenium()), and feed such URLs to loader().
- loader(): parse the URL using parse_link() and start loading the URL using selenium with Google Chrome (a minimal sketch of this flow follows the list).

  At this point, darc will call the customised hook function from darc.sites to load and return the original WebDriver object.

  If successful, the rendered source HTML document will be saved, and a full-page screenshot will be taken and saved. If the submission API is provided, submit_selenium() will be called to submit the document just loaded.

  Later, extract_links() will be called to extract all possible links from the HTML document and save such links into the requests database (c.f. save_requests()).
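A comparable sketch for the loader side, using selenium with headless Google Chrome to obtain the rendered source and a screenshot. Again, this only illustrates the technique (darc takes a full-page screenshot rather than the plain viewport capture shown here); the URL and file name are placeholders.

# Illustrative sketch only, not darc's actual implementation.
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument('--headless')

driver = webdriver.Chrome(options=options)
try:
    # get() returns once the DOMContentLoaded event fires; extra
    # scripts may still be running afterwards (c.f. SE_WAIT below).
    driver.get('https://www.example.com/')
    html = driver.page_source              # rendered source HTML
    driver.save_screenshot('example.png')  # viewport screenshot
finally:
    driver.quit()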
Installation¶
Note
darc supports all Python versions above and including 3.6.
Currently, it only supports and is tested on Linux (Ubuntu 18.04)
and macOS (Catalina).
When installing on Python versions below 3.8, darc will use walrus
to compile itself for backport compatibility.
pip install darc
Please make sure you have Google Chrome and the corresponding version of ChromeDriver installed on your system.
Important
Starting from version 0.3.0, we introduced Redis for the task queue database backend.
Since version 0.6.0, we introduced relational database storage (e.g. MySQL, SQLite, PostgreSQL, etc.) for the task queue database backend, besides the Redis database, since Redis can cost too much memory when the task queue becomes very large.
Please make sure you have one of the backend databases installed, configured,
and running when using the darc project.
However, the darc
project is shipped with Docker and Compose support.
Please see Docker Integration for more information.
Or, you may refer to and/or install from the Docker Hub repository:
docker pull jsnbzh/darc[:TAGNAME]
Usage¶
The darc
project provides a simple CLI:
usage: darc [-h] [-f FILE] ...
the darkweb crawling swiss army knife
positional arguments:
link links to crawl
optional arguments:
-h, --help show this help message and exit
-f FILE, --file FILE read links from file
It can also be called through module entrypoint:
python -m darc ...
Note
The link files can contain comment lines, which should start with #.
Empty lines and comment lines will be ignored when loading.
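For example, a link file (hypothetical name links.txt) may look like this, where the .onion address is a placeholder:

# comment lines start with '#' and are ignored
https://www.example.com/

http://example.onion/

It can then be fed to darc with darc -f links.txt.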
Configuration¶
Though it provides a simple CLI, the darc project is mainly configured
through environment variables.
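For instance, the environment could be prepared from Python before invoking the CLI, as sketched below; the variable values and the link file name are arbitrary examples.

# Minimal sketch: configuring darc through environment variables.
# The values and the link file name are arbitrary examples.
import os
import subprocess

os.environ['DARC_VERBOSE'] = '1'   # run in verbose mode
os.environ['DARC_WAIT'] = '30'     # seconds between rounds on empty queues
subprocess.run(['darc', '-f', 'links.txt'], check=True)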
General Configurations¶
- DARC_REBOOT¶
  Whether to exit the program after the first round, i.e. after crawling all links from the requests link database and loading all links from the selenium link database.
  This can be useful especially when the capacity is limited and you wish to save some space before continuing the next round. See Docker integration for more information.
- DARC_VERBOSE¶
  Whether to run the program in verbose mode. If DARC_DEBUG is True, then verbose mode will always be enabled.
- DARC_CHECK¶
  Whether to check the proxy type and hostname before crawling (when calling extract_links(), read_sitemap() and read_hosts()).
  If DARC_CHECK_CONTENT_TYPE is True, then this environment variable will always be set as True.
- DARC_CHECK_CONTENT_TYPE¶
  Whether to check the content type through HEAD requests before crawling (when calling extract_links(), read_sitemap() and read_hosts()).
- DARC_CPU¶
  Number of concurrent processes. If not provided, then the number of system CPUs will be used.

Note
DARC_MULTIPROCESSING and DARC_MULTITHREADING can NOT be toggled at the same time.
- DARC_USER¶
  Type: str
  Default: current login user (c.f. getpass.getuser())
  Non-root user for proxies.
Data Storage¶
See also
See darc.save
for more information about source saving.
See darc.db
for more information about database integration.
- DB_URL¶
  Type: str (url)
  URL to the RDS (relational database storage) backend.

  Important
  The task queues will be saved to the darc database; the data submission will be saved to the darcweb database.
  Thus, when providing this environment variable, please do NOT specify the database name.
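A hypothetical DB_URL value could look like the following; the scheme, credentials and host are placeholders, and no database name is appended, as noted above.

# Hypothetical example value for DB_URL; the scheme, credentials
# and host are placeholders. No database name is appended.
import os

os.environ['DB_URL'] = 'mysql://user:password@localhost:3306'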
- DARC_BULK_SIZE¶
  Type: int
  Default: 100
  Bulk size for updating databases.

  See also
  darc.db.save_requests()
  darc.db.save_selenium()
- LOCK_TIMEOUT¶
  Type: float
  Default: 10
  Lock blocking timeout.

  Note
  If set to inf (infinity), no timeout will be applied.

  See also
  Get a lock from darc.db.get_lock().
- DARC_MAX_POOL¶
  Type: int
  Default: 1_000
  Maximum number of links loaded from the database.

  Note
  If set to inf (infinity), no limit will be applied.

  See also
  darc.db.load_requests()
  darc.db.load_selenium()
- REDIS_LOCK¶
  Whether to use a Redis (Lua) lock to ensure process/thread-safe operations.

  See also
  Toggles the behaviour of darc.db.get_lock().
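The lock toggled by REDIS_LOCK is conceptually similar to the blocking lock provided by redis-py, sketched below with the LOCK_TIMEOUT default of 10 seconds. This is illustrative only, not darc's actual implementation, and the lock name is a placeholder.

# Illustrative only: the sort of distributed lock REDIS_LOCK enables.
# The lock name is a placeholder.
import redis

client = redis.Redis()
with client.lock('queue_requests', blocking_timeout=10.0):
    # critical section, e.g. popping a batch of links from the queue
    ...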
Web Crawlers¶
- DARC_WAIT¶
  Type: float
  Default: 60
  Time interval between each round when the requests and/or selenium databases are empty.
- DARC_SAVE¶
  Whether to save the processed link back to the database.

  Note
  If DARC_SAVE is True, then DARC_SAVE_REQUESTS and DARC_SAVE_SELENIUM will be forced to be True.

  See also
  See darc.db for more information about the link database.
- DARC_SAVE_REQUESTS¶
  Whether to save links crawled by crawler() back to the requests database.

  See also
  See darc.db for more information about the link database.
- DARC_SAVE_SELENIUM¶
  Whether to save links loaded by loader() back to the selenium database.

  See also
  See darc.db for more information about the link database.
- TIME_CACHE¶
  Type: float
  Default: 60
  Time delta for caches in seconds.
  The darc project supports caching for fetched files. TIME_CACHE specifies for how long the fetched files will be cached and NOT fetched again.

  Note
  If TIME_CACHE is None, then the cache will be kept forever.
- SE_WAIT¶
  Type: float
  Default: 60
  Time to wait for selenium to finish loading pages.

  Note
  Internally, selenium will wait for the browser to finish loading the pages before returning (i.e. the web API event DOMContentLoaded). However, some extra scripts may take more time running after the event.
White / Black Lists¶
- LINK_WHITE_LIST¶
  Type: List[str] (JSON)
  Default: []
  White list of hostnames that should be crawled.

  Note
  Regular expressions are supported.
- LINK_BLACK_LIST¶
  Type: List[str] (JSON)
  Default: []
  Black list of hostnames that should NOT be crawled.

  Note
  Regular expressions are supported.
- MIME_WHITE_LIST¶
  Type: List[str] (JSON)
  Default: []
  White list of content types that should be crawled.

  Note
  Regular expressions are supported.
- MIME_BLACK_LIST¶
  Type: List[str] (JSON)
  Default: []
  Black list of content types that should NOT be crawled.

  Note
  Regular expressions are supported.
- PROXY_WHITE_LIST¶
  Type: List[str] (JSON)
  Default: []
  White list of proxy types that should be crawled.

  Note
  The proxy types are case insensitive.
- PROXY_BLACK_LIST¶
  Type: List[str] (JSON)
  Default: []
  Black list of proxy types that should NOT be crawled.

  Note
  The proxy types are case insensitive.
Note
If provided, LINK_WHITE_LIST, LINK_BLACK_LIST, MIME_WHITE_LIST, MIME_BLACK_LIST, PROXY_WHITE_LIST and PROXY_BLACK_LIST should all be JSON encoded strings.
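For example, such JSON encoded values could be produced as follows; the patterns shown are arbitrary placeholders.

# Sketch: producing JSON encoded white/black list values.
# The patterns below are arbitrary placeholders.
import json
import os

os.environ['LINK_WHITE_LIST'] = json.dumps([r'.*\.onion'])
os.environ['MIME_BLACK_LIST'] = json.dumps([r'image/.*'])
os.environ['PROXY_WHITE_LIST'] = json.dumps(['tor'])  # case insensitive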
Data Submission¶
Note
If API_NEW_HOST, API_REQUESTS and API_SELENIUM are None, the corresponding
submit function will save the JSON data in the path
specified by PATH_DATA.
Tor Proxy Configuration¶
- TOR_PASS¶
  Tor controller authentication token.

  Note
  If not provided, it will be requested at runtime.
- TOR_WAIT¶
  Type: float
  Default: 90
  Time after which the attempt to start Tor is aborted.

  Note
  If not provided, there will be NO timeout.
- TOR_CFG¶
  Type: Dict[str, Any] (JSON)
  Default: {}
  Tor bootstrap configuration for stem.process.launch_tor_with_config().

  Note
  If provided, it should be a JSON encoded string.
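For instance, a JSON encoded configuration could be prepared as follows; the options shown are placeholders, and any option accepted by stem.process.launch_tor_with_config() may appear.

# Sketch: a JSON encoded TOR_CFG value. The options shown are
# placeholders for whatever launch_tor_with_config() accepts.
import json
import os

os.environ['TOR_CFG'] = json.dumps({
    'SocksPort': '9050',
    'ControlPort': '9051',
})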
I2P Proxy Configuration¶
- I2P_WAIT¶
  Type: float
  Default: 90
  Time after which the attempt to start I2P is aborted.

  Note
  If not provided, there will be NO timeout.
- I2P_ARGS¶
  Type: str (Shell)
  Default: ''
  I2P bootstrap arguments for i2prouter start.
  If provided, it will be parsed as command line arguments (c.f. shlex.split()).

  Note
  The command will be run as DARC_USER, if the current user (c.f. getpass.getuser()) is root.
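The same shell-style parsing applies to ZERONET_ARGS and FREENET_ARGS below. For example, the following shows how such an argument string is split; the flags are arbitrary placeholders.

# Sketch: how shell-style argument strings such as I2P_ARGS are
# split into argument lists. The flags are arbitrary placeholders.
import shlex

print(shlex.split("--option 'some value'"))
# prints: ['--option', 'some value']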
ZeroNet Proxy Configuration¶
- ZERONET_WAIT¶
  Type: float
  Default: 90
  Time after which the attempt to start ZeroNet is aborted.

  Note
  If not provided, there will be NO timeout.
- ZERONET_ARGS¶
  Type: str (Shell)
  Default: ''
  ZeroNet bootstrap arguments for ZeroNet.sh main.

  Note
  If provided, it will be parsed as command line arguments (c.f. shlex.split()).
Freenet Proxy Configuration¶
- FREENET_WAIT¶
  Type: float
  Default: 90
  Time after which the attempt to start Freenet is aborted.

  Note
  If not provided, there will be NO timeout.
- FREENET_ARGS¶
  Type: str (Shell)
  Default: ''
  Freenet bootstrap arguments for run.sh start.
  If provided, it will be parsed as command line arguments (c.f. shlex.split()).

  Note
  The command will be run as DARC_USER, if the current user (c.f. getpass.getuser()) is root.