Distributed runs with Celery

... or how you can 10x your scanning speed and massively parallelize your workflows.


Prerequisite

To use distributed runs, make sure the worker addon is installed:

secator install addons worker

Step 1: Configure a broker [optional]

This step is optional: if you do not configure a broker, secator falls back to using the file system as both the broker and the result backend. Note that this only works when the client and the worker run on the same machine, since they must share the same file system.
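To see what your current setup uses, you can print the relevant settings. This sketch assumes your secator version supports `secator config get` alongside `secator config set`; check `secator config --help` if the subcommand differs.

```shell
# Print the current Celery broker and result backend
# (empty or filesystem values mean the default file-system transport is in use)
secator config get celery.broker_url
secator config get celery.result_backend
```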

You set up a task queue using Celery with a broker and result backend of your choice, then run Celery workers that consume tasks from the queue.

The following example uses Redis, but any Celery-supported broker and result backend will work.

Install the redis addon:

secator install addons redis

Install Redis:

sudo apt install redis

Enable Redis at boot and start it:

sudo systemctl enable redis
sudo systemctl start redis

Configure secator to use Redis:

secator config set celery.broker_url redis://<REDIS_IP>:6379/0
secator config set celery.result_backend redis://<REDIS_IP>:6379/0

Make sure you replace <REDIS_IP> in the commands above with the IP address of your Redis server.
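Before starting a worker, it is worth checking that Redis is reachable from both the client and the worker machines. A quick way to do this is with the standard redis-cli tool:

```shell
# Should print "PONG" if Redis is up and reachable
# (replace <REDIS_IP> with your Redis server's IP, as above)
redis-cli -h <REDIS_IP> -p 6379 ping
```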


Step 2: Start a Celery worker

secator worker

Step 3: Run a task, workflow or scan

secator w host_scan wikipedia.org

If you want to run synchronously (bypassing the broker entirely), use the --sync flag (CLI) or the sync kwarg (Python).
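For example, to run the same workflow as above without going through the broker (useful for debugging, or when no worker is running):

```shell
# Run the workflow synchronously in the current process,
# bypassing the Celery broker and worker
secator w host_scan wikipedia.org --sync
```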