Deployment

... or how to run secator anywhere.

Once you have started using secator locally, you can deploy it on remote instances.


Amazon Web Services (AWS)

Amazon Web Services (AWS) is a subsidiary of Amazon that provides on-demand cloud computing platforms and APIs to individuals and companies.

Elastic Compute Cloud

Deploy on a single EC2 instance

To deploy secator on a single EC2 instance, we will install RabbitMQ, Redis, and a Celery worker running secator on that instance.

Step 1: Create an EC2 instance with AMI Ubuntu
  • Go to the AWS Management Console

  • Create an EC2 instance using the Ubuntu AMI

  • Configure Security Groups (see the AWS CLI sketch after this list):

    • Allow port 6379 (Redis)

    • Allow port 5672 (RabbitMQ)

  • SSH to your created instance
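
If you prefer the AWS CLI over the console, the ingress rules above can be opened with something like the following. This is a hedged sketch: <SECURITY_GROUP_ID> and <YOUR_IP> are placeholders for your instance's security group and your own public IP, and you should restrict access to your IP rather than opening the ports to the world.

# open the RabbitMQ and Redis ports to your IP only (placeholders, adjust to your setup)
aws ec2 authorize-security-group-ingress --group-id <SECURITY_GROUP_ID> --protocol tcp --port 5672 --cidr <YOUR_IP>/32
aws ec2 authorize-security-group-ingress --group-id <SECURITY_GROUP_ID> --protocol tcp --port 6379 --cidr <YOUR_IP>/32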

Step 2: Install RabbitMQ as a task broker

Celery needs a task broker to send tasks to remote workers.

sudo apt-get install curl gnupg apt-transport-https -y
curl -1sLf "https://keys.openpgp.org/vks/v1/by-fingerprint/0A9AF2115F4687BD29803A206B73A36E6026DFCA" | sudo gpg --dearmor | sudo tee /usr/share/keyrings/com.rabbitmq.team.gpg > /dev/null
curl -1sLf "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0xf77f1eda57ebb1cc" | sudo gpg --dearmor | sudo tee /usr/share/keyrings/net.launchpad.ppa.rabbitmq.erlang.gpg > /dev/null
curl -1sLf "https://packagecloud.io/rabbitmq/rabbitmq-server/gpgkey" | sudo gpg --dearmor | sudo tee /usr/share/keyrings/io.packagecloud.rabbitmq.gpg > /dev/null
sudo apt-get update -y
sudo apt-get install -y erlang-base \
    erlang-asn1 erlang-crypto erlang-eldap erlang-ftp erlang-inets \
    erlang-mnesia erlang-os-mon erlang-parsetools erlang-public-key \
    erlang-runtime-tools erlang-snmp erlang-ssl \
    erlang-syntax-tools erlang-tftp erlang-tools erlang-xmerl
sudo apt-get install rabbitmq-server -y --fix-missing
sudo rabbitmq-plugins enable rabbitmq_management
sudo rabbitmqctl add_user secator <RABBITMQ_PASSWORD>
sudo rabbitmqctl set_user_tags secator administrator
sudo rabbitmqctl set_permissions -p / secator ".*" ".*" ".*"

Make sure you replace <RABBITMQ_PASSWORD> with a strong password that you generate.
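
To confirm the broker is up and the secator user exists, you can run a quick sanity check with standard RabbitMQ and system tools (optional, not required by secator):

sudo rabbitmqctl status         # broker is running
sudo rabbitmqctl list_users     # should list the "secator" user
sudo ss -ltnp | grep 5672       # AMQP port is listening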

Step 3: Install Redis as a storage backend

Celery needs a storage backend to store results. secator uses the storage backend to print results in real-time.

sudo apt install redis-server
sudo vi /etc/redis/redis.conf
# set requirepass to <REDIS_PASSWORD>
# comment the "bind 127.0.0.1 ::1" line
# change "protected-mode" to "no"

sudo /etc/init.d/redis-server restart

Make sure you replace <REDIS_PASSWORD> with a strong password that you generate.
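
To confirm Redis accepts the password you set, a quick check with redis-cli (optional):

redis-cli -a <REDIS_PASSWORD> ping   # should reply PONG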

Step 4: Deploy a secator worker

First, set up secator using the all-in-one bash setup script:

wget -O - https://raw.githubusercontent.com/freelabz/secator/main/scripts/install.sh | sh

Then, set the RabbitMQ and Redis connection details in secator's config:

secator config set celery.broker_url amqp://secator:<RABBITMQ_PASSWORD>@localhost:5672/
secator config set celery.result_backend redis://default:<REDIS_PASSWORD>@localhost:6379/0

Finally, run a secator worker:

nohup secator worker > worker.log 2>&1 &  # start in background and save logs
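
The nohup command above is the quickest way to keep the worker running after you log out. If you also want the worker to survive reboots, one option is a small systemd unit like the sketch below; the unit name, user, and binary path are assumptions (adjust them to wherever install.sh placed the secator binary on your instance):

# write a minimal systemd unit for the worker (paths and user are assumptions)
sudo tee /etc/systemd/system/secator-worker.service > /dev/null <<'EOF'
[Unit]
Description=secator Celery worker
After=network-online.target

[Service]
User=ubuntu
ExecStart=/home/ubuntu/.local/bin/secator worker
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now secator-worker
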
Step 5: Run a task from your local machine

On your local machine, set the RabbitMQ and Redis connection details in secator's config:

secator config set celery.broker_url amqp://secator:<RABBITMQ_PASSWORD>@<EC2_PUBLIC_IP>:5672/
secator config set celery.result_backend redis://default:<REDIS_PASSWORD>@<EC2_PUBLIC_IP>:6379/0

Run a test task:

secator x httpx wikipedia.org

You should get an output like the following:

                         __            
   ________  _________ _/ /_____  _____
  / ___/ _ \/ ___/ __ `/ __/ __ \/ ___/
 (__  /  __/ /__/ /_/ / /_/ /_/ / /    
/____/\___/\___/\__,_/\__/\____/_/     v0.0.1

                    freelabz.com

Celery worker is alive !
╭──────── Task httpx ─────────╮
│ 📜 Description: DotMap()    │
│ 👷 Workspace: default       │
│ 🍐 Targets:                 │
│    • wikipedia.org          │
│ 📌 Options:                 │
│    • follow_redirect: False │
│    • threads: 50            │
│    • debug_resp: False      │
╰─────────────────────────────╯
[10:20:54] 🎉 Task httpx sent to Celery worker...                                                                                                                                        _base.py:614
🏆 Live results:
🔗 https://wikipedia.org [301] [301 Moved Permanently] [mw1415.eqiad.wmnet] [HSTS] [text/html] [234]

Deploy on multiple EC2 instances

If you want a more scalable architecture, we recommend deploying RabbitMQ, Redis, and the Celery workers on separate EC2 instances.

The steps are exactly the same as for Deploy on a single EC2 instance, except that steps 2, 3, and 4 will each be run on a separate EC2 instance.

You can repeat step 4 on more instances to increase the number of workers.
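
Concretely, each worker instance points secator at the dedicated broker and backend hosts instead of localhost. The commands are the same as in steps 4 and 5; <RABBITMQ_HOST> and <REDIS_HOST> below are placeholders for the addresses of the RabbitMQ and Redis instances:

# run on every worker instance (host placeholders are illustrative)
secator config set celery.broker_url amqp://secator:<RABBITMQ_PASSWORD>@<RABBITMQ_HOST>:5672/
secator config set celery.result_backend redis://default:<REDIS_PASSWORD>@<REDIS_HOST>:6379/0
nohup secator worker > worker.log 2>&1 &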


Google Cloud Platform (GCP)

Google Cloud Platform (GCP) is a suite of cloud services that offers virtual machines, internal networks, VPN connections, disk storage, and more.

Google Compute Engine

Deploy on a single GCE instance

To deploy secator on a single GCE instance, we will install RabbitMQ, Redis, and a Celery worker running secator on that instance.

Step 1: Create a GCE instance
  • Go to the Google Cloud Console

  • Create a GCE instance using the Debian image

  • Create firewall rules in Network > Firewall (see the gcloud sketch after this list):

    • Allow port 6379 (Redis)

    • Allow port 5672 (RabbitMQ)

  • SSH to your created instance
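
If you prefer the gcloud CLI over the console, the firewall rules above can be created with something like the following. This is a hedged sketch: the rule names and network are assumptions, and <YOUR_IP> is a placeholder (restrict source ranges to your own IP rather than 0.0.0.0/0):

# open the RabbitMQ and Redis ports to your IP only (rule names and network are assumptions)
gcloud compute firewall-rules create allow-rabbitmq --network default --allow tcp:5672 --source-ranges <YOUR_IP>/32
gcloud compute firewall-rules create allow-redis --network default --allow tcp:6379 --source-ranges <YOUR_IP>/32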

Step 2: Install RabbitMQ as a task broker

Celery needs a task broker to send tasks to remote workers.

sudo apt-get install curl gnupg apt-transport-https -y
curl -1sLf "https://keys.openpgp.org/vks/v1/by-fingerprint/0A9AF2115F4687BD29803A206B73A36E6026DFCA" | sudo gpg --dearmor | sudo tee /usr/share/keyrings/com.rabbitmq.team.gpg > /dev/null
curl -1sLf "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0xf77f1eda57ebb1cc" | sudo gpg --dearmor | sudo tee /usr/share/keyrings/net.launchpad.ppa.rabbitmq.erlang.gpg > /dev/null
curl -1sLf "https://packagecloud.io/rabbitmq/rabbitmq-server/gpgkey" | sudo gpg --dearmor | sudo tee /usr/share/keyrings/io.packagecloud.rabbitmq.gpg > /dev/null
sudo apt-get update -y
sudo apt-get install -y erlang-base \
    erlang-asn1 erlang-crypto erlang-eldap erlang-ftp erlang-inets \
    erlang-mnesia erlang-os-mon erlang-parsetools erlang-public-key \
    erlang-runtime-tools erlang-snmp erlang-ssl \
    erlang-syntax-tools erlang-tftp erlang-tools erlang-xmerl
sudo apt-get install rabbitmq-server -y --fix-missing
sudo rabbitmq-plugins enable rabbitmq_management
sudo rabbitmqctl add_user secator <RABBITMQ_PASSWORD>
sudo rabbitmqctl set_user_tags secator administrator
sudo rabbitmqctl set_permissions -p / secator ".*" ".*" ".*"

Make sure you replace <RABBITMQ_PASSWORD> with a strong password that you generate.

Step 3: Install Redis as a storage backend

Celery needs a storage backend to store results. secator uses the storage backend to print results in real-time.

sudo apt install redis-server
sudo vi /etc/redis/redis.conf
# set requirepass to <REDIS_PASSWORD>
# comment the "bind 127.0.0.1 ::1" line
# change "protected-mode" to "no"

sudo /etc/init.d/redis-server restart

Make sure you replace <REDIS_PASSWORD> with a strong password that you generate.

Step 4: Deploy a secator worker

First, set up secator using the all-in-one bash setup script:

wget -O - https://raw.githubusercontent.com/freelabz/secator/main/scripts/install.sh | sh

Then, set the RabbitMQ and Redis connection details in secator's config:

secator config set celery.broker_url amqp://secator:<RABBITMQ_PASSWORD>@localhost:5672/
secator config set celery.result_backend redis://default:<REDIS_PASSWORD>@localhost:6379/0

Finally, run a secator worker:

nohup secator worker > worker.log 2>&1 &  # start in background and save logs

Step 5: Run a task from your local machine

First, set the RabbitMQ and Redis connection details in secator's config:

secator config set celery.broker_url amqp://secator:<RABBITMQ_PASSWORD>@<GCE_PUBLIC_IP>:5672/
secator config set celery.result_backend redis://default:<REDIS_PASSWORD>@<GCE_PUBLIC_IP>:6379/0

Run a test task:

secator x httpx wikipedia.org

You should get an output like the following:

                         __            
   ________  _________ _/ /_____  _____
  / ___/ _ \/ ___/ __ `/ __/ __ \/ ___/
 (__  /  __/ /__/ /_/ / /_/ /_/ / /    
/____/\___/\___/\__,_/\__/\____/_/     v0.0.1

                    freelabz.com

Celery worker is alive !
╭──────── Task httpx ─────────╮
│ 📜 Description: DotMap()    │
│ 👷 Workspace: default       │
│ 🍐 Targets:                 │
│    • wikipedia.org          │
│ 📌 Options:                 │
│    • follow_redirect: False │
│    • threads: 50            │
│    • debug_resp: False      │
╰─────────────────────────────╯
[10:20:54] 🎉 Task httpx sent to Celery worker...                                                                                                                                        _base.py:614
🏆 Live results:
🔗 https://wikipedia.org [301] [301 Moved Permanently] [mw1415.eqiad.wmnet] [HSTS] [text/html] [234]

Deploy on multiple GCE instances

If you want a more scalable architecture, we recommend deploying RabbitMQ, Redis, and the Celery workers on separate machines.

The steps are exactly the same as for the previous section, except that steps 2, 3, and 4 will each be run on a separate GCE instance.

You can repeat step 4 on more instances to increase the number of workers.

Google Kubernetes Engine [WIP]

Cloud Run [WIP]


Axiom [WIP]

Axiom is a dynamic infrastructure framework for efficiently working with multi-cloud environments, and for building and deploying repeatable infrastructure focused on offensive and defensive security.


Bare metal

Kubernetes

A Helm chart is available in the repository: https://github.com/freelabz/secator/tree/main/helm
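
For reference, installing a chart from a local checkout generally looks like the sketch below; the release name and namespace are assumptions, and the chart's values are not documented here, so treat this as a starting point rather than the supported install path:

# a minimal sketch, assuming the chart in the repository's helm/ directory installs as-is
git clone https://github.com/freelabz/secator
helm install secator ./secator/helm --namespace secator --create-namespace
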
Docker-compose [WIP]