Make sure you replace <RABBITMQ_PASSWORD> with a strong password that you generate.
Step 3: Install Redis as a storage backend
Celery needs a storage backend to store results. secator uses the storage backend to print results in real time.
sudo apt install redis-server
sudo vi /etc/redis/redis.conf
# set requirepass to <REDIS_PASSWORD>
# comment the "bind 127.0.0.1 ::1" line
# change "protected-mode" to "no"
sudo /etc/init.d/redis-server restart
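Optionally, verify that Redis accepts your new password before moving on (redis-cli is installed alongside redis-server):
redis-cli -a <REDIS_PASSWORD> ping # should reply PONG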
Make sure you replace <REDIS_PASSWORD> with a strong password that you generate.
Step 4: Deploy a secator worker
First, set up secator using the all-in-one bash setup script:
wget -O - https://raw.githubusercontent.com/freelabz/secator/main/scripts/install.sh | sh
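Once the script completes, check that the secator CLI is available:
secator --help # should print the list of available commands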
Then, set the RabbitMQ and Redis connection details in secator's config:
secator config set celery.broker_url amqp://secator:<RABBITMQ_PASSWORD>@localhost:5672/
secator config set celery.result_backend redis://default:<REDIS_PASSWORD>@localhost:6379/0
Finally, run a secator worker:
nohup secator worker > worker.log 2>&1 & # start in background and save logs
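To verify that the worker started and connected to RabbitMQ, tail its log file; Celery logs a ready banner once the connection is up:
tail -f worker.log # look for a line ending in "ready."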
Step 5: Run a task from your local machine
First, set the RabbitMQ and Redis connection details in secator's config:
secator config set celery.broker_url amqp://secator:<RABBITMQ_PASSWORD>@<EC2_PUBLIC_IP>:5672/
secator config set celery.result_backend redis://default:<REDIS_PASSWORD>@<EC2_PUBLIC_IP>:6379/0
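Tasks you run locally will now be queued on the remote worker, with results streamed back through Redis. As a quick smoke test, you could run a simple task (httpx here, against a placeholder target; any installed task works):
secator x httpx example.com # runs on the remote worker, results print locally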
If you want a more scalable architecture, we recommend deploying RabbitMQ, Redis, and Celery workers on different EC2 instances.
The steps are exactly the same as for Deploy on a single EC2 instance, except that steps 2, 3, and 4 will each be run on a separate EC2 instance.
You can repeat step 4 on more instances to increase the number of workers.
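In that setup, point each worker (and your local machine) at the instances hosting the broker and the backend instead of localhost; the <RABBITMQ_INSTANCE_IP> and <REDIS_INSTANCE_IP> placeholders below are illustrative:
secator config set celery.broker_url amqp://secator:<RABBITMQ_PASSWORD>@<RABBITMQ_INSTANCE_IP>:5672/
secator config set celery.result_backend redis://default:<REDIS_PASSWORD>@<REDIS_INSTANCE_IP>:6379/0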
Google Cloud Platform (GCP)
Google Cloud Platform (GCP) is a suite of cloud services that offers virtual machines, internal networks, VPN connections, disk storage, and more.
Google Compute Engine
Deploy on a single GCE instance
To deploy secator on a single GCE machine, we will install RabbitMQ, Redis and a Celery worker running secator on the instance.
Step 1: Create a GCE instance
Go to the Google Cloud Console
Create a GCE instance using the Debian image
Create firewall rules in Network > Firewall:
Allow port 6379 (Redis)
Allow port 5672 (RabbitMQ)
SSH into the instance you created
Step 2: Install RabbitMQ as a task broker
Celery needs a task broker to send tasks to remote workers.
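The commands are the same as on EC2; a minimal sketch, assuming a Debian image and the secator user and default vhost referenced in the connection URLs below:
sudo apt install rabbitmq-server
sudo rabbitmqctl add_user secator <RABBITMQ_PASSWORD>
sudo rabbitmqctl set_permissions -p / secator ".*" ".*" ".*"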
Make sure you replace <RABBITMQ_PASSWORD> with a strong password that you generate.
Step 3: Install Redis as a storage backend
Celery needs a storage backend to store results. secator uses the storage backend to print results in real time.
sudo apt install redis-server
sudo vi /etc/redis/redis.conf
# set requirepass to <REDIS_PASSWORD>
# comment the "bind 127.0.0.1 ::1" line
# change "protected-mode" to "no"
sudo /etc/init.d/redis-server restart
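Optionally, verify that Redis accepts your new password before moving on (redis-cli is installed alongside redis-server):
redis-cli -a <REDIS_PASSWORD> ping # should reply PONG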
Make sure you replace <REDIS_PASSWORD> with a strong password that you generate.
Step 4: Deploy a secator worker
First, set up secator using the all-in-one bash setup script:
wget -O - https://raw.githubusercontent.com/freelabz/secator/main/scripts/install.sh | sh
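Once the script completes, check that the secator CLI is available:
secator --help # should print the list of available commands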
Then, set the RabbitMQ and Redis connection details in secator's config:
secator config set celery.broker_url amqp://secator:<RABBITMQ_PASSWORD>@localhost:5672/
secator config set celery.result_backend redis://default:<REDIS_PASSWORD>@localhost:6379/0
Finally, run a secator worker:
nohup secator worker > worker.log 2>&1 & # start in background and save logs
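To verify that the worker started and connected to RabbitMQ, tail its log file; Celery logs a ready banner once the connection is up:
tail -f worker.log # look for a line ending in "ready."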
Step 5: Run a task from your local machine
First, set the RabbitMQ and Redis connection details in secator's config:
secator config set celery.broker_url amqp://secator:<RABBITMQ_PASSWORD>@<GCE_PUBLIC_IP>:5672/
secator config set celery.result_backend redis://default:<REDIS_PASSWORD>@<GCE_PUBLIC_IP>:6379/0
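Tasks you run locally will now be queued on the remote worker, with results streamed back through Redis. As a quick smoke test, you could run a simple task (httpx here, against a placeholder target; any installed task works):
secator x httpx example.com # runs on the remote worker, results print locally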
If you want a more scalable architecture, we recommend deploying RabbitMQ, Redis, and Celery workers on different machines.
The steps are exactly the same as for Deploy on a single GCE instance, except that steps 2, 3, and 4 will each be run on a separate GCE instance.
You can repeat step 4 on more instances to increase the number of workers.
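In that setup, point each worker (and your local machine) at the instances hosting the broker and the backend instead of localhost; the <RABBITMQ_INSTANCE_IP> and <REDIS_INSTANCE_IP> placeholders below are illustrative:
secator config set celery.broker_url amqp://secator:<RABBITMQ_PASSWORD>@<RABBITMQ_INSTANCE_IP>:5672/
secator config set celery.result_backend redis://default:<REDIS_PASSWORD>@<REDIS_INSTANCE_IP>:6379/0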
Google Kubernetes Engine [WIP]
Cloud Run [WIP]
Axiom [WIP]
Axiom is a dynamic infrastructure framework for working efficiently with multi-cloud environments, and for building and deploying repeatable infrastructure focused on offensive and defensive security.