Deployment
... or how to run secator anywhere.
Once you have started using secator locally, you can deploy it on remote instances.
Amazon Web Services (AWS)
Amazon Web Services (AWS) is a subsidiary of Amazon that provides on-demand cloud computing platforms and APIs to individuals, companies, and governments.
Elastic Compute Cloud
Deploy on a single EC2 instance

To deploy secator on AWS on a single EC2 instance, we will install RabbitMQ, Redis and a Celery worker running secator on the instance.
Step 1: Create an EC2 instance with AMI Ubuntu
Go to the AWS Management Console
Create an EC2 instance using the Ubuntu AMI
Configure Security Groups:
Allow port 6379 (Redis)
Allow port 5672 (RabbitMQ)
SSH to your created instance
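The console steps above can also be scripted with the AWS CLI. The AMI ID, key pair, and security group below are placeholders to adapt to your region and account:

```shell
# Create the instance (replace ami-xxxxxxxx with an Ubuntu AMI ID for your region)
aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type t2.medium \
  --key-name my-key \
  --security-group-ids sg-xxxxxxxx

# Open the Redis and RabbitMQ ports; restrict --cidr to your own IP in practice
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 6379 --cidr <YOUR_IP>/32
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 5672 --cidr <YOUR_IP>/32
```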
Step 2: Install RabbitMQ as a task broker
Celery needs a task broker to send tasks to remote workers.
Make sure you replace <RABBITMQ_PASSWORD> with a strong password that you generate.
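A minimal install on Ubuntu looks like the following; the `secator` user name is an example, any dedicated user works:

```shell
# Install RabbitMQ from the Ubuntu repositories
sudo apt update && sudo apt install -y rabbitmq-server

# Create a dedicated user; replace <RABBITMQ_PASSWORD> with a strong generated password
sudo rabbitmqctl add_user secator <RABBITMQ_PASSWORD>
sudo rabbitmqctl set_permissions -p / secator ".*" ".*" ".*"

# Remove the default guest user since the broker is reachable over the network
sudo rabbitmqctl delete_user guest
```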
Step 3: Install Redis as a storage backend
Celery needs a storage backend to store results. secator uses the storage backend to print results in real-time.
Make sure you replace <REDIS_PASSWORD> with a strong password that you generate.
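A minimal sketch on Ubuntu, editing the default config in place so Redis requires a password and accepts remote connections:

```shell
# Install Redis from the Ubuntu repositories
sudo apt install -y redis-server

# Require a password; replace <REDIS_PASSWORD> with a strong generated password
sudo sed -i 's/^# requirepass .*$/requirepass <REDIS_PASSWORD>/' /etc/redis/redis.conf

# Listen on all interfaces instead of localhost only
sudo sed -i 's/^bind 127\.0\.0\.1.*$/bind 0.0.0.0/' /etc/redis/redis.conf

sudo systemctl restart redis-server
```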
Step 4: Deploy a secator worker
First, set up secator using the all-in-one bash setup script:
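For example (the script URL below is an assumption; check the secator README for the canonical location):

```shell
# Download and run the all-in-one setup script
wget -O - https://raw.githubusercontent.com/freelabz/secator/main/scripts/install.sh | sudo sh
```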
Then, set the RabbitMQ and Redis connection details in secator's config:
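Something like the following, assuming `celery.broker_url` and `celery.result_backend` are the relevant config keys (run `secator config` to check the exact names). On a single-instance setup both services are local:

```shell
# Point Celery at the local RabbitMQ broker and Redis backend
secator config set celery.broker_url 'amqp://secator:<RABBITMQ_PASSWORD>@127.0.0.1:5672//'
secator config set celery.result_backend 'redis://:<REDIS_PASSWORD>@127.0.0.1:6379/0'
```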
Finally, run a secator worker:
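For example:

```shell
# Start a Celery worker that consumes secator tasks
secator worker
```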
Step 5: Run a task from your local machine
First, set the RabbitMQ and Redis connection details in secator's config on your local machine:
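This mirrors the worker-side config, except the broker and backend now point at the instance's public IP (`<EC2_PUBLIC_IP>` is a placeholder; the config keys are assumptions, check `secator config` for the exact names):

```shell
# Point the local client at the remote broker and backend
secator config set celery.broker_url 'amqp://secator:<RABBITMQ_PASSWORD>@<EC2_PUBLIC_IP>:5672//'
secator config set celery.result_backend 'redis://:<REDIS_PASSWORD>@<EC2_PUBLIC_IP>:6379/0'
```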
Run a test task:
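For example, running a simple httpx probe; if the remote setup is correct the task executes on the worker and results stream back locally:

```shell
# Run an httpx task against a test host
secator x httpx example.com
```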
You should get an output like the following:
Deploy on multiple EC2 instances
If you want a more scalable architecture, we recommend deploying RabbitMQ, Redis, and Celery workers on different EC2 instances.
The steps are exactly the same as for Deploy on a single EC2 instance, except that steps 2, 3, and 4 will each be run on a separate EC2 instance.
You can repeat step 4 on more instances to increase the number of workers.
Google Cloud Platform (GCP)
The Google Cloud Platform (GCP) is a suite of cloud services that offers server space on virtual machines, internal networks, VPN connections, disk storage, and more.
Google Compute Engine
Deploy on a single GCE instance

To deploy secator on a single GCE machine, we will install RabbitMQ, Redis and a Celery worker running secator on the instance.
Step 1: Create a GCE instance
Go to the Google Cloud Console
Create a GCE instance using the Debian image
Create firewall rules in Network > Firewall:
Allow port 6379 (Redis)
Allow port 5672 (RabbitMQ)
SSH to your created instance
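The console steps above can also be scripted with the gcloud CLI; the instance name, zone, and machine type below are example values:

```shell
# Create a Debian GCE instance
gcloud compute instances create secator-worker \
  --zone us-central1-a \
  --machine-type e2-medium \
  --image-family debian-12 \
  --image-project debian-cloud

# Open the Redis and RabbitMQ ports; restrict --source-ranges to your own IP in practice
gcloud compute firewall-rules create allow-redis --allow tcp:6379
gcloud compute firewall-rules create allow-rabbitmq --allow tcp:5672
```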
Step 2: Install RabbitMQ as a task broker
Celery needs a task broker to send tasks to remote workers.
Make sure you replace <RABBITMQ_PASSWORD> with a strong password that you generate.
Step 3: Install Redis as a storage backend
Celery needs a storage backend to store results. secator uses the storage backend to print results in real-time.
Make sure you replace <REDIS_PASSWORD> with a strong password that you generate.
Step 4: Deploy a secator worker
First, set up secator using the all-in-one bash setup script:
Then, set the RabbitMQ and Redis connection details in secator's config:
Finally, run a secator worker:
Step 5: Run a task from your local machine
First, set the RabbitMQ and Redis connection details in secator's config:
Run a test task:
You should get an output like the following:
Deploy on multiple GCE instances
If you want a more scalable architecture, we recommend deploying RabbitMQ, Redis, and Celery workers on different machines.
The steps are exactly the same as for the previous section, except that steps 2, 3, and 4 will each be run on a separate GCE instance.
You can repeat step 4 on more instances to increase the number of workers.
Google Kubernetes Engine [WIP]
Cloud Run [WIP]
Axiom [WIP]
Axiom is a dynamic infrastructure framework to efficiently work with multi-cloud environments, build and deploy repeatable infrastructure focused on offensive and defensive security.
Bare metal
Kubernetes
A Helm chart is available in the repository: https://github.com/freelabz/secator/tree/main/helm.
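A minimal install sketch, cloning the repository and installing the chart from its local path (the release name `secator` is an example; values such as broker and backend URLs would be overridden per your setup):

```shell
# Clone the repository and install the Helm chart it ships
git clone https://github.com/freelabz/secator
helm install secator ./secator/helm
```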
Docker-compose [WIP]