Make sure you replace <RABBITMQ_PASSWORD> with a strong password that you generate.
Step 3: Install Redis as a storage backend
Celery needs a storage backend to store results. secator uses the storage backend to print results in real time.
sudo apt install redis-server
sudo vi /etc/redis/redis.conf
# set requirepass to <REDIS_PASSWORD>
# comment the "bind 127.0.0.1 ::1" line
# change "protected-mode" to "no"
sudo /etc/init.d/redis-server restart
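To verify that Redis now accepts remote connections, you can ping it from another machine with redis-cli (a quick sanity check; <REDIS_IP> stands for your instance's public IP):
redis-cli -h <REDIS_IP> -a <REDIS_PASSWORD> ping
# expected reply: PONG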
Make sure you replace <REDIS_PASSWORD> with a strong password that you generate.
Step 4: Deploy a secator worker
First, set up secator using the all-in-one bash setup script:
wget -O - https://raw.githubusercontent.com/freelabz/secator/main/scripts/install.sh | sh
Then, set the RabbitMQ and Redis connection details in secator's config:
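The exact config keys can vary between secator versions; the sketch below assumes Celery-style keys (celery.broker_url, celery.result_backend) and an admin RabbitMQ user created in step 2:
secator config set celery.broker_url amqp://admin:<RABBITMQ_PASSWORD>@<RABBITMQ_IP>:5672/
secator config set celery.result_backend redis://:<REDIS_PASSWORD>@<REDIS_IP>:6379/0
Start the worker with secator worker. Running a task such as secator x httpx wikipedia.org from a client configured with the same connection details should then produce output like: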
[secator ASCII banner — v0.0.1, freelabz.com]

Celery worker is alive!

╭──────── Task httpx ─────────╮
│ 📜 Description: DotMap()    │
│ 👷 Workspace: default       │
│ 🍐 Targets:                 │
│   • wikipedia.org           │
│ 📌 Options:                 │
│   • follow_redirect: False  │
│   • threads: 50             │
│   • debug_resp: False       │
╰─────────────────────────────╯
[10:20:54] 🎉 Task httpx sent to Celery worker... _base.py:614

🏆 Live results:
🔗 https://wikipedia.org [301] [301 Moved Permanently] [mw1415.eqiad.wmnet] [HSTS] [text/html] [234]
Deploy on multiple EC2 instances
If you want a more scalable architecture, we recommend deploying RabbitMQ, Redis, and Celery workers on different EC2 instances.
The steps are exactly the same as for Deploy on a single EC2 instance, except that steps 2, 3, and 4 will each be run on a separate EC2 instance.
You can repeat step 4 on more instances to increase the number of workers.
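On each worker instance, step 4 is unchanged except that the connection details point at the dedicated broker and backend instances (a sketch, reusing the config keys assumed above):
secator config set celery.broker_url amqp://admin:<RABBITMQ_PASSWORD>@<RABBITMQ_INSTANCE_IP>:5672/
secator config set celery.result_backend redis://:<REDIS_PASSWORD>@<REDIS_INSTANCE_IP>:6379/0
secator worker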
Google Cloud Platform (GCP)
Google Cloud Platform (GCP) is a suite of cloud services that offers server space on virtual machines, internal networks, VPN connections, disk storage, and more.
Google Compute Engine
Deploy on a single GCE instance
To deploy secator on a single GCE machine, we will install RabbitMQ, Redis and a Celery worker running secator on the instance.
Step 1: Create a GCE instance
Go to the Google Cloud Console
Create a GCE instance using the Debian image
Create firewall rules in Network > Firewall:
Allow port 6379 (Redis)
Allow port 5672 (RabbitMQ)
SSH into the instance you created (the same provisioning can be scripted with gcloud; see the sketch below)
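If you prefer the command line, a gcloud sketch equivalent to the steps above might look like this (the instance name, zone, and image family are illustrative assumptions):
# create a Debian instance
gcloud compute instances create secator-worker --zone=us-central1-a --image-family=debian-12 --image-project=debian-cloud
# open the Redis and RabbitMQ ports
gcloud compute firewall-rules create allow-redis --allow=tcp:6379
gcloud compute firewall-rules create allow-rabbitmq --allow=tcp:5672
# SSH into the instance
gcloud compute ssh secator-worker --zone=us-central1-a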
Step 2: Install RabbitMQ as a task broker
Celery needs a task broker to send tasks to remote workers.
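On Debian, a minimal RabbitMQ setup might look like the following (the admin username and broad permissions are assumptions; adapt them to your environment):
sudo apt install rabbitmq-server
sudo rabbitmqctl add_user admin <RABBITMQ_PASSWORD>
sudo rabbitmqctl set_user_tags admin administrator
sudo rabbitmqctl set_permissions -p / admin ".*" ".*" ".*"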
Make sure you replace <RABBITMQ_PASSWORD> with a strong password that you generate.
Step 3: Install Redis as a storage backend
Celery needs a storage backend to store results. secator uses the storage backend to print results in real time.
sudo apt install redis-server
sudo vi /etc/redis/redis.conf
# set requirepass to <REDIS_PASSWORD>
# comment the "bind 127.0.0.1 ::1" line
# change "protected-mode" to "no"
sudo /etc/init.d/redis-server restart
Make sure you replace <REDIS_PASSWORD> with a strong password that you generate.
Step 4: Deploy a secator worker
First, set up secator using the all-in-one bash setup script:
wget -O - https://raw.githubusercontent.com/freelabz/secator/main/scripts/install.sh | sh
Then, set the RabbitMQ and Redis connection details in secator's config:
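As in the EC2 setup, point secator at the broker and the backend (same assumed celery.* config keys, using the admin user created in step 2), then start the worker:
secator config set celery.broker_url amqp://admin:<RABBITMQ_PASSWORD>@<RABBITMQ_IP>:5672/
secator config set celery.result_backend redis://:<REDIS_PASSWORD>@<REDIS_IP>:6379/0
secator worker
A test task such as secator x httpx wikipedia.org, run from a client configured with the same connection details, should produce output like: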
[secator ASCII banner — v0.0.1, freelabz.com]

Celery worker is alive!

╭──────── Task httpx ─────────╮
│ 📜 Description: DotMap()    │
│ 👷 Workspace: default       │
│ 🍐 Targets:                 │
│   • wikipedia.org           │
│ 📌 Options:                 │
│   • follow_redirect: False  │
│   • threads: 50             │
│   • debug_resp: False       │
╰─────────────────────────────╯
[10:20:54] 🎉 Task httpx sent to Celery worker... _base.py:614

🏆 Live results:
🔗 https://wikipedia.org [301] [301 Moved Permanently] [mw1415.eqiad.wmnet] [HSTS] [text/html] [234]
Deploy on multiple GCE instances
If you want a more scalable architecture, we recommend deploying RabbitMQ, Redis, and Celery workers on different machines.
The steps are exactly the same as for Deploy on a single GCE instance, except that steps 2, 3, and 4 will each be run on a separate GCE instance.
You can repeat step 4 on more instances to increase the number of workers.
Google Kubernetes Engine [WIP]
Cloud Run [WIP]
Axiom [WIP]
Axiom is a dynamic infrastructure framework for efficiently working with multi-cloud environments and for building and deploying repeatable infrastructure focused on offensive and defensive security.