Integrating an external command
... or how to turn a command that you use daily into an overpowered machine.
Creating a task file
Imagine we have a tool named mytool that we want to integrate with secator.
Start by creating a file named mytool.py:
```python
from secator.decorators import task  # required for `secator` to recognize tasks
from secator.runners import Command  # the `secator` runner to use

@task()
class mytool(Command):  # make sure the class name is lowercase and matches the filename
    cmd = 'mytool'  # ... or whatever the name of your external command is
```
Move this file over to:
~/.secator/templates/ (or whatever your dirs.templates in Configuration points to)
OR
secator/tasks/ if you have a Development setup and want to contribute your task implementation to the official secator repository.
Adding an input flag [optional]
If your tool requires an input flag or a list flag to take its targets, for instance:
```sh
mytool -u TARGET
mytool -l TXT_FILE
```
You need to set the input_flag and file_flag class options:
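For example, a minimal sketch using the -u and -l flags shown above:

```python
from secator.decorators import task
from secator.runners import Command

@task()
class mytool(Command):
    cmd = 'mytool'
    input_flag = '-u'  # flag used to pass a single target
    file_flag = '-l'   # flag used to pass a file containing a list of targets
```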
Setting these attributes allows us to run mytool with secator like:
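For instance (hypothetical targets; the exact way secator passes multiple targets, such as writing them to a temporary file, may differ slightly):

```sh
secator x mytool example.com                          # roughly: mytool -u example.com
secator x mytool example.com example.org example.net  # targets written to a file and passed via file_flag
```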
Parsing a command's output
Now that you have a basic implementation working, you need to convert your command's output into structured output (JSON).
Find out what your command's output looks like and pick the corresponding guide:
Read Parsing JSON lines if your tool has an option to stream JSON lines (preferred).
Read Parsing output files if your tool has an option to output to a file (e.g. JSON or CSV).
Read Parsing raw standard output if your tool only outputs to stdout.
Adding more options [optional]
To support more options, you can use the opt_prefix, opts, opt_key_map and opt_value_map attributes.
Assuming mytool has the --delay, --debug and --include-tags options, we would support them this way:
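A sketch of what this could look like. The option metadata keys (type, default, help, is_flag) and the conversions shown are illustrative assumptions; check existing tasks in the secator repository for the exact schema:

```python
from secator.decorators import task
from secator.runners import Command

@task()
class mytool(Command):
    cmd = 'mytool'
    input_flag = '-u'
    file_flag = '-l'
    opt_prefix = '--'  # prefix prepended to option names when building the command line
    opts = {
        'delay': {'type': float, 'default': 0, 'help': 'Delay between requests (seconds)'},
        'debug': {'is_flag': True, 'default': False, 'help': 'Enable debug output'},
        'include_tags': {'type': str, 'help': 'Comma-separated list of tags to include'},
    }
    opt_key_map = {
        'include_tags': 'include-tags',  # secator option name --> actual CLI flag name
    }
    opt_value_map = {
        'include_tags': lambda x: x.replace(' ', ''),  # example conversion applied to the value
    }
```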
With this config, running either of:
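For instance, from the CLI or from Python (the library import assumes mytool was added to secator/tasks/; targets and values are placeholders):

```sh
secator x mytool example.com --delay 1 --debug --include-tags dev,prod
```

```python
from secator.tasks import mytool

for result in mytool('example.com', delay=1, debug=True, include_tags='dev,prod'):
    print(result)
```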
will result in running mytool like:
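A hedged sketch of the resulting command line:

```sh
mytool -u example.com --delay 1 --debug --include-tags dev,prod
```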
Adding an install command [optional]
To support installing your tool with secator, you can set the install_cmd and/or install_github_handle attributes:
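For example (the install command and GitHub handle below are placeholders):

```python
from secator.decorators import task
from secator.runners import Command

@task()
class mytool(Command):
    cmd = 'mytool'
    install_cmd = 'go install github.com/example/mytool@latest'  # shell command used to install the tool
    install_github_handle = 'example/mytool'                     # GitHub repo to fetch release binaries from
```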
Now you can install mytool using:
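For instance, assuming the usual install syntax:

```sh
secator install tools mytool
```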
Using a category [optional]
If your tool fits into one of secator's built-in command categories, you can inherit from its option set:
Http: A tool that makes HTTP requests.
HttpCrawler: A command that crawls URLs (subset of Http).
HttpFuzzer: A command that fuzzes URLs (subset of Http).
You can inherit from these categories and map their options to your command.
For instance, if mytool is an HTTP fuzzer, we would change its implementation like:
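A sketch of what this might look like. The import path for the category base classes and the meta option names in opt_key_map are assumptions; check the secator source for the exact ones:

```python
from secator.decorators import task
from secator.tasks._categories import HttpFuzzer  # assumed location of the category base classes

@task()
class mytool(HttpFuzzer):
    cmd = 'mytool'
    input_flag = '-u'
    file_flag = '-l'
    opt_prefix = '--'
    opt_key_map = {
        'delay': 'delay',    # HttpFuzzer meta option --> mytool's --delay flag
        'header': 'header',  # HttpFuzzer meta option --> mytool's --header flag
    }
```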
With this config, running:
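For instance:

```sh
secator x mytool --help
```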
would list:
The meta options in the HttpFuzzer category that are supported by mytool.
The options only usable by mytool.
For instance, running:
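For example (hypothetical values):

```sh
secator x mytool example.com --delay 1 --header "User-Agent: secator"
```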
would result in running mytool like:
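A hedged sketch of the resulting command line, given the mapping above:

```sh
mytool -u example.com --delay 1 --header "User-Agent: secator"
```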
Supporting proxies [optional]
If your tool supports proxies, secator has first-class support for proxychains, HTTP and SOCKS5 proxies, and can dynamically choose the type of proxy to use based on the following attributes:
proxy_socks5: boolean indicating if your command supports SOCKS5 proxies.
proxy_http: boolean indicating if your command supports HTTP / HTTPS proxies.
proxychains: boolean indicating if your command supports being run with proxychains.
If your tool supports SOCKS5 or HTTP proxies, make sure to have an option called proxy in your opts definition, or it won't be picked up.
If your tool supports proxychains, secator will use the local proxychains binary and proxychains.conf configuration, so make sure those are functional.
Example:
Assuming mytool does not support HTTP or SOCKS5 proxies, but works with proxychains, you can update your task definition like:
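For example (attribute names come from the list above):

```python
from secator.decorators import task
from secator.runners import Command

@task()
class mytool(Command):
    cmd = 'mytool'
    input_flag = '-u'
    proxychains = True    # mytool can be run through the proxychains binary
    proxy_socks5 = False  # no native SOCKS5 proxy support
    proxy_http = False    # no native HTTP / HTTPS proxy support
```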
With the above configuration, running with -proxy <VALUE> would result in the behaviour sketched below.
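For example, the proxychains case (target and proxy values are hypothetical; only this case is shown since it is the only proxy type mytool declares support for):

```sh
secator x mytool example.com -proxy proxychains
```

becomes:

```sh
proxychains mytool -u example.com
```

How other -proxy values (for instance an HTTP or SOCKS5 URL) are handled depends on secator's proxy resolution logic, since mytool declares no native support for those proxy types.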
Hooking onto runner lifecycle
You can hook onto any part of the runner lifecycle by overriding the hook methods (read Lifecycle hooks to learn more).
Example:
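A minimal sketch, assuming hooks such as on_item and on_end exist with the signatures shown (see the Lifecycle hooks page for the authoritative list):

```python
from secator.decorators import task
from secator.runners import Command

@task()
class mytool(Command):
    cmd = 'mytool'

    @staticmethod
    def on_item(self, item):
        # called for each item produced by the task; return the (possibly modified) item
        return item

    @staticmethod
    def on_end(self):
        # called once the command has finished running
        pass
```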
Chunking
secator allows chunking a task into multiple child tasks when the input grows large, or when other constraints require it (e.g. your command only takes one target at a time).
Chunking only works when Distributed runs with Celery are enabled.
You can specify the chunk size using the input_chunk_size attribute:
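For example:

```python
from secator.decorators import task
from secator.runners import Command

@task()
class mytool(Command):
    cmd = 'mytool'
    input_flag = '-u'
    file_flag = '-l'
    input_chunk_size = 2  # split the input into chunks of at most 2 targets
```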
With this config, running:
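For instance (hypothetical targets, assuming a Celery-backed run):

```sh
secator x mytool host1 host2 host3
```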
would result in:
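the parent mytool task being split into two child tasks (for example one handling host1 and host2, the other handling host3), each dispatched as its own Celery task; their results are then aggregated back into the parent task's output.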