Integrating an external command
... or how to turn a command that you use daily into an overpowered machine.
Creating a task file
Imagine we have a tool named `mytool` that we want to integrate with `secator`.
Start by creating a file named `mytool.py`:
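A minimal sketch of what that file could look like (the import paths and attribute names below are assumptions based on secator's public task API; check an existing task in `secator/tasks/` for the authoritative reference):

```python
# mytool.py - minimal class-based task definition (sketch)
from secator.decorators import task
from secator.runners import Command


@task()
class mytool(Command):
    """Wrapper around the external `mytool` command."""
    cmd = 'mytool'  # base command executed by secator
```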
The `task` decorator is required for `secator` to recognize class-based definitions that need to be loaded at runtime.
Move this file over to:
- `~/.secator/templates/` (or whatever your `dirs.templates` in Configuration points to), OR
- `secator/tasks/` if you have a Development setup and want to contribute your task implementation to the official `secator` repository.
Adding an input flag [optional]
If your tool requires an input flag or a list flag to take its targets, for instance:

```sh
mytool -u TARGET
mytool -l TXT_FILE
```
You need to set the `input_flag` and `file_flag` class options:
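For example (a sketch matching the flags above):

```python
@task()
class mytool(Command):
    cmd = 'mytool'
    input_flag = '-u'   # flag used to pass a single target
    file_flag = '-l'    # flag used to pass a file containing a list of targets
```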
Setting these attributes allows us to run `mytool` with `secator` like:
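A hedged example (the exact CLI syntax may vary slightly between secator versions):

```sh
secator x mytool TARGET                    # runs: mytool -u TARGET
secator x mytool TARGET1 TARGET2 TARGET3   # targets are written to a temp file, runs: mytool -l <tmp_file>
```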
Parsing a command's output
Now that you have a basic implementation working, you need to convert your command's output into structured output (JSON).
Find out what your command's output looks like and pick the corresponding guide:
- Read Parsing JSON lines if your tool has an option to stream JSON lines (preferred).
- Read Parsing output files if your tool has an option to output to a file (e.g. JSON or CSV).
- Read Parsing raw standard output if your tool only outputs to `stdout`.
Adding more options [optional]
To support more options, you can use the `opt_prefix`, `opts`, `opt_key_map` and `opt_value_map` attributes.
Assuming `mytool` has the `--delay`, `--debug` and `--include-tags` options, we would support them this way:
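One possible way to wire those options (a sketch; the exact `opts` schema keys are assumptions based on how existing secator tasks declare their options):

```python
@task()
class mytool(Command):
    cmd = 'mytool'
    input_flag = '-u'
    file_flag = '-l'
    opt_prefix = '--'  # prefix prepended to option names when building the command
    opts = {
        # tool-specific options exposed through secator
        'delay': {'type': float, 'default': None, 'help': 'Delay between requests (seconds)'},
        'debug': {'is_flag': True, 'default': False, 'help': 'Enable debug output'},
        'include_tags': {'type': str, 'default': None, 'help': 'Comma-separated list of tags to include'},
    }
    opt_key_map = {
        # map secator-side option names to mytool's native flag names
        'include_tags': 'include-tags',
    }
    opt_value_map = {
        # transform option values before they are passed to mytool
        # (hypothetical: mytool expects the delay in milliseconds)
        'delay': lambda x: int(x * 1000) if x is not None else None,
    }
```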
With this config, running the task through `secator` with these options set will result in `mytool` being run with its corresponding native flags, as sketched below.
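A hedged sketch, assuming the `opts` definition above (the exact CLI flag spelling may differ):

```sh
# secator invocation:
secator x mytool -delay 1 -include-tags foo,bar TARGET

# resulting mytool command:
mytool -u TARGET --delay 1000 --include-tags foo,bar
```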
Adding an install command [optional]
To support installing your tool with `secator`, you can set the `install_cmd` and/or `install_github_handle` attributes:
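For example (a sketch; `example/mytool` and the install command are hypothetical placeholders):

```python
@task()
class mytool(Command):
    cmd = 'mytool'
    install_cmd = 'go install -v github.com/example/mytool@latest'  # hypothetical
    install_github_handle = 'example/mytool'                        # hypothetical GitHub <owner>/<repo>
```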
If `install_github_handle` is set, `secator` will try to fetch a binary from GitHub releases matching your platform, and fall back to `install_cmd` if it cannot find a suitable release or if the GitHub API rate limit is reached.
Now you can install `mytool` using:
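A hedged example (the exact subcommand may differ between secator versions):

```sh
secator install tools mytool
```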
Using a category [optional]
If your tool fits into one of `secator`'s built-in command categories, you can inherit from its option set:
- `Http`: A tool that makes HTTP requests.
- `HttpCrawler`: A command that crawls URLs (subset of `Http`).
- `HttpFuzzer`: A command that fuzzes URLs (subset of `Http`).
You can inherit from these categories and map their options to your command. Categories are defined in `secator/tasks/_categories.py`.
For instance, if `mytool` is an HTTP fuzzer, we would change its implementation like:
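A sketch of what that could look like (the constant names imported from `secator.definitions` and the exact category option set are assumptions; check `secator/tasks/_categories.py` for the real list):

```python
from secator.decorators import task
from secator.definitions import (DELAY, FOLLOW_REDIRECT, OPT_NOT_SUPPORTED,
                                 RATE_LIMIT, THREADS, USER_AGENT)
from secator.tasks._categories import HttpFuzzer


@task()
class mytool(HttpFuzzer):
    cmd = 'mytool'
    input_flag = '-u'
    file_flag = '-l'
    opt_prefix = '--'
    opt_key_map = {
        # map the category's meta options to mytool's native flags
        DELAY: 'delay',
        RATE_LIMIT: 'rate-limit',
        THREADS: 'threads',
        USER_AGENT: 'user-agent',
        FOLLOW_REDIRECT: OPT_NOT_SUPPORTED,  # mytool has no redirect option
    }
    opts = {
        # options specific to mytool, on top of the category's meta options
        'include_tags': {'type': str, 'default': None, 'help': 'Comma-separated list of tags to include'},
    }
```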
Make sure you map all the options from the `HttpFuzzer` category. If some options are not supported by your tool, mark them with `OPT_NOT_SUPPORTED`.
With this config, displaying the task's help (see the example after this list) would list:
- The meta options in the `HttpFuzzer` category that are supported by `mytool`.
- The options only usable by `mytool`.
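The task's help can presumably be displayed with something like:

```sh
secator x mytool --help
```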
For instance, running the task with a mix of meta options and `mytool`-specific options would result in `mytool` being run as sketched below.
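A hedged sketch, assuming the category mapping above (flag spelling may differ):

```sh
# secator invocation mixing a meta option (delay) with a mytool-specific option:
secator x mytool -delay 1 -include-tags foo,bar TARGET

# resulting mytool command:
mytool -u TARGET --delay 1 --include-tags foo,bar
```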
Supporting proxies [optional]
If your tool supports proxies: `secator` has first-class support for `proxychains`, `HTTP` and `SOCKS5` proxies, and can dynamically choose the type of proxy to use based on the following attributes:
- `proxy_socks5`: boolean indicating if your command supports `SOCKS5` proxies.
- `proxy_http`: boolean indicating if your command supports `HTTP` / `HTTPS` proxies.
- `proxychains`: boolean indicating if your command supports being run with `proxychains`.
If your tool supports `SOCKS5` or `HTTP` proxies, make sure to have an option called `proxy` in your `opts` definition, or it won't be picked up.
If your tool supports `proxychains`, `secator` will use the local `proxychains` binary and `proxychains.conf` configuration, so make sure those are functional.
Read Proxies for more details on how proxies work and how to configure them properly.
Example:
Assuming `mytool` does not support HTTP or SOCKS5 proxies, but works with `proxychains`, you can update your task definition like:
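For example (a sketch):

```python
@task()
class mytool(Command):
    cmd = 'mytool'
    proxychains = True    # mytool can be wrapped with the proxychains binary
    proxy_socks5 = False  # no native SOCKS5 proxy support
    proxy_http = False    # no native HTTP / HTTPS proxy support
```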
With the above configuration, running with `-proxy <VALUE>` would result in the base `mytool` command being wrapped with `proxychains`, as sketched below.
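A hedged sketch of the transformation (the exact value accepted by `-proxy` may differ):

```sh
# secator invocation:
secator x mytool -proxy proxychains TARGET

# the original command:
mytool -u TARGET

# becomes:
proxychains mytool -u TARGET
```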
Hooking onto runner lifecycle
Example:
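Tasks can override runner lifecycle hooks to customize behaviour at specific points of a run (when the task is initialized, when a raw output line is read, when a structured item is produced, when the run ends, ...). A rough sketch, assuming the hook names and static-method convention used by existing secator tasks (verify against tasks in `secator/tasks/`):

```python
from secator.decorators import task
from secator.runners import Command


@task()
class mytool(Command):
    cmd = 'mytool'

    @staticmethod
    def on_init(self):
        # called once the task is initialized, before the command runs;
        # can be used to tweak the final command line
        self.cmd += ' --no-color'

    @staticmethod
    def on_line(self, line):
        # called for each raw output line before parsing; must return the line
        return line.strip()

    @staticmethod
    def on_item(self, item):
        # called for each structured item produced by the task; must return the item
        return item

    @staticmethod
    def on_end(self):
        # called once the command has finished running
        pass
```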
Chunking
`secator` allows chunking a task into multiple child tasks when the number of input targets grows, or when other specific requirements apply (e.g. your command only takes one target at a time).
Chunking only works when Distributed runs with Celery are enabled.
You can specify the chunk size using the `input_chunk_size` attribute:
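For example (a sketch):

```python
@task()
class mytool(Command):
    cmd = 'mytool'
    input_flag = '-u'
    file_flag = '-l'
    input_chunk_size = 10  # split the input into child tasks of at most 10 targets each
```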
With this config, running the task with more targets than `input_chunk_size` would result in the run being split into multiple child tasks, as sketched below.
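A hedged sketch with `input_chunk_size = 10` and 25 targets:

```sh
# 25 targets passed to the task:
secator x mytool TARGET1 TARGET2 ... TARGET25

# secator splits the run into 3 child tasks, roughly:
#   mytool -l <chunk_1_file>   # targets 1-10
#   mytool -l <chunk_2_file>   # targets 11-20
#   mytool -l <chunk_3_file>   # targets 21-25
```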
If `mytool` did not support file input (i.e. `file_flag` not defined in the task definition class), the above would still work with `input_chunk_size = 1`, thus splitting into one command per target passed.