Library usage

... or how you can use secator as a foundation to build powerful security software.

secator can also be used as a Python library.

We recommend using secator as a library when building complex systems around it, since the library overcomes the limitations of the CLI.


Running tasks, workflows, and scans

You can run any task supported by secator by simply importing it by name from secator.tasks.

You can run any workflow or scan by loading its YAML config with secator.template.TemplateLoader and running it with the secator.runners.Workflow or secator.runners.Scan class.

from secator.template import TemplateLoader
from secator.runners import Workflow, Scan
from secator.tasks import subfinder, httpx, naabu

# Run simple tasks, chain them together
host = 'wikipedia.org'
subdomains = subfinder(host).run()
alive_urls = httpx(subdomains).run()
ports_open = naabu(subdomains).run()

# ... or run a workflow
config = TemplateLoader('workflows/host_recon')
results = Workflow(config).run()

# ... or run a scan
config = TemplateLoader('scans/domain')
results = Scan(config).run()

Consuming results live

All runners yield results in real time, which means you can use them as generators and consume their results as they arrive.

For instance, you can process results lazily using threads or a Celery task:

from threading import Thread
from secator.template import TemplateLoader
from secator.runners import Workflow
from secator.tasks.http import feroxbuster
from secator.output_types import Url, Tag
from .models import Urls, Tags

def process_url(url):
    Urls.objects.create(**url)
    print(f'Saved {url.url} [{url.status_code}] to database')
    
def process_tag(tag):
    Tags.objects.create(**tag)
    print(f'Found tag {tag.name} for target {tag.match}')

# Set the initial host
host = 'http://testphp.vulnweb.com'

# Use a task as a generator
for url in feroxbuster(host, rate_limit=100):
    Thread(target=process_url, args=(url,)).start()

# Use a workflow as a generator
config = TemplateLoader('workflows/url_crawl')
workflow = Workflow(config, rate_limit=100)
for result in workflow:
    if isinstance(result, Url):
        Thread(target=process_url, args=(result,)).start()
    elif isinstance(result, Tag):
        Thread(target=process_tag, args=(result,)).start()

All tasks can be run as generators, but some of them must wait for the underlying command to finish before yielding results (e.g. nmap).
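As handler chains grow, the isinstance checks in the workflow loop can be replaced by a type-to-handler mapping. Below is a minimal sketch in plain Python; the Url and Tag classes here are simple stand-ins for secator.output_types.Url and Tag (assumed to expose at least these fields), used only to keep the example self-contained:

```python
from dataclasses import dataclass

# Stand-ins for secator.output_types.Url / Tag (assumption: the real
# output types expose at least these fields).
@dataclass
class Url:
    url: str
    status_code: int = 200

@dataclass
class Tag:
    name: str
    match: str = ''

def process_url(url):
    return f'Saved {url.url} [{url.status_code}] to database'

def process_tag(tag):
    return f'Found tag {tag.name} for target {tag.match}'

# Map each output type to its handler, then dispatch in one lookup.
HANDLERS = {Url: process_url, Tag: process_tag}

def dispatch(result):
    handler = HANDLERS.get(type(result))
    return handler(result) if handler else None
```

With this in place, the loop body shrinks to a single dispatch(result) call, wrapped in a Thread if you still want concurrent processing.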


Overriding global options

Options prefixed with a command name (e.g. ffuf.rate_limit) will override the global value of that option for that specific command.

For instance, if you want a global rate limit of 1000 requests / second, but want ffuf capped at 100, you can do:

from secator.tasks.http import ffuf, gau, gospider, katana
host = 'wikipedia.org'
options = {
    'rate_limit': 1000, # reqs/s
    'ffuf.rate_limit': 100,
    'katana.rate_limit': 30
}
for tool in [ffuf, gau, gospider, katana]:
    tool(host, **options).run()

In the example above:

  • gau and gospider will have a rate limit of 1000 requests / second.

  • ffuf will have a rate limit of 100 requests / second.

  • katana will have a rate limit of 30 requests / second.
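The precedence rule can be sketched in a few lines of plain Python. This only illustrates the documented behavior; it is not secator's internal option-resolution code:

```python
def resolve_option(tool_name, option, options):
    # A '<tool>.<option>' key, if present, wins over the bare option key.
    scoped = f'{tool_name}.{option}'
    if scoped in options:
        return options[scoped]
    return options.get(option)

options = {
    'rate_limit': 1000,
    'ffuf.rate_limit': 100,
    'katana.rate_limit': 30,
}

for tool in ['ffuf', 'gau', 'gospider', 'katana']:
    print(tool, resolve_option(tool, 'rate_limit', options))
```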


Disabling default options

Sometimes you might wish to omit an option and fall back to the command's own default. Set the option to False to do this.

options = {
    'rate_limit': 1000, # reqs/s
    'ffuf.rate_limit': False, # explicitly disable `rate_limit` for ffuf; ffuf's own default applies
}
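As a sketch of the documented behavior (again, not secator's internal resolution code), a False value can be treated as "unset", so the tool's built-in default applies:

```python
def resolve_option(tool_name, option, options):
    # Per-tool key wins over the global key; False means "unset", so the
    # tool's own default is used instead of any configured value.
    value = options.get(f'{tool_name}.{option}', options.get(option))
    return None if value is False else value

options = {
    'rate_limit': 1000,
    'ffuf.rate_limit': False,
}
```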
