- PERSONA
- MSSP Operator
- CATEGORY
- MSSP
- ENDPOINTS
- 4 used
- UPDATED
- April 2026
Continuous exposure monitoring for MSSP client portfolios
Per-seat VM pricing breaks the MSSP margin
- Enterprise VM tooling runs $8k–$15k/yr per client, with a minimum asset floor that assumes Fortune-500 sizing.
- Weekly scans across a 100-client book means either 100 separate tool tenancies or fragile scripts the analyst maintains.
- Findings arrive in N different schemas — each report has to be re-normalized before it hits the client portal.
- Rate limits on free OSINT APIs make 'loop over clients' impractical without hand-written backoff.
The endpoints that solve it
External exposure scan
Accepts a root domain. Returns enumerated subdomains, discovered services and open ports, TLS/certificate posture, and a risk-scored findings array. The single unit of work MSSPs loop over to monitor a client book.
Follow-on IOC enrichment
When an exposure scan surfaces a suspicious IP, hash, or domain, submit it (or a batch) to /v1/enrichment/lookup. Returns normalized verdicts aggregated across up to 11 sources per IP (VirusTotal, AbuseIPDB, GreyNoise, Shodan, Censys, OTX, urlscan, Pulsedive, and more).
Client-sector threat profile
Generate a concise written threat-actor briefing scoped to a client's sector and geography. Drops straight into the monthly client report without the analyst hand-curating a TTPs list.
Batch client loop
There is no special endpoint — you call /v1/exposure/scan once per client domain from your existing automation (Airflow, cron, n8n, GitHub Actions). One Bearer token, one rate-limit bucket, one normalized response schema to parse.
The canonical MSSP fan-out
# Weekly exposure sweep across every client domain
# CLIENTS.txt: one root domain per line
outdir="./reports/$(date +%Y-%m-%d)"
mkdir -p "$outdir"              # curl will not create the directory for you
while IFS= read -r domain; do
  curl -sS --fail https://api.dfir-lab.ch/v1/exposure/scan \
    -H "Authorization: Bearer $DFIR_API_KEY" \
    -H "Content-Type: application/json" \
    -d "{\"target\": \"$domain\"}" \
    > "$outdir/$domain.json"
done < CLIENTS.txt
# Each response includes:
# subdomains[], services[], tls_findings[], risk_score, severity_counts
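The field list above is enough for a quick post-sweep triage. A minimal jq sketch, assuming the `risk_score` and `severity_counts` fields shown in the comment (the full schema is in the API reference):

```shell
# Rank this week's reports by risk_score, highest first, with critical counts
report_dir="./reports/$(date +%Y-%m-%d)"
for f in "$report_dir"/*.json; do
  # Emit: filename <TAB> risk_score <TAB> critical-count (0 if absent)
  jq -r '[input_filename, (.risk_score // 0), (.severity_counts.critical // 0)] | @tsv' "$f"
done | sort -k2,2 -rn | head -20
```

The output is one tab-separated line per client, so the top of the list is the first thing an analyst looks at on Monday morning.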
# Optional: enrich any suspicious IPs that surfaced across the sweep
curl https://api.dfir-lab.ch/v1/enrichment/lookup \
-H "Authorization: Bearer $DFIR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"indicators": [
{ "type": "ip", "value": "203.0.113.42" },
{ "type": "domain", "value": "stale-dev.clientX.tld" }
]
}'
- Step 01
Inventory
Pull the client list from your PSA (Autotask, ConnectWise, HaloPSA) or a flat CSV. Each row needs a root domain and the internal client ID.
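A minimal sketch of the flat-CSV variant. The `client_id,root_domain` column names are illustrative, not a required format:

```shell
# clients.csv — client_id,root_domain, exported from the PSA or kept by hand
while IFS=, read -r client_id domain; do
  [ "$client_id" = "client_id" ] && continue   # skip the header row
  printf '%s %s\n' "$client_id" "$domain"      # hand each pair to the scan fan-out
done < clients.csv
```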
- Step 02
Schedule
Run the fan-out on your cadence — weekly for most MSSPs, daily for high-value clients. Airflow / GitHub Actions / a plain cron are all fine; the API is stateless.
- Step 03
Scan
POST each domain to /v1/exposure/scan. Each response comes back as normalized JSON for that client; key it to your internal client ID as you write it to your reporting store.
- Step 04
Diff + enrich
Diff this week's findings against last week's. New open ports, newly discovered subdomains, or newly suspicious IPs go through /v1/enrichment/lookup for reputation context.
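The subdomain diff can stay entirely in shell. A sketch assuming two saved reports containing the `subdomains[]` field from the scan response (paths are illustrative):

```shell
# Surface subdomains that appeared since the previous sweep
last="./reports/2026-04-06/clientX.tld.json"   # illustrative paths
this="./reports/2026-04-13/clientX.tld.json"
jq -r '.subdomains[]' "$last" | sort > /tmp/subs-last.txt
jq -r '.subdomains[]' "$this" | sort > /tmp/subs-this.txt
comm -13 /tmp/subs-last.txt /tmp/subs-this.txt  # lines only in this week's list
```

Anything this prints is a candidate for /v1/enrichment/lookup; the same pattern works for new open ports.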
- Step 05
Report
Write structured findings into your client portal. Generate the monthly narrative with /v1/ai/threat-profile so the written brief matches the client's sector without analyst hand-curation.
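A request sketch for the narrative step. The `sector` and `geography` field names are assumptions for illustration only; the exact /v1/ai/threat-profile request schema is in the API reference:

```shell
# Generate the monthly sector briefing for one client
# NOTE: the payload field names below are assumed, not confirmed —
#       check the API reference for the real request schema
curl -sS https://api.dfir-lab.ch/v1/ai/threat-profile \
  -H "Authorization: Bearer $DFIR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"sector": "healthcare", "geography": "CH"}' \
  > "./reports/$(date +%Y-%m-%d)/clientX-threat-profile.json"
```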
Pricing that tracks your workload
- 01
Small MSSP — 20 clients × weekly scan
20 × 4 × 10 credits = 800 credits/month (scans only). Fits Starter ($29, 500 credits) only at a bi-weekly cadence; Professional ($99, 2,500 credits) covers weekly scans with ~1,700 credits left for enrichment and threat profiles.
- 02
Mid-size MSSP — 50 clients × weekly scan + enrichment
(50 × 4 × 10) + (50 × 4 × 5 IOCs × 3) = 2,000 + 3,000 = 5,000 credits/month. Exceeds Professional (2,500 credits) — run Professional plus a 5,000-credit top-up, or move to Enterprise for unlimited usage.
- 03
Large MSSP — 150 clients × weekly scan + monthly threat profile
(150 × 4 × 10) + (150 × 1 × 20) = 6,000 + 3,000 = 9,000 credits/month. Enterprise tier (custom pricing, unlimited credits) is the only sensible fit at this volume.
Three ways to evaluate
Create a free account (100 credits/mo)
Full API access, the dashboard, and your own 100 monthly credits to spend on any of the endpoints above.
Try /exposure-scanner — no signup
Run a single-domain exposure scan in the browser. Same engine as the API endpoint, rate-limited per IP — useful for showing a prospective client the output shape before wiring the fan-out into your orchestrator.
API reference
Full schema, error codes, rate limits, and copy-ready code snippets for every endpoint referenced above.
Frequently asked
- Q / 01: How does this compare to enterprise VM suites?
- Those are authenticated internal scanners with a per-asset license model. DFIR Platform's exposure endpoint is external-only (no agents, no credentials) and priced by credit, not per client. If you need authenticated internal scanning you still need a VM tool; if you need continuous external exposure monitoring across a large client book, this endpoint is built for that fan-out pattern.
- Q / 02: How do I keep findings separated per client?
- The request is stateless — you control how findings are labelled by keying the response against your internal client ID when you write to your reporting store. The API does not currently have a first-party multi-tenant client model; that mapping lives in your orchestrator.
- Q / 03: How thorough is the external scan?
- The endpoint uses passive sources (CT logs, DNS aggregators) plus lightweight active probing. It will reliably surface the public subdomain inventory and open services reachable from the internet. It is not a substitute for active port-scanning tools like masscan if you need exhaustive port coverage.
- Q / 04: Will rate limits slow down a large client sweep?
- Paid tiers get a sensible default rate-limit bucket that covers hundreds of sequential scans over a few hours. If you want to parallelize aggressively or run sub-minute sweeps across large client books, the Enterprise tier raises limits and can add a dedicated pool.
- Q / 05: Can I render the findings in my own reporting stack?
- Yes — the response is a normalized JSON schema. Most MSSPs write it into a Postgres / SQLite store and render from there. The fields are stable; versioned changes are announced in the changelog.
- Q / 06: Can I white-label the platform or give clients their own API keys?
- That is an Enterprise-tier conversation — custom branding and direct client API keys are not on the self-serve plans. Most MSSPs on Professional wrap the API behind their own portal instead, which is the cleaner model anyway.
Other teams solving adjacent problems
Stop triaging by hand.
Create a free account — 100 credits per month, no credit card. Or keep browsing to find the use case that matches your workflow.