Technical Guides · 20 min read · Jan 8, 2025

Advanced SIEM Detection Rules for Modern Threats

Build sophisticated detection rules for living-off-the-land attacks, supply chain compromises, and advanced persistent threats with our proven SIEM configurations.


Thomas Kim

Detection Engineer with 10+ years building threat detection systems. Former SOC analyst and incident responder, now specializing in behavioral detection and threat hunting methodologies.

What You'll Learn

  • Detection engineering principles for modern threat landscapes
  • Ready-to-deploy SIEM rules for living-off-the-land techniques
  • Behavioral analytics approaches for APT detection
  • Tuning strategies to minimize false positives while maintaining coverage

Modern Detection Engineering Principles

Traditional signature-based detection fails against sophisticated adversaries who leverage legitimate tools and living-off-the-land techniques. Modern detection engineering focuses on identifying behavioral patterns, anomalous combinations of legitimate activities, and deviations from established baselines.

Behavioral Detection

Focus on sequences of activities and contextual relationships rather than individual events. This approach catches attackers using legitimate tools in malicious ways. A sample parent-child process query follows the list below.

  • Process execution chains and parent-child relationships
  • Network communication patterns and timing
  • Credential usage and privilege escalation sequences
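As a concrete example of parent-child analysis, here is a minimal Splunk sketch. It assumes Sysmon process-creation events (EventCode=1) are indexed in index=windows with the standard Image, ParentImage, and CommandLine fields (ComputerName may differ depending on your add-on), and it flags Office applications spawning scripting hosts, a common macro-delivery pattern.

# Splunk sketch: Office application spawning a scripting host (assumes Sysmon EventCode=1 data)
index=windows source="WinEventLog:Microsoft-Windows-Sysmon/Operational" EventCode=1
| eval parent=lower(ParentImage), child=lower(Image)
| where match(parent, "(winword|excel|powerpnt|outlook)\.exe$")
    AND match(child, "(powershell|pwsh|cmd|wscript|cscript|mshta)\.exe$")
| table _time, ComputerName, User, parent, child, CommandLine
| sort -_time

Extend the parent and child pattern lists to cover the execution chains most relevant to your environment, and allowlist known automation hosts before alerting.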

Statistical Baselines

Establish normal behavior baselines to identify statistical outliers that may indicate malicious activity or insider threats. A sample baseline query follows the list below.

  • User authentication patterns and geolocation
  • Data access volumes and timing patterns
  • Network traffic patterns and protocol usage
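A minimal Splunk sketch of a per-user authentication baseline, assuming CIM-style Windows logon events (EventCode=4624) with a user field in index=windows. It flags hours where a user's logon count exceeds their 30-day mean by three standard deviations; the lookback window and threshold are assumptions to tune.

# Splunk sketch: flag users whose hourly logon count exceeds their historical baseline
index=windows EventCode=4624 earliest=-30d@d
| bin _time span=1h
| stats count AS hourly_logons by user, _time
| eventstats avg(hourly_logons) AS baseline_avg, stdev(hourly_logons) AS baseline_stdev by user
| where hourly_logons > baseline_avg + (3 * baseline_stdev)
| table _time, user, hourly_logons, baseline_avg, baseline_stdev
| sort -hourly_logons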

Living-Off-The-Land Detection Rules

PowerShell Abuse Detection

Detect malicious PowerShell usage while minimizing false positives from legitimate administrative activities.

# Splunk Detection Rule: Suspicious PowerShell Command Execution
index=windows source="WinEventLog:Microsoft-Windows-PowerShell/Operational" EventCode=4104
| eval cmdline=lower(ScriptBlockText)
| where (
    (match(cmdline, ".*-encodedcommand.*") AND match(cmdline, ".*bypass.*")) OR
    (match(cmdline, ".*invoke-expression.*") AND match(cmdline, ".*downloadstring.*")) OR
    (match(cmdline, ".*iex.*") AND match(cmdline, ".*new-object.*net\.webclient.*")) OR
    match(cmdline, ".*powershell.*-windowstyle hidden.*-executionpolicy bypass.*")
)
| eval risk_score=case(
    match(cmdline, ".*-encodedcommand.*bypass.*"), 85,
    match(cmdline, ".*invoke-expression.*downloadstring.*"), 90,
    1==1, 75
)
| where risk_score >= 75
| table _time, ComputerName, User, cmdline, risk_score
| sort -_time

Detection Logic Explanation:

  • Identifies base64-encoded commands with execution policy bypass
  • Detects download-and-execute patterns using PowerShell
  • Applies risk scoring to prioritize the most dangerous combinations
  • Filters out low-risk administrative activities

WMI Persistence Detection

Identify attackers using WMI for persistence and lateral movement, a common technique in advanced attacks.

# Splunk Detection Rule: Suspicious WMI Activity
# Monitors WMI EventFilter, EventConsumer, and FilterToConsumerBinding creation,
# excluding the legitimate WMI provider host running as SYSTEM
source="WinEventLog:Microsoft-Windows-WMI-Activity/Operational"
(EventCode=5857 OR EventCode=5858 OR EventCode=5860 OR EventCode=5861)
AND (
    Operation="*EventFilter*" OR
    Operation="*EventConsumer*" OR
    Operation="*FilterToConsumerBinding*"
)
NOT (User="SYSTEM" AND ProcessName="*wmiprvse.exe*")
| stats count by ComputerName, User, Operation, ProcessName
| where count > 1
| sort -count

WMI Persistence Indicators:

  • WMI EventFilter creation for trigger conditions
  • CommandLineEventConsumer for payload execution
  • FilterToConsumerBinding to connect triggers and payloads
  • Unusual WMI namespace modifications

Credential Dumping Detection

Detect attempts to extract credentials from memory or the Windows registry using various techniques.

# QRadar AQL: LSASS Memory Access Detection
SELECT
    sourceip,
    destinationip,
    username,
    "Process Name",
    "Command Line",
    starttime
FROM events
WHERE
    LOGSOURCETYPENAME(devicetype) = 'Microsoft Windows Security Event Log'
    AND eventid = 4656
    AND "Object Name" LIKE '%lsass.exe%'
    AND "Access Mask" = '0x1010'
    AND "Process Name" NOT IN (
        'C:\\Windows\\System32\\wbem\\wmiprvse.exe',
        'C:\\Windows\\System32\\csrss.exe',
        'C:\\Windows\\System32\\wininit.exe'
    )
    AND starttime > (CURRENT_TIMESTAMP - INTERVAL '1' HOUR)
ORDER BY starttime DESC

Credential Access Techniques Detected:

  • Direct LSASS process memory access (Mimikatz-style)
  • Registry SAM hive dumping attempts
  • DCSync attacks against domain controllers (see the sketch after this list)
  • Unusual access to credential storage locations
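The AQL rule above covers direct LSASS access; DCSync requests can be flagged separately. The following is a minimal Splunk sketch, assuming domain controller Security logs with directory service access auditing (Event ID 4662) in index=windows and field names as extracted by the standard Windows add-on. The two GUIDs are the DS-Replication-Get-Changes and DS-Replication-Get-Changes-All control access rights, and machine accounts (names ending in $) are excluded because domain controllers replicate legitimately.

# Splunk sketch: possible DCSync replication requests from non-machine accounts
index=windows EventCode=4662
    (Properties="*1131f6aa-9c07-11d1-f79f-00c04fc2dcd2*" OR
     Properties="*1131f6ad-9c07-11d1-f79f-00c04fc2dcd2*")
| where NOT like(SubjectUserName, "%$")
| table _time, ComputerName, SubjectUserName, SubjectDomainName
| sort -_time

Accounts that legitimately hold replication rights (such as Azure AD Connect service accounts) should be allowlisted to keep the rule actionable.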

Supply Chain Compromise Detection

Software Supply Chain Monitoring

Detect unauthorized modifications to software packages, unexpected network communications from build systems, and suspicious package installations.

# Sumo Logic: Suspicious Package Installation Detection
# Placeholder terms (typosquatting_keywords, suspicious_domains, known_good_hashes,
# business_hours, threshold_value) must be replaced with environment-specific lookups and values
_sourceCategory=linux/packages OR _sourceCategory=windows/software
| where (
    (package_name matches "*typosquatting_keywords*") OR
    (download_url matches "*suspicious_domains*") OR
    (installer_hash not in known_good_hashes) OR
    (installation_time outside business_hours)
)
| lookup reputation from package_reputation on package_name=package_name
| where reputation = "unknown" OR reputation = "suspicious"
| stats count by package_name, source_host, installation_time
| where count > threshold_value
| sort -installation_time

Supply Chain Risk Indicators:

  • Package names similar to popular libraries (typosquatting)
  • Downloads from suspicious or newly registered domains
  • Software installations during off-hours
  • Packages with unknown or suspicious reputation scores

Build System Compromise Detection

Monitor CI/CD pipelines for unauthorized changes, suspicious network activity, and unexpected privilege escalations.

# Splunk: CI/CD Pipeline Anomaly Detection
index=cicd (source="jenkins" OR source="gitlab-ci" OR source="github-actions")
| eval risk_factors=0
| eval risk_factors=if(match(_raw, ".*curl.*|.*wget.*"), risk_factors+20, risk_factors)
| eval risk_factors=if(match(_raw, ".*base64.*decode.*"), risk_factors+25, risk_factors)
| eval risk_factors=if(match(_raw, ".*sudo.*|.*elevated.*"), risk_factors+15, risk_factors)
| eval risk_factors=if(match(_raw, ".*external.*domain.*"), risk_factors+30, risk_factors)
| where risk_factors >= 40
| lookup build_baseline pipeline_name, user OUTPUT deviation_score
| where deviation_score > 2.5
| table _time, pipeline_name, user, risk_factors, deviation_score, _raw
| sort -risk_factors

Advanced Persistent Threat (APT) Detection

Behavioral Chain Analysis

Detect APT campaigns by identifying sequences of activities that collectively indicate sophisticated adversary behavior.

# Multi-Stage APT Detection Logic (Python sketch; generate_alert is the SIEM/SOAR alert hook)
from datetime import timedelta

# Technique indicators grouped by campaign stage
STAGES = {
    # Stage 1: Initial Compromise
    "initial_compromise": {
        "spearphishing_attachment",
        "exploit_public_facing_app",
        "supply_chain_compromise",
    },
    # Stage 2: Persistence Establishment
    "persistence": {
        "registry_run_keys",
        "scheduled_tasks",
        "wmi_persistence",
        "service_creation",
    },
    # Stage 3: Credential Access
    "credential_access": {
        "lsass_memory_dump",
        "credential_dumping",
        "brute_force_attacks",
    },
    # Stage 4: Lateral Movement
    "lateral_movement": {
        "remote_services",
        "admin_shares",
        "pass_the_hash",
    },
    # Stage 5: Data Collection
    "data_collection": {
        "automated_collection",
        "clipboard_data",
        "screen_capture",
    },
}


def correlate(observed_techniques, window):
    """Correlation rule: alert if indicators from 3+ stages appear within 24 hours."""
    stages_detected = {
        stage for stage, indicators in STAGES.items()
        if indicators & set(observed_techniques)
    }
    if len(stages_detected) >= 3 and window <= timedelta(hours=24):
        generate_alert("Possible APT Campaign", severity="HIGH")

Command and Control Detection

Identify sophisticated C2 communications that use legitimate protocols and services to evade detection.

# Advanced C2 Detection Using Machine Learning Features
# (the MLTK model ml_model_c2_detection and its ml_score output field are placeholders)
index=network (source="firewall" OR source="proxy")
| eval features = ""
| eval features = features + "," + if(bytes_out/bytes_in > 10, "high_upload_ratio", "")
| eval features = features + "," + if(duration < 60 AND connections > 100, "short_frequent_connections", "")
| eval features = features + "," + if(match(dest_domain, ".*\.(tk|ml|ga|cf)$"), "suspicious_tld", "")
| eval features = features + "," + if(ssl_cert_validity < 30, "short_ssl_cert", "")
| eval features = features + "," + if(match(user_agent, "unusual_patterns"), "suspicious_ua", "")
| apply ml_model_c2_detection
| where ml_score > 0.85
| sort -ml_score
| table _time, src_ip, dest_domain, ml_score, features

Detection Rule Tuning and Optimization

False Positive Reduction

Implement systematic approaches to reduce false positives while maintaining detection coverage. A sample suppression query follows the list below.

  • Whitelist known-good processes and behaviors
  • Implement time-based filtering for business hours
  • Use statistical thresholds based on historical data
  • Context-aware alerting with environmental factors
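A minimal Splunk sketch combining two of these approaches against the PowerShell rule shown earlier: known_good_admin_hosts is a hypothetical lookup (host and justification columns) used to suppress allowlisted administrative hosts, and the hour filter keeps only off-hours activity so that routine business-hours administration does not generate alerts.

# Splunk sketch: lookup-based allowlisting plus off-hours filtering
# (known_good_admin_hosts is a hypothetical lookup; adjust hours to your change windows)
index=windows source="WinEventLog:Microsoft-Windows-PowerShell/Operational" EventCode=4104
| lookup known_good_admin_hosts host AS ComputerName OUTPUT justification
| where isnull(justification)
| eval hour=tonumber(strftime(_time, "%H"))
| where hour < 6 OR hour >= 20
| table _time, ComputerName, User, ScriptBlockText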

Coverage Assessment

Regularly evaluate detection coverage against emerging threats and attack techniques. A sample coverage-reporting query follows the list below.

  • Map detection rules to MITRE ATT&CK framework
  • Conduct purple team exercises for validation
  • Monitor threat intelligence for new techniques
  • Regular review of alert effectiveness metrics
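A minimal Splunk sketch of a monthly ATT&CK coverage view. Both index=notable (the Splunk Enterprise Security alert index, where search_name holds the correlation rule name) and the rule_attack_mapping lookup (rule_name, tactic, technique_id columns) are assumptions to adapt to whatever alert store and mapping you maintain.

# Splunk sketch: ATT&CK coverage summary from triggered alerts over the last 30 days
index=notable earliest=-30d@d
| lookup rule_attack_mapping rule_name AS search_name OUTPUT tactic, technique_id
| stats dc(search_name) AS rules_firing, count AS alerts by tactic, technique_id
| sort tactic, -alerts

Techniques with zero firing rules highlight coverage gaps worth prioritizing in the next development cycle.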

Detection Rule Lifecycle Management

Establish processes for continuously improving detection capabilities based on threat landscape changes and organizational needs.

Development

  • Threat research and analysis
  • Rule design and testing
  • False positive assessment

Deployment

  • Staging environment validation
  • Gradual production rollout
  • Performance impact monitoring

Monitoring

  • Alert volume tracking
  • True/false positive ratios
  • Detection effectiveness metrics

Optimization

  • Rule tuning and refinement
  • Coverage gap analysis
  • Retirement of obsolete rules

Implementation Roadmap

1. Baseline Establishment (Weeks 1-2)

Deploy foundational detection rules and establish normal behavior baselines.

2. Advanced Rule Deployment (Weeks 3-4)

Implement behavioral and machine learning-based detection rules.

3. Tuning and Optimization (Weeks 5-8)

Fine-tune rules based on observed false positive rates and coverage gaps.

4. Validation and Documentation (Weeks 9-12)

Conduct purple team exercises and document playbooks for detected threats.