hack3rs.ca network-security
/threats/ddos-and-service-exhaustion-attacks :: AV-09

analyst@hack3rs:~/threats$ open ddos-and-service-exhaustion-attacks

AV-09 · DDoS and service exhaustion attacks

Distributed denial-of-service and service exhaustion attacks aim to degrade availability by overwhelming network paths, applications, or supporting resources such as DNS, load balancers, and upstream dependencies.

$ action: Prepare upstream mitigation, rate limiting, capacity plans, and runbooks that prioritize service continuity, traffic visibility, and fast escalation.

1. Why Availability Attacks Exist

DDoS and service exhaustion attacks exist because disrupting availability can be cheaper and faster than breaching systems. Attackers may seek extortion, distraction, retaliation, political signaling, or operational disruption.

Availability attacks target different layers: network bandwidth saturation, protocol/resource exhaustion, or application-layer request floods. A defender's response depends on which layer is under pressure and which dependencies are failing first.

These attacks are common because many organizations under-invest in runbooks and upstream coordination. Even when mitigation services exist, slow recognition and escalation can prolong downtime.

2. What Attackers Can Do in DDoS Scenarios

Attackers can flood edge links, overwhelm firewalls/load balancers, exhaust server worker pools, abuse expensive application endpoints, or pressure DNS and supporting services. Some campaigns also combine DDoS with intrusion attempts while defenders are distracted.

Service exhaustion can come from malformed or asymmetric request patterns, not just raw volume. A lower-bandwidth attack can still be highly effective if it targets expensive operations or weak protocol handling.
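A quick way to spot this asymmetry is to rank endpoints by request share. The sketch below assumes a combined-format web access log; the sample lines and log path are illustrative stand-ins for your own log.

```shell
# Sketch: rank endpoints by request share to spot concentration on
# expensive operations. The sample lines stand in for a real access log
# (combined log format assumed); point LOG at your own file.
LOG=sample-access.log
cat > "$LOG" <<'EOF'
198.51.100.7 - - [01/Jan/2025:12:00:01 +0000] "GET /search?q=a HTTP/1.1" 200 512
198.51.100.8 - - [01/Jan/2025:12:00:01 +0000] "GET /search?q=b HTTP/1.1" 200 512
198.51.100.9 - - [01/Jan/2025:12:00:02 +0000] "GET /search?q=c HTTP/1.1" 200 512
203.0.113.5 - - [01/Jan/2025:12:00:02 +0000] "GET /index.html HTTP/1.1" 200 1024
EOF
# Field 7 is the request path; strip query strings, count, and rank.
awk '{split($7, p, "?"); hits[p[1]]++; total++}
     END {for (e in hits) printf "%6.1f%% %6d %s\n", 100*hits[e]/total, hits[e], e}' "$LOG" \
  | sort -rn
```

$ why: A single endpoint absorbing most requests at modest bandwidth points to application-layer exhaustion, which calls for endpoint-level rate limits rather than volumetric scrubbing.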

Defenders should monitor not only traffic volume but also latency, error rates, connection states, request distribution, and dependency health to understand impact quickly.
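Error rate trended per minute is one of the fastest impact signals. A minimal sketch, again against a sample combined-format log (adapt the field positions to your own format):

```shell
# Sketch: trend 5xx error share per minute to measure user impact
# rather than raw traffic volume. Sample lines simulate a real log.
cat > sample-status.log <<'EOF'
10.0.0.1 - - [01/Jan/2025:12:00:05 +0000] "GET / HTTP/1.1" 200 100
10.0.0.2 - - [01/Jan/2025:12:00:40 +0000] "GET / HTTP/1.1" 503 0
10.0.0.3 - - [01/Jan/2025:12:01:10 +0000] "GET / HTTP/1.1" 502 0
10.0.0.4 - - [01/Jan/2025:12:01:30 +0000] "GET / HTTP/1.1" 200 100
EOF
# Field 4 holds the timestamp, field 9 the status code; bucket by minute.
awk '{minute = substr($4, 2, 17); total[minute]++; if ($9 >= 500) err[minute]++}
     END {for (m in total) printf "%s 5xx=%d/%d (%.0f%%)\n", m, err[m], total[m], 100*err[m]/total[m]}' \
    sample-status.log | sort
```

$ why: A rising 5xx share tells you mitigations are not yet working even when traffic graphs look flat, and a falling share confirms user impact is improving.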

3. How Defenders Mitigate and Respond

Prepare layered mitigation: upstream ISP/cloud scrubbing relationships, CDN/WAF protections, rate limits, autoscaling/traffic shaping where appropriate, and application protections for expensive endpoints. No single control fits every DDoS scenario.
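As one concrete example of an application-layer protection, nginx can rate-limit an expensive endpoint per client address. The zone name, endpoint path, upstream name, and rates below are illustrative assumptions, not recommendations:

```nginx
# Illustrative nginx rate limit for an expensive endpoint.
# Zone name, path, rates, and the app_backend upstream are placeholders.
limit_req_zone $binary_remote_addr zone=search_zone:10m rate=5r/s;

server {
    listen 80;

    location /search {
        # Allow short bursts, reject the rest with 429 instead of queueing.
        limit_req zone=search_zone burst=10 nodelay;
        limit_req_status 429;
        proxy_pass http://app_backend;
    }
}
```

$ why: Per-client limits on expensive endpoints blunt low-bandwidth exhaustion attacks without touching legitimate traffic to cheap paths; volumetric floods still need upstream or CDN mitigation.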

Build clear escalation paths and thresholds. Teams need to know when to contact upstream providers, when to enable emergency rate limits, and which services are prioritized for continuity.
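Thresholds are easier to act on when encoded in the runbook itself. A minimal sketch of a threshold check; the metric values and cutoffs are placeholders, and in practice the readings would come from your metrics system:

```shell
# Sketch: encode escalation thresholds so responders are not deciding
# ad hoc under pressure. The readings and cutoffs below are placeholders.
ERROR_RATE=12   # percent of requests failing (placeholder reading)
P95_MS=2400     # p95 latency in milliseconds (placeholder reading)

if [ "$ERROR_RATE" -ge 10 ] || [ "$P95_MS" -ge 2000 ]; then
    ACTION="escalate: contact upstream provider, enable emergency rate limits"
elif [ "$ERROR_RATE" -ge 5 ]; then
    ACTION="warn: page on-call, start response-log tracking"
else
    ACTION="observe: continue monitoring"
fi
echo "$ACTION"
```

$ why: Pre-agreed numeric thresholds turn "should we escalate?" into a lookup, which shortens the recognition-to-mitigation gap that prolongs most availability incidents.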

After the event, review what failed first (capacity, filtering, runbook timing, observability) and improve instrumentation and rehearsals. DDoS resilience is as much operations engineering as it is security.

detection-signals

  • $Sudden traffic volume spikes or connection floods to public services.
  • $Rapid increase in 5xx errors, timeouts, or load balancer/backend health failures.
  • $Unusual distribution of requests (single endpoint focus, bursty patterns, odd user agents/sources).
  • $SYN backlog/connection state pressure or protocol-level anomalies.
  • $Service degradation across dependent systems (DNS, auth, APIs) during traffic surges.
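Connection-state pressure is simple to quantify. On a live host you would feed `ss -tan` (or `netstat -ant`) into the awk below; the here-doc simulates its output so the sketch runs self-contained:

```shell
# Sketch: count TCP connection states to spot SYN backlog pressure.
# The here-doc simulates `ss -tan` output for a self-contained run.
cat > sample-ss.txt <<'EOF'
State     Recv-Q Send-Q Local Address:Port Peer Address:Port
SYN-RECV  0      0      203.0.113.20:443   198.51.100.10:40001
SYN-RECV  0      0      203.0.113.20:443   198.51.100.11:40002
SYN-RECV  0      0      203.0.113.20:443   198.51.100.12:40003
ESTAB     0      0      203.0.113.20:443   192.0.2.50:51000
EOF
# Skip the header row, tally by state column, rank by count.
awk 'NR > 1 {states[$1]++} END {for (s in states) print states[s], s}' sample-ss.txt | sort -rn
```

$ why: A high ratio of SYN-RECV (or other half-open states) to ESTAB suggests protocol-state exhaustion rather than application-layer flooding, which changes the mitigation choice.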

telemetry-sources

  • $Firewall/load balancer/CDN/WAF metrics and logs.
  • $NetFlow/sFlow/Zeek/Suricata for traffic composition and protocol behavior.
  • $Application metrics (latency, error rates, worker saturation, queue depth).
  • $DNS and upstream provider dashboards/alerts.
  • $System metrics (CPU, memory, sockets, connection tables) on edge services.

lab-safe-detection-workflows

These commands are for learning, validation, and defensive triage in your own lab or authorized environment. Adapt to your tooling and log locations.

Traffic composition sampling during an availability event (authorized)

sudo tcpdump -ni any -c 500 'host 203.0.113.20 and (tcp or udp)' -w ddos-sample.pcap
tshark -r ddos-sample.pcap -q -z io,phs
tshark -r ddos-sample.pcap -z conv,tcp -q

$ why: Packet samples help determine whether the issue is SYN-heavy, UDP-heavy, app-layer request concentration, or mixed behavior requiring different mitigation steps.

Service health and incident tracking (ops-oriented)

printf "time,service,latency_ms,error_rate,traffic_signal,mitigation_action,owner\n" > ddos-response-log.csv
ss -s
netstat -ant 2>/dev/null | awk 'NR>2 {print $6}' | sort | uniq -c | sort -nr | head -20 || true

$ why: Availability response requires both traffic analysis and service health tracking to know whether mitigations are improving user impact.

triage-questions

  • ?What is failing first: edge bandwidth, load balancer, application workers, DNS, or a dependency?
  • ?Is the attack primarily network-layer, protocol-state exhaustion, or application-layer?
  • ?What upstream/CDN/WAF mitigations can be enabled now and who owns escalation?
  • ?Which business services must be protected first if capacity is limited?
  • ?Is there concurrent suspicious activity (login attempts, exploitation) during the DDoS distraction window?

defender-actions.checklist

  • $Document DDoS escalation paths with ISP/CDN/WAF providers.
  • $Define emergency rate-limit and filtering options in runbooks.
  • $Monitor traffic composition and application health, not just bandwidth.
  • $Practice availability incident drills and communication workflows.
  • $Review dependencies (DNS, auth, APIs) for single-point exhaustion risks.

study-workflow

  1. Learn what normal behavior looks like for this area (traffic volume, connection states, latency, and error-rate baselines).
  2. Identify the logs and telemetry that should show the behavior.
  3. Practice one safe validation in a lab or authorized environment.
  4. Write a short playbook for detection, triage, and response.
  5. Review the related tool guides under /learning/tools.