hack3rs.ca network-security
/learning/tools/john-the-ripper :: tool-guide-12

defender@hack3rs:~/learning/tools$ open john-the-ripper

John the Ripper

Password auditing & hash analysis

John the Ripper is a password auditing and recovery tool used by defenders in authorized environments to validate password strength, analyze hash formats, and support credential security remediation workflows.

how-to-learn-this-tool-like-a-defender

Study the tool in layers: first what problem it solves, then how to run it safely, then how to interpret output, and finally how to combine it with other evidence. This is how beginners become reliable analysts.

  • $Know when the tool is the right choice (and when it is not).
  • $Run a safe baseline command in a lab or authorized environment.
  • $Interpret the output in context instead of treating it as truth by itself.
  • $Correlate with other evidence sources (logs, packets, assets, owner context).
  • $Document findings and next actions so another analyst can reproduce your work.

preflight-checklist-before-using-tool

  • $Confirm authorization, target scope, and acceptable impact before running commands.
  • $Define the question first (troubleshooting, validation, hunting, triage, remediation proof).
  • $Identify the evidence source you will use to confirm or challenge tool output.
  • $Record time, host, interface/segment, and command used so results are reproducible.
  • $Decide what 'normal' should look like before testing edge cases or suspicious behavior.
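The record-keeping step above can be automated with a small wrapper. This is a sketch, not a standard: the log file name and timestamp format are arbitrary choices for illustration.

```shell
# Hypothetical helper: log UTC timestamp, host, and the exact command
# before running it, so results are reproducible by another analyst.
# audit-log.txt is an arbitrary path chosen for this sketch.
LOGFILE="audit-log.txt"

run_logged() {
    printf '%s %s $ %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$(uname -n)" "$*" >> "$LOGFILE"
    "$@"
}

run_logged echo "baseline check"
```

Reviewing the log after a session gives you the time/host/command record the checklist asks for, with no extra effort per command.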

how-experts-read-output

  • $Field recognition: Which fields actually matter for the question you asked?
  • $Scope validation: Does this output represent the host/segment/time window you intended?
  • $Confidence check: Is this direct evidence, inference, or a heuristic guess?
  • $Correlation step: Which second source should confirm this result (logs, PCAP, ticket, CMDB, host telemetry)?
  • $Decision step: What action should follow (close, escalate, tune, scan deeper, validate manually)?

ethical-use-and-defense-scope

Use John the Ripper only in authorized password auditing, IR validation, or recovery workflows. Hashes and recovered passwords are highly sensitive and require strict access, storage, and retention controls.

Do not test hashes from systems or accounts outside documented scope. Password auditing should be governed by written approval, clear objectives, and remediation commitments.

The defensive outcome should be stronger controls (better password policy, MFA coverage, privileged account review, and credential hygiene), not just recovered-password statistics.

tool-history-origin-and-purpose

  • $When created: Mid-1990s, by Solar Designer (Alexander Peslyak); the first public releases are commonly dated to 1996.
  • $Why it was created: Administrators and defenders needed a way to test password strength against real hashes and detect weak credentials before adversaries could exploit them.

John the Ripper was designed to identify weak password hashes so administrators could strengthen them before attackers exploited them.

why-defenders-still-use-it

Defenders use John the Ripper for authorized password audits, training, and validation of password policies and hash handling practices. It remains useful for teaching password cracking concepts, hash formats, and defensive remediation planning.

How the tool evolved
  • +Expanded with jumbo/community variants, more hash format support, and broader platform use.
  • +Became a staple in password auditing and DFIR training workflows.
  • +Still valuable as a teaching tool for credential security fundamentals and policy validation.

when-this-tool-is-a-good-fit

  • +Authorized internal password audits and credential hygiene reviews.
  • +IR validation after hash exposure or compromise scenarios.
  • +Training on hash formats, salts, and password policy effectiveness.
  • +Remediation planning for weak passwords on high-value accounts.

when-to-use-another-tool-or-source

  • !When you need host process/user context, pair with endpoint or OS logs.
  • !When you need ownership and business impact, pair with CMDB/ticketing/asset context.
  • !When the tool output is ambiguous, validate using a second evidence source before concluding.
  • !When production risk is high, test in a lab first and use change coordination.

1. What John the Ripper Solves for Defenders

John the Ripper helps defenders test whether stored password hashes and password policies are resilient against realistic password guessing techniques in authorized scenarios.

It is also valuable for hash format handling and education. Many defenders first learn how different hash formats behave, how salts matter, and why strong password storage parameters change attack cost through John-based labs.

Used correctly, John supports evidence-based credential risk management and helps teams prioritize remediation after password database exposure or internal password audits.
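The point about salts can be demonstrated without touching any real credentials. A minimal sketch, assuming the openssl CLI is available (1.1.1 or later for the -6 option): hashing the same password twice with sha512crypt produces different digests, because each call picks a fresh random salt.

```shell
# Lab illustration of why salts matter: identical passwords, different
# hashes. The password is an invented example; requires openssl 1.1.1+.
h1=$(openssl passwd -6 'Spring2024!')
h2=$(openssl passwd -6 'Spring2024!')
echo "$h1"
echo "$h2"
[ "$h1" != "$h2" ] && echo "same password, different hashes (random salt)"
```

This is also why salted formats defeat precomputed lookup tables: every stored hash must be attacked individually.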

2. Defensive Workflow: From Hash Input to Remediation

Start by validating hash format and input integrity. Misparsed or mixed hash sources can waste time and produce bad conclusions. Use format checks and small test runs before scaling.

Define the test plan and budget: wordlist-only, rules, incremental mode, or targeted masks in a lab. Record what you attempted so stakeholders understand the meaning of both successes and failures.

Translate outcomes into risk and action. Which accounts were weak, which were privileged, which controls failed, and which remediation steps are now required?
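The three steps above can be sketched as a lab-only script. The filenames (hashes.txt, wordlist.txt) and session name are placeholders, and the john commands only run when the tool is installed and the input file exists, so the sketch is a no-op elsewhere.

```shell
# Lab-only workflow sketch: validate input, run small, then review.
# HASHES, WORDLIST, and SESSION are placeholder names for this sketch.
HASHES="hashes.txt"
WORDLIST="wordlist.txt"
SESSION="jtr-audit-baseline"

if command -v john >/dev/null 2>&1 && [ -f "$HASHES" ]; then
    # 1. Input integrity: confirm John supports the format you expect.
    john --list=formats | head -n 5
    # 2. Small test run before scaling: wordlist only, named session.
    john --wordlist="$WORDLIST" --session="$SESSION" "$HASHES"
    # 3. Review results; record the command and counts in your notes.
    john --show "$HASHES"
else
    echo "john or $HASHES not present; run this only in your authorized lab"
fi
```

Recording the session name up front also makes --restore and --status unambiguous later in the audit.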

3. Comparing John the Ripper and Hashcat in Teaching

John the Ripper and Hashcat both support password auditing, but they teach different operator habits: John (especially the jumbo build) auto-detects many hash formats and runs well on ordinary CPUs, while Hashcat requires an explicit hash-mode number and is optimized for GPU acceleration. Learning both helps defenders understand hash formats, cracking strategies, and audit planning more deeply.

For a teaching site, emphasize common defensive principles across both tools: authorization, scope, evidence handling, remediation focus, and careful interpretation of results.

The tool choice matters less than the discipline of the audit process and how well the findings are turned into improved credential security.

4. Expert Habits for Password Audit Work

Experts document format assumptions, command choices, and test budgets. They do not hide uncertainty or present a limited test as proof of strong passwords.

They classify accounts by criticality and tie recovered credentials to immediate controls: resets, MFA, service account review, and detection updates.

They also train teams on the lessons learned so password security improves organization-wide rather than only in the audited dataset.
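The criticality-first habit can be applied directly to the findings tracker used later in the cli-workflows section. A sketch with invented account rows: filter high-criticality entries so remediation starts with them.

```shell
# Illustrative findings tracker (same columns as the cli-workflows
# section); the account rows below are invented examples.
mkdir -p jtr-audit/notes
{
  printf 'account,result,criticality,action,owner\n'
  printf 'svc-backup,cracked,high,reset+rotate-keys,infra\n'
  printf 'jdoe,cracked,low,reset,helpdesk\n'
} > jtr-audit/notes/findings.csv

# Pull high-criticality rows first so remediation starts with them.
awk -F, '$3 == "high"' jtr-audit/notes/findings.csv
# -> svc-backup,cracked,high,reset+rotate-keys,infra
```

Sorting remediation by the criticality column, rather than by crack count, keeps the report focused on risk instead of statistics.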

scenario-teaching-playbooks

Use these scenario patterns to practice choosing the tool appropriately. The point is not just running commands; it is learning when and why the tool helps in a real defensive workflow.

1. Authorized internal password audits and credential hygiene reviews.

Suggested starting block: Format Detection And Baseline Audit

  • $Define the question you are trying to answer and the scope you are allowed to inspect.
  • $Collect baseline evidence using the selected command block.
  • $Interpret the result using known-good behavior and environment context.
  • $Correlate with another source (host logs, SIEM, tickets, inventory, or packet data).
  • $Record findings, confidence level, and the next defensive action.

2. IR validation after hash exposure or compromise scenarios.

Suggested starting block: Rules And Session Handling

  • $Define the question you are trying to answer and the scope you are allowed to inspect.
  • $Collect baseline evidence using the selected command block.
  • $Interpret the result using known-good behavior and environment context.
  • $Correlate with another source (host logs, SIEM, tickets, inventory, or packet data).
  • $Record findings, confidence level, and the next defensive action.

3. Training on hash formats, salts, and password policy effectiveness.

Suggested starting block: Defensive Findings Tracking

  • $Define the question you are trying to answer and the scope you are allowed to inspect.
  • $Collect baseline evidence using the selected command block.
  • $Interpret the result using known-good behavior and environment context.
  • $Correlate with another source (host logs, SIEM, tickets, inventory, or packet data).
  • $Record findings, confidence level, and the next defensive action.

4. Remediation planning for weak passwords on high-value accounts.

Suggested starting block: Format Detection And Baseline Audit

  • $Define the question you are trying to answer and the scope you are allowed to inspect.
  • $Collect baseline evidence using the selected command block.
  • $Interpret the result using known-good behavior and environment context.
  • $Correlate with another source (host logs, SIEM, tickets, inventory, or packet data).
  • $Record findings, confidence level, and the next defensive action.

cli-workflows

Practical defensive workflows and lab-safe commands. Validate in a sandbox or authorized environment before using them in production.

cli-walkthroughs-with-expected-output

Start with one representative command from each workflow block. Read the sample output and explanation so you know what to look for when you run it yourself.

Format Detection And Baseline Audit

Beginner
Command
john --list=formats | head -n 30
Example Output
# illustrative only: the exact list depends on your build; the jumbo
# build adds hundreds of formats
descrypt, bsdicrypt, md5crypt, md5crypt-long, bcrypt, scrypt, LM, AFS,
tripcode, dummy, crypt

$ how to read it: Check for expected fields first, then validate whether the output actually answers your question. If not, refine scope or collect a second evidence source before concluding.

Rules And Session Handling

Intermediate
Command
john --wordlist=wordlist.txt --rules hashes.txt --session=jtr-audit
Example Output
# illustrative only: counts, detected format, and timing depend on input
Using default input encoding: UTF-8
Loaded 4 password hashes with 4 different salts (sha512crypt [...])
Press 'q' or Ctrl-C to abort, almost any other key for status

$ how to read it: Check for expected fields first, then validate whether the output actually answers your question. If not, refine scope or collect a second evidence source before concluding.

Defensive Findings Tracking

Advanced
Command
mkdir -p jtr-audit/{inputs,outputs,notes}
Example Output
# mkdir -p prints nothing on success; confirm the layout with ls:
#   ls jtr-audit  ->  inputs  notes  outputs

$ how to read it: Check for expected fields first, then validate whether the output actually answers your question. If not, refine scope or collect a second evidence source before concluding.

command-anatomy-and-expert-usage

This breaks down each command so learners understand intent, risk, and interpretation. Expert use is not about memorizing syntax; it is about selecting the right command for the right question and reading the result correctly.

Format Detection And Baseline Audit

Beginner
Command
john --list=formats | head -n 30
Command Anatomy
  • $Base command: john
  • $Primary arguments/options: --list=formats (the pipe to head -n 30 is shell post-processing to keep the list readable, not a john option).
  • $Operator goal: run this command only when it answers a clear defensive question.
Use And Risk

$ intent: Quick evidence extraction from logs or command output.

$ risk: Review command impact before running; validate in lab first if uncertain.

$ learning focus: Baseline command: learn what normal output looks like.

Show sample output and interpretation notes
# review output for expected fields, errors, and warnings
# compare against a known-good baseline in your environment

$ expert reading pattern: Confirm the output matches your intended scope, identify the key fields, then validate with a second source before making decisions.

Format Detection And Baseline Audit

Beginner
Command
john --wordlist=wordlist.txt hashes.txt
Command Anatomy
  • $Base command: john
  • $Primary arguments/options: --wordlist=wordlist.txt hashes.txt
  • $Operator goal: run this command only when it answers a clear defensive question.
Use And Risk

$ intent: Collect, validate, or document evidence in a defensive workflow.

$ risk: Review command impact before running; validate in lab first if uncertain.

$ learning focus: Intermediate step: refine scope or extract more useful evidence.

Show sample output and interpretation notes
# review output for expected fields, errors, and warnings
# compare against a known-good baseline in your environment

$ expert reading pattern: Confirm the output matches your intended scope, identify the key fields, then validate with a second source before making decisions.

Format Detection And Baseline Audit

Beginner
Command
john --show hashes.txt
Command Anatomy
  • $Base command: john
  • $Primary arguments/options: --show hashes.txt
  • $Operator goal: run this command only when it answers a clear defensive question.
Use And Risk

$ intent: Collect, validate, or document evidence in a defensive workflow.

$ risk: Review command impact before running; validate in lab first if uncertain.

$ learning focus: Advanced step: use after baseline and validation are understood.

Show sample output and interpretation notes
# review output for expected fields, errors, and warnings
# compare against a known-good baseline in your environment

$ expert reading pattern: Confirm the output matches your intended scope, identify the key fields, then validate with a second source before making decisions.

Rules And Session Handling

Intermediate
Command
john --wordlist=wordlist.txt --rules hashes.txt --session=jtr-audit
Command Anatomy
  • $Base command: john
  • $Primary arguments/options: --wordlist=wordlist.txt --rules hashes.txt --session=jtr-audit
  • $Operator goal: run this command only when it answers a clear defensive question.
Use And Risk

$ intent: Collect, validate, or document evidence in a defensive workflow.

$ risk: Review command impact before running; validate in lab first if uncertain.

$ learning focus: Baseline command: learn what normal output looks like.

Show sample output and interpretation notes
# review output for expected fields, errors, and warnings
# compare against a known-good baseline in your environment

$ expert reading pattern: Confirm the output matches your intended scope, identify the key fields, then validate with a second source before making decisions.

Rules And Session Handling

Intermediate
Command
john --restore=jtr-audit
Command Anatomy
  • $Base command: john
  • $Primary arguments/options: --restore=jtr-audit
  • $Operator goal: run this command only when it answers a clear defensive question.
Use And Risk

$ intent: Collect, validate, or document evidence in a defensive workflow.

$ risk: Review command impact before running; validate in lab first if uncertain.

$ learning focus: Intermediate step: refine scope or extract more useful evidence.

Show sample output and interpretation notes
# review output for expected fields, errors, and warnings
# compare against a known-good baseline in your environment

$ expert reading pattern: Confirm the output matches your intended scope, identify the key fields, then validate with a second source before making decisions.

Rules And Session Handling

Intermediate
Command
john --status=jtr-audit
Command Anatomy
  • $Base command: john
  • $Primary arguments/options: --status=jtr-audit
  • $Operator goal: run this command only when it answers a clear defensive question.
Use And Risk

$ intent: Collect, validate, or document evidence in a defensive workflow.

$ risk: Review command impact before running; validate in lab first if uncertain.

$ learning focus: Advanced step: use after baseline and validation are understood.

Show sample output and interpretation notes
# review output for expected fields, errors, and warnings
# compare against a known-good baseline in your environment

$ expert reading pattern: Confirm the output matches your intended scope, identify the key fields, then validate with a second source before making decisions.

Defensive Findings Tracking

Advanced
Command
mkdir -p jtr-audit/{inputs,outputs,notes}
Command Anatomy
  • $Base command: mkdir
  • $Primary arguments/options: -p jtr-audit/{inputs,outputs,notes}
  • $Operator goal: run this command only when it answers a clear defensive question.
Use And Risk

$ intent: Collect, validate, or document evidence in a defensive workflow.

$ risk: Review command impact before running; validate in lab first if uncertain.

$ learning focus: Baseline command: learn what normal output looks like.

Show sample output and interpretation notes
# review output for expected fields, errors, and warnings
# compare against a known-good baseline in your environment

$ expert reading pattern: Confirm the output matches your intended scope, identify the key fields, then validate with a second source before making decisions.

Defensive Findings Tracking

Advanced
Command
printf "account,result,criticality,action,owner\n" > jtr-audit/notes/findings.csv
Command Anatomy
  • $Base command: printf
  • $Primary arguments/options: the CSV header format string; the > redirection is handled by the shell, which writes the header line to jtr-audit/notes/findings.csv.
  • $Operator goal: run this command only when it answers a clear defensive question.
Use And Risk

$ intent: Collect, validate, or document evidence in a defensive workflow.

$ risk: Review command impact before running; validate in lab first if uncertain.

$ learning focus: Intermediate step: refine scope or extract more useful evidence.

Show sample output and interpretation notes
# review output for expected fields, errors, and warnings
# compare against a known-good baseline in your environment

$ expert reading pattern: Confirm the output matches your intended scope, identify the key fields, then validate with a second source before making decisions.

Defensive Findings Tracking

Advanced
Command
column -s, -t jtr-audit/notes/findings.csv
Command Anatomy
  • $Base command: column
  • $Primary arguments/options: -s, (use comma as the field separator) and -t (align fields into a table), applied to jtr-audit/notes/findings.csv.
  • $Operator goal: run this command only when it answers a clear defensive question.
Use And Risk

$ intent: Collect, validate, or document evidence in a defensive workflow.

$ risk: Review command impact before running; validate in lab first if uncertain.

$ learning focus: Advanced step: use after baseline and validation are understood.

Show sample output and interpretation notes
# review output for expected fields, errors, and warnings
# compare against a known-good baseline in your environment

$ expert reading pattern: Confirm the output matches your intended scope, identify the key fields, then validate with a second source before making decisions.

Format Detection And Baseline Audit

john --list=formats | head -n 30
john --wordlist=wordlist.txt hashes.txt
john --show hashes.txt

Rules And Session Handling

john --wordlist=wordlist.txt --rules hashes.txt --session=jtr-audit
john --restore=jtr-audit
john --status=jtr-audit

Defensive Findings Tracking

mkdir -p jtr-audit/{inputs,outputs,notes}
printf "account,result,criticality,action,owner\n" > jtr-audit/notes/findings.csv
column -s, -t jtr-audit/notes/findings.csv

defensive-use-cases

  • $Authorized internal password audits and credential hygiene reviews.
  • $IR validation after hash exposure or compromise scenarios.
  • $Training on hash formats, salts, and password policy effectiveness.
  • $Remediation planning for weak passwords on high-value accounts.

common-mistakes

  • $Skipping hash format validation and drawing conclusions from bad input.
  • $Reporting recovered passwords without prioritization by account criticality.
  • $Storing recovered credentials insecurely during or after the audit.
  • $Treating a limited test window as proof that passwords are strong.

expert-habits-for-free-self-study

This site is a free teaching resource. Use this loop to train yourself like a working defender: ask a question, collect evidence, interpret carefully, validate, document, and repeat.

  • $Start with the least invasive command that can answer your question.
  • $Write down why you ran the command before interpreting the output.
  • $Treat output as evidence, not truth, until validated against another source.
  • $Save exact commands used so another analyst can reproduce your findings.
  • $Capture 'normal' examples during calm periods for future comparison.
  • $Escalate only after you can explain what you observed and why it matters.

knowledge-check

  • ?What question is this tool best suited to answer first?
  • ?What permissions or scope approvals are needed before using it?
  • ?Which second evidence source should you pair with it for higher confidence?
  • ?What does normal output look like for your environment?

teaching-answer-guide

Show teaching hints
  • #Start from the tool’s role and the scenario you are investigating.
  • #Never rely on one tool alone for high-confidence incident decisions.
  • #Document normal output patterns during calm periods so anomalies are easier to spot.
  • #Prefer lab validation for new commands, rules, or scans before production use.

practice-plan

# Practice on test hashes and document format assumptions before running John.
# Run a baseline wordlist test, then a rules-based test, and compare results.
# Create a remediation tracker for recovered passwords by account criticality.
# Compare your John workflow with a Hashcat workflow and note shared best practices.
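Step 2 of the plan (compare baseline and rules results) can be done with standard text tools once each pass's cracked accounts are exported one per line. A sketch with invented filenames and account names:

```shell
# Hypothetical cracked-account lists from two passes (sorted, one
# account per line); the file and account names are invented examples.
printf 'alice\nbob\n'        > baseline-cracked.txt
printf 'alice\nbob\ncarol\n' > rules-cracked.txt

# Accounts recovered only by the rules pass (comm requires sorted input).
comm -13 baseline-cracked.txt rules-cracked.txt
# -> carol
```

The delta tells you what the rules added over the plain wordlist, which is the evidence stakeholders need to judge whether the extra test budget was worthwhile.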