Cybersecurity Labs - Module 7

Hands-on CGRC, CISM, and ISSAP advanced scenarios with interactive GUIs, terminals, dashboards, and exact validation.

These Labs Cover All Cybersecurity Certifications

CompTIA Security+, CompTIA CySA+, CompTIA PenTest+, CompTIA SecurityX, ISC2 CISSP, ISC2 SSCP, ISC2 CCSP, ISC2 CGRC,
ISC2 CSSLP, ISC2 ISSAP, ISC2 ISSEP, ISC2 ISSMP, ISACA CISA, ISACA CISM, ISACA CRISC, ISACA CDPSE

ISC2 CGRC / ISACA CISM / ISC2 ISSAP Labs

Advanced governance, risk management, and security architecture scenarios with detailed instructions, tips, dynamic dashboards, exact validation, and safe reset.

Lab 19: GRC Framework Implementation
CGRC
GUI
Scenario: NIST 800-53 Compliance Assessment & Documentation
You're tasked with conducting a NIST 800-53 compliance assessment for a financial services application. Identify system scope, select controls, perform gap analysis, document findings, and create a risk response plan. Follow the exact sequence.

Learning Objectives:

  • Define system boundary and categorize information types
  • Select appropriate security control baselines
  • Document compliance gaps and develop POA&M
CGRC / CISA / CRISC

Step-by-Step Instructions

  1. Step 1: System Categorization (FIPS 199)

    Before selecting security controls, you must first categorize the system using FIPS 199 (Federal Information Processing Standard). This standard requires you to assess the potential impact if the system's confidentiality, integrity, or availability were compromised. For a Payment Processing System handling financial transactions and PII, a breach could result in significant financial loss, regulatory penalties, and reputational damage.

    Why HIGH for all three:

    Confidentiality (HIGH): Payment card data and PII exposure could cause severe harm to individuals and trigger PCI-DSS/GDPR violations.
    Integrity (HIGH): Unauthorized modification of transaction data could result in significant financial losses.
    Availability (HIGH): System downtime prevents revenue generation and violates SLAs with customers.

    Action: In the System Categorization panel, select Confidentiality: HIGH, Integrity: HIGH, Availability: HIGH, then click Submit Categorization.

    Why this matters: System categorization drives the entire Risk Management Framework (RMF) process. A HIGH categorization requires the most stringent controls from NIST 800-53, ensuring adequate protection for sensitive financial data. Incorrect categorization leads to either over-spending on unnecessary controls or inadequate protection.
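    The categorization logic above can be sketched in a few lines. This is an illustrative sketch of the FIPS 199 "high water mark" rule (the overall impact level is the highest rating across confidentiality, integrity, and availability), not part of the lab GUI:

    ```python
    # FIPS 199 high-water-mark categorization: the overall impact level
    # is the maximum of the three security-objective ratings.
    LEVELS = ["LOW", "MODERATE", "HIGH"]

    def overall_categorization(confidentiality: str, integrity: str, availability: str) -> str:
        """Return the overall FIPS 199 impact level for a system."""
        ratings = (confidentiality, integrity, availability)
        return max(ratings, key=LEVELS.index)

    # Payment Processing System: HIGH across all three objectives.
    print(overall_categorization("HIGH", "HIGH", "HIGH"))    # HIGH
    # Even one elevated objective raises the overall level.
    print(overall_categorization("LOW", "MODERATE", "LOW"))  # MODERATE
    ```

    Because a single HIGH rating pulls the whole system to HIGH, categorize each objective honestly rather than averaging them.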
  2. Step 2: Select Control Baseline

    NIST SP 800-53 defines three security control baselines (LOW, MODERATE, HIGH) that correspond to FIPS 199 categorization. Since our system is categorized as HIGH, we must select the HIGH baseline which includes 372 security controls across 20 control families. The HIGH baseline includes all controls from LOW and MODERATE plus additional controls for high-impact systems.

    Control families included: Access Control (AC), Audit and Accountability (AU), Assessment, Authorization, and Monitoring (CA), Configuration Management (CM), Contingency Planning (CP), Identification and Authentication (IA), Incident Response (IR), Maintenance (MA), Media Protection (MP), Physical and Environmental Protection (PE), Planning (PL), Personnel Security (PS), Risk Assessment (RA), System and Services Acquisition (SA), System and Communications Protection (SC), System and Information Integrity (SI), and more.

    Action: Click Select Baseline, choose NIST 800-53 Rev 5 HIGH, then click Apply Baseline.

    Why this matters: Selecting the correct baseline ensures you implement controls proportionate to your system's risk level. Using a LOW baseline for a HIGH-impact system would leave critical gaps; using HIGH for a LOW system wastes resources. The baseline becomes your compliance checklist.
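    The cumulative nature of the baselines can be modeled with sets. This is an illustrative sketch only; the control IDs are sample identifiers, not a complete baseline:

    ```python
    # Baselines are cumulative: HIGH includes everything in MODERATE,
    # which includes everything in LOW, plus additional controls.
    LOW_CONTROLS = {"AC-2", "AU-2", "IA-5"}       # sample LOW-baseline controls
    MODERATE_EXTRA = {"CM-3", "SC-8"}             # added at MODERATE
    HIGH_EXTRA = {"AC-2(11)", "AU-9(2)"}          # enhancements added at HIGH

    def baseline_controls(level: str) -> set:
        controls = set(LOW_CONTROLS)
        if level in ("MODERATE", "HIGH"):
            controls |= MODERATE_EXTRA
        if level == "HIGH":
            controls |= HIGH_EXTRA
        return controls

    # Each baseline is a strict superset of the one below it.
    assert baseline_controls("LOW") < baseline_controls("MODERATE") < baseline_controls("HIGH")
    print(sorted(baseline_controls("HIGH")))
    ```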
  3. Step 3: Perform Gap Analysis

    Gap analysis compares your current security posture against the required control baseline. For each of the 372 HIGH baseline controls, you must determine whether the control is: (1) Fully Implemented, (2) Partially Implemented, (3) Planned, or (4) Not Implemented. Controls that are not fully implemented represent compliance gaps that must be documented and remediated.

    Common gaps in financial systems:

    AC-2 (Account Management): MFA not enforced for privileged accounts
    AU-2 (Audit Events): Incomplete logging of privileged actions
    SC-8 (Transmission Confidentiality): TLS not enforced for all connections
    IA-5 (Authenticator Management): Weak password policies
    CM-3 (Configuration Change Control): No formal change control board

    Action: Click Run Gap Analysis. The analysis will identify 12 non-compliant controls. You must review at least 3 gaps by checking the "Reviewed" checkbox to confirm you've analyzed the findings.

    Critical: Gap analysis is not a one-time activity. Per FISMA, federal systems require continuous monitoring and annual reassessment. Document your methodology and retain evidence of analysis for auditors.
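    The tally the gap analysis performs can be sketched as follows. The implementation statuses mirror the four categories in this step, and the control IDs are the sample gaps listed above (current statuses are hypothetical):

    ```python
    # Tally gap-analysis results: anything short of "Fully Implemented"
    # is a compliance gap that must go on the POA&M.
    from collections import Counter

    assessment = {
        "AC-2": "Not Implemented",        # MFA not enforced for privileged accounts
        "AU-2": "Partially Implemented",  # incomplete privileged-action logging
        "SC-8": "Partially Implemented",  # TLS not enforced for all connections
        "IA-5": "Not Implemented",        # weak password policies
        "CM-3": "Planned",                # change control board being formed
        "SI-4": "Fully Implemented",      # system monitoring in place
    }

    status_counts = Counter(assessment.values())
    gaps = [cid for cid, status in assessment.items() if status != "Fully Implemented"]
    print(f"Compliance gaps: {len(gaps)} -> {sorted(gaps)}")
    # Compliance gaps: 5 -> ['AC-2', 'AU-2', 'CM-3', 'IA-5', 'SC-8']
    ```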
  4. Step 4: Assess Risk Impact

    For each gap identified, you must assess the associated risk using NIST SP 800-30 (Guide for Conducting Risk Assessments). Risk is calculated as Likelihood × Impact. For the AC-2 MFA gap: the likelihood of credential theft is HIGH (phishing attacks are common), and the impact of compromised privileged accounts is HIGH (full system access). Therefore, the overall risk level is CRITICAL.

    NIST 800-30 Risk Matrix:

    🔴 CRITICAL (9-10): HIGH Likelihood + HIGH Impact → Immediate remediation required
    🟠 HIGH (7-8): HIGH/MODERATE combinations → Remediate within 30-60 days
    🟡 MODERATE (4-6): MODERATE combinations → Remediate within 90 days
    🟢 LOW (1-3): LOW combinations → Accept or remediate within 180 days

    Action: Click Assess Risks. For Control AC-2 (MFA gap), select Likelihood: HIGH, Impact: HIGH, Risk Level: CRITICAL.

    Why this matters: Risk assessment drives prioritization. With limited resources, you must fix CRITICAL risks first. The Authorizing Official (AO) uses risk scores to make informed ATO decisions. Without documented risk assessment, the AO cannot accept residual risk.
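    The matrix above can be sketched as a simple scoring function. This uses an illustrative 1-3 scale per axis (the lab's 1-10 score bands are a presentation choice); only the qualitative risk level matters for the exercise:

    ```python
    # NIST SP 800-30-style qualitative risk: Likelihood x Impact -> level.
    SCORES = {"LOW": 1, "MODERATE": 2, "HIGH": 3}

    def risk_level(likelihood: str, impact: str) -> str:
        product = SCORES[likelihood] * SCORES[impact]  # 1..9
        if product >= 9:
            return "CRITICAL"   # HIGH likelihood x HIGH impact
        if product >= 6:
            return "HIGH"       # HIGH/MODERATE combinations
        if product >= 3:
            return "MODERATE"
        return "LOW"

    # AC-2 MFA gap: credential theft is likely, privileged compromise is severe.
    print(risk_level("HIGH", "HIGH"))      # CRITICAL
    print(risk_level("HIGH", "MODERATE"))  # HIGH
    ```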
  5. Step 5: Develop Plan of Action & Milestones (POA&M)

    The POA&M is a mandatory FISMA document that tracks identified weaknesses and planned corrective actions. For each gap, you must document: (1) the specific weakness, (2) resources required for remediation, (3) target completion date (milestone), and (4) responsible party. The milestone date should align with the risk level—CRITICAL risks require 30-day remediation.

    POA&M for AC-2 (MFA Gap):

    Weakness: MFA not enforced for privileged/admin accounts
    Resources: Azure AD Premium P2 licenses, 40 hours implementation time
    Milestone: 30 days (CRITICAL risk requires immediate action)
    Owner: IT Security Manager (has authority over identity systems)

    Action: Click Create POA&M. Select the appropriate options from each dropdown that match the AC-2 MFA weakness.

    Critical: POA&M is a legally binding commitment. Federal agencies must report POA&M status quarterly. Missed milestones trigger escalation to agency CIO. Include realistic timelines and ensure the responsible party has agreed to ownership.
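    A POA&M entry can be modeled as a small record whose milestone date follows from the risk level. This is an illustrative sketch using the remediation windows from Step 4 (CRITICAL 30 days, HIGH 60, MODERATE 90, LOW 180); the opening date is hypothetical:

    ```python
    # Minimal POA&M entry: the milestone is derived from the risk level.
    from dataclasses import dataclass
    from datetime import date, timedelta

    REMEDIATION_DAYS = {"CRITICAL": 30, "HIGH": 60, "MODERATE": 90, "LOW": 180}

    @dataclass
    class PoamEntry:
        control: str
        weakness: str
        resources: str
        owner: str
        risk_level: str

        def milestone(self, opened: date) -> date:
            """Target completion date based on the assessed risk level."""
            return opened + timedelta(days=REMEDIATION_DAYS[self.risk_level])

    entry = PoamEntry(
        control="AC-2",
        weakness="MFA not enforced for privileged/admin accounts",
        resources="Azure AD Premium P2 licenses, 40 hours implementation time",
        owner="IT Security Manager",
        risk_level="CRITICAL",
    )
    print(entry.milestone(date(2024, 1, 1)))  # 2024-01-31
    ```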
  6. Step 6: Document Compliance Status

    The System Security Plan (SSP) is the primary document describing your system's security posture. It consolidates all information from the RMF process: system description, categorization, control implementation status, risk assessment results, and POA&M. The SSP is required for ATO and must be updated whenever significant changes occur.

    SSP Sections Generated:

    System Description: Architecture, data flows, boundaries, interconnections
    Control Implementation: How each of 372 controls is implemented or planned
    POA&M: Documented weaknesses and remediation plans
    Risk Assessment: Threat analysis, vulnerability assessment, risk scores

    Action: Click Generate SSP. The system will compile all documentation into a comprehensive SSP document.

    Why this matters: The SSP is a living document—not a one-time deliverable. Per NIST 800-37, you must update the SSP after significant changes, security incidents, or annual reviews. Auditors will compare your SSP to actual system configuration to verify accuracy.
  7. Step 7: Submit for Authorization to Operate (ATO)

    The Authorization to Operate (ATO) is the formal decision by an Authorizing Official (AO) to accept the residual risk and permit the system to operate. The AO must be a senior official with authority to accept risk on behalf of the organization. For most organizations, this is the CISO. The ATO package includes the SSP, Security Assessment Report (SAR), and POA&M.

    Why CISO as Authorizing Official:

    • CISO has organizational authority to accept security risk
    • CISO understands technical risks and business impact
    • CISO is accountable to the Board for security posture
    • CISO can enforce remediation timelines across departments

    ATO Decisions: Approve (full authorization), Deny (system cannot operate), or Conditional (operate with POA&M constraints). ATO validity is typically 1-3 years with continuous monitoring.

    Action: Click Submit ATO Package. Select Authorizing Official: CISO, then click Submit.

    Critical: Operating without ATO (or with expired ATO) is a compliance violation. The AO personally assumes liability for accepted risk. Ensure all documentation is complete and accurate before submission—the AO will review and may request revisions.
  8. Step 8: Review & Download ATO Package Report

    After ATO submission, you must review the complete authorization package to understand what documentation is required for federal system authorization. The ATO package is a comprehensive collection of security documentation required by FISMA and NIST RMF that enables authorizing officials to make risk-based decisions.

    ATO Package Components:

    System Security Plan (SSP): Describes system boundary, environment, and implemented security controls
    Security Assessment Report (SAR): Documents independent testing results of security controls
    Plan of Action & Milestones (POA&M): Tracks identified weaknesses and planned remediation
    Risk Assessment Report (RAR): Provides detailed analysis of threats, vulnerabilities, and risks
    Authorization Decision: Formal acceptance of residual risk by the Authorizing Official

    Action: Click View ATO Report in the Documentation panel to review the complete 6-page authorization package. Then click Download PDF to save a copy of the official ATO documentation.

    Why this matters: Understanding what an ATO package contains is essential for anyone working in federal cybersecurity or GRC roles. The ATO package demonstrates due diligence, provides audit evidence, and serves as the official record of security authorization. You will be expected to create, review, and maintain these documents throughout your career.
  9. Step 9: Answer ATO Report Knowledge Check Questions

    📄 Download and carefully study the ATO report before answering these questions. These certification-level questions test your understanding of the ATO package documentation you just created.

    Question 1 of 3: According to the ATO report's Executive Summary, what is the overall System Categorization for the Payment Processing System per FIPS 199?

    Question 2 of 3: How many CRITICAL risk findings requiring immediate remediation within 30 days does the Security Assessment Report (SAR) section identify?

    Question 3 of 3: According to the POA&M section, what is the total estimated cost for remediating all identified security control deficiencies?

    📝 Note: All three questions must be answered correctly to complete this lab. Review the downloaded ATO PDF report carefully if you're unsure about any answers.
GRC Compliance Console

System Categorization

Control Baseline

Complete system categorization first.

Gap Analysis

Select baseline to run gap analysis.

Risk Assessment

Complete gap analysis to assess risks.

POA&M Status

Assess risks to create POA&M.

Documentation

Complete POA&M to generate documentation.

Progress: 0/8
Score: 0/100
Lab 20: Security Program Development
CISM
GUI
Scenario: Establish Information Security Program
As the Information Security Manager, you must establish a comprehensive security program aligned with business objectives. Define governance structure, develop policies, implement risk management, create incident response plan, and establish metrics. Execute in sequence.

Learning Objectives:

  • Establish information security governance framework
  • Develop risk management program with KRIs
  • Create incident response and business continuity plans
CISM / CISSP / CISA

GUI Step-by-Step Instructions

  1. Step 1: Define Governance Structure

    🎯 Goal: Establish formal information security governance that defines decision-making authority, accountability, and reporting relationships across the organization.

    💡 Why This Matters: Effective security governance ensures that security initiatives are aligned with business objectives and have executive support. Without proper governance, security becomes an afterthought rather than a strategic function. The governance structure determines who makes security decisions, who is accountable for security failures, and how security priorities are set and funded.

    Action: Click Define Governance. Set Steering Committee: Board-level oversight, CISO Reports To: CIO, Security Council Frequency: Monthly. Click Establish Governance.

    Why These Specific Settings: Board-level steering committee ensures security has executive visibility and budget authority. The CISO reporting to the CIO (a C-level executive) provides independence and direct access to leadership—reporting to lower-level IT management creates conflicts of interest. Monthly security council meetings ensure regular review of security posture, emerging threats, and ongoing initiatives without being so frequent as to cause meeting fatigue.
  2. Step 2: Develop Security Policies

    🎯 Goal: Create mandatory security policies that establish rules, expectations, and consequences for the entire organization.

    💡 Why This Matters: Policies are the foundation of any security program—they translate security strategy into enforceable rules. Without written policies, there are no standards to enforce, no basis for disciplinary action, and no way to demonstrate due diligence to auditors or regulators. Policies also set employee expectations and provide legal protection for the organization.

    Action: Click Create Policies. Select policy templates: Acceptable Use Policy, Data Classification Policy, Password Policy, Incident Response Policy. Set Review Cycle: Annual. Click Approve Policies.

    Why These Policies: Acceptable Use defines permitted use of company systems (critical for legal protection). Data Classification establishes handling requirements based on sensitivity (PII, financial, public). Password Policy sets authentication standards (length, complexity, rotation). Incident Response Policy defines roles and procedures during security events. Annual review ensures policies remain current with evolving threats and regulations.
  3. Step 3: Establish Risk Management Framework

    🎯 Goal: Implement a structured methodology for identifying, assessing, and treating information security risks aligned with organizational risk appetite.

    💡 Why This Matters: Risk management is the core of information security—you cannot protect everything equally, so you must prioritize based on risk. A formal risk framework ensures consistent assessment methodology, documented risk decisions, and alignment between security investments and actual threats. Without risk management, you either over-invest in low-risk areas or under-protect critical assets.

    Action: Click Setup Risk Framework. Select Methodology: ISO 31000, Assessment Frequency: Quarterly, Risk Appetite: LOW (financial services). Click Initialize Framework.

    Why These Settings: ISO 31000 is an internationally recognized risk management standard that provides a structured approach compatible with other frameworks. Quarterly assessments balance thoroughness with practicality—more frequent would be burdensome, less frequent would miss emerging risks. LOW risk appetite is appropriate for financial services due to regulatory requirements (SOX, PCI-DSS, GLBA) and the severe consequences of data breaches in this sector.
  4. Step 4: Define Key Risk Indicators (KRIs)

    🎯 Goal: Establish measurable leading indicators that provide early warning of increasing risk exposure before incidents occur.

    💡 Why This Matters: KRIs are like early warning systems—they tell you when risk is increasing so you can act before an incident happens. Unlike KPIs which measure past performance, KRIs predict future problems. Effective KRIs enable proactive risk management rather than reactive firefighting, and they provide objective data for risk discussions with executives.

    Action: Click Add KRIs. Enter: KRI 1: "Unpatched Critical Vulnerabilities > 30 days", Threshold: 5, KRI 2: "Failed Login Attempts > 10/hour", Threshold: 100, KRI 3: "Privileged Access Reviews Overdue", Threshold: 10%. Click Save KRIs.

    Why These KRIs: Unpatched vulnerabilities indicate attack surface exposure—more than 5 critical vulnerabilities older than 30 days signals remediation process failure. Excessive failed logins (100/hour) may indicate brute force attacks or credential stuffing. Overdue privileged access reviews (>10%) mean users may retain access they no longer need, violating least privilege. Each threshold triggers escalation when exceeded.
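    The escalation logic these KRIs imply can be sketched as a threshold check. The three indicators and thresholds are the ones entered in this step; the current values are hypothetical sample telemetry:

    ```python
    # Evaluate each KRI (name, threshold, current value) and flag breaches.
    kris = [
        ("Unpatched critical vulnerabilities > 30 days", 5, 8),    # hypothetical reading
        ("Failed login attempts per hour", 100, 42),               # hypothetical reading
        ("Privileged access reviews overdue (%)", 10, 12),         # hypothetical reading
    ]

    breaches = [name for name, threshold, current in kris if current > threshold]
    for name in breaches:
        print(f"ESCALATE: {name}")  # exceeding a threshold triggers escalation
    ```

    Here the vulnerability and access-review KRIs breach their thresholds while failed logins stay well under theirs, so only the first and third escalate.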
  5. Step 5: Develop Incident Response Plan

    🎯 Goal: Create a comprehensive incident response plan that defines roles, procedures, and timelines for detecting, containing, and recovering from security incidents.

    💡 Why This Matters: Security incidents will happen—the question is how prepared you are to respond. A well-documented IRP reduces response time, minimizes damage, ensures consistent handling, and demonstrates due diligence to regulators. Without an IRP, incident response becomes chaotic, key steps are missed, and the organization suffers greater harm.

    Action: Click Create IRP. Fill: CSIRT Lead: "Security Operations Manager", Escalation Path: "SOC → Security Manager → CISO → CIO → Board", SLA: Critical (1 hour), High (4 hours), Medium (24 hours). Click Approve IRP.

    Critical Requirements: The Security Operations Manager leads the CSIRT because they have 24/7 visibility and technical expertise. The escalation path ensures appropriate stakeholders are informed based on severity. SLAs ensure timely response—critical incidents (ransomware, active breach) require 1-hour initial response, high severity (data exposure) within 4 hours, medium within 24 hours. Test the IRP annually through tabletop exercises simulating real scenarios.
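    Measuring whether a response met these SLAs is a simple time comparison. This is an illustrative sketch using the SLAs defined in this step; the incident timestamps are hypothetical:

    ```python
    # Check whether an incident's first response met its severity SLA.
    from datetime import datetime, timedelta

    SLA = {
        "Critical": timedelta(hours=1),
        "High": timedelta(hours=4),
        "Medium": timedelta(hours=24),
    }

    def sla_met(severity: str, detected: datetime, first_response: datetime) -> bool:
        return (first_response - detected) <= SLA[severity]

    detected = datetime(2024, 3, 1, 9, 0)
    print(sla_met("Critical", detected, datetime(2024, 3, 1, 9, 45)))  # True  (45 min <= 1 h)
    print(sla_met("High", detected, datetime(2024, 3, 1, 14, 30)))     # False (5.5 h > 4 h)
    ```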
  6. Step 6: Establish Business Continuity Plan

    🎯 Goal: Develop business continuity and disaster recovery plans that ensure critical business functions can continue during and after disruptions.

    💡 Why This Matters: Business continuity planning protects the organization from existential threats—ransomware, natural disasters, or infrastructure failures that could halt operations. The BCP ensures you can recover critical systems within acceptable timeframes and with acceptable data loss. Without BCP, an extended outage could result in revenue loss, regulatory penalties, and reputational damage.

    Action: Click Create BCP. Set: RTO (Recovery Time Objective): 4 hours for critical systems, RPO (Recovery Point Objective): 1 hour data loss max, Backup Frequency: Hourly incremental, Daily full. Click Finalize BCP.

    Understanding RTO/RPO: RTO (Recovery Time Objective) of 4 hours means critical systems must be restored within 4 hours of an outage—this drives infrastructure investments like hot standby systems. RPO (Recovery Point Objective) of 1 hour means you can lose at most 1 hour of data—this drives backup frequency (hourly incrementals). These aggressive targets are typical for financial services where downtime directly impacts revenue and regulatory compliance.
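    The link between backup frequency and RPO can be checked directly: worst-case data loss is bounded by the interval between successful backups. A minimal sketch, assuming backups always complete on schedule:

    ```python
    # Verify that a backup schedule satisfies the RPO: in the worst case,
    # a failure occurs just before the next backup, losing one full interval.
    from datetime import timedelta

    rpo = timedelta(hours=1)                      # max tolerable data loss
    incremental_interval = timedelta(hours=1)     # hourly incremental backups

    def rpo_satisfied(backup_interval: timedelta, rpo: timedelta) -> bool:
        return backup_interval <= rpo

    print(rpo_satisfied(incremental_interval, rpo))  # True:  hourly backups meet a 1-hour RPO
    print(rpo_satisfied(timedelta(hours=24), rpo))   # False: daily-only backups would breach it
    ```

    This is why the 1-hour RPO in this step drives the hourly incremental schedule: a daily full backup alone could lose up to 24 hours of transactions.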
  7. Step 7: Define Program Metrics & Dashboard

    🎯 Goal: Establish metrics that measure security program effectiveness and present them to executive leadership through an actionable dashboard.

    💡 Why This Matters: What gets measured gets managed—without metrics, you cannot demonstrate program value, justify budget requests, or identify areas needing improvement. Executive dashboards translate technical security data into business terms that leadership understands, enabling informed decision-making about security investments and priorities.

    Action: Click Setup Dashboard. Select metrics: Risk Score Trend, Vulnerability Remediation Rate, Security Awareness Training Completion, Incident Response Time, Audit Findings Status. Set Report Frequency: Monthly. Click Generate Dashboard.

    Why These Metrics: Risk Score Trend shows overall security posture over time. Vulnerability Remediation Rate measures patch management effectiveness. Training Completion indicates human risk reduction. Incident Response Time demonstrates operational capability. Audit Findings Status tracks compliance gaps. Monthly reporting aligns with board/steering committee meeting cadence. Keep dashboards visual and business-focused—executives need trends and summaries, not technical details.
Security Program Management Console

Governance Structure

Not configured

Policies

Define governance structure first.

Risk Framework

Develop policies first.

Key Risk Indicators

Establish risk framework first.

Incident Response

Define KRIs first.

Business Continuity

Create incident response plan first.

Program Dashboard

Finalize BCP to generate dashboard.

Progress: 0/7
Score: 0/100
Lab 21: Zero Trust Architecture (Linux/NFTables)
ISSAP
Terminal+GUI
Scenario: Design & Implement Zero Trust Network Architecture
Design a Zero Trust Architecture for a multi-cloud environment following NIST SP 800-207. Implement micro-segmentation, identity-based access, continuous verification, and encrypted communications. Configure network, IAM policies, and monitoring components. Commands must be exact and in order.

Learning Objectives:

  • Design Zero Trust network architecture with micro-segmentation
  • Implement identity and access management controls
  • Configure continuous monitoring and verification
ISSAP / CISSP / CCSP

Architecture Design Instructions

  1. Step 1: Verify nftables Installation

    🎯 Goal: Confirm that nftables (the modern Linux packet filtering framework) is installed and ready for use.

    💡 Why This Matters: Before configuring any firewall rules, you must verify that the required tools are present on the system. The nftables framework replaced the legacy iptables and provides a more consistent syntax for packet filtering. Checking the version confirms installation and ensures compatibility with the commands used in this lab.

    Type exactly:

    nft --version
    Command Breakdown:
    nft — The nftables command-line utility for managing packet filtering rules
    --version — Flag that prints version information instead of executing any rule changes

    Expected Output: Version information will be displayed (e.g., "nftables v1.0.9"). If you see an error like "command not found," nftables is not installed.
  2. Step 2: Create the Zero Trust Table

    🎯 Goal: Create a dedicated nftables table to contain all Zero Trust firewall rules.

    💡 Why This Matters: In nftables, all firewall rules are organized into tables. A table is a container that holds chains (which contain rules). By creating a separate table named "zt" for Zero Trust rules, you keep them isolated from any other firewall configurations on the system, making auditing, troubleshooting, and maintenance significantly easier.

    Type exactly:

    nft add table inet zt
    Command Breakdown:
    nft — The nftables command-line utility for managing packet filtering rules
    add table — Subcommand to create a new table in the nftables ruleset
    inet — Address family that handles both IPv4 and IPv6 traffic simultaneously (dual-stack)
    zt — The name we assign to this table (short for Zero Trust)

    Expected Output: No output is displayed. This command completes silently when successful. An error message would indicate a problem (e.g., table already exists).
  3. Step 3: Create Input Chain with Default-Deny Policy

    🎯 Goal: Create an input chain that blocks all inbound traffic by default, implementing the Zero Trust "deny by default" principle.

    💡 Why This Matters: Zero Trust Architecture fundamentally rejects the traditional "trust but verify" model. Instead, it implements "never trust, always verify." By setting the default policy to DROP, every inbound packet is blocked unless an explicit rule permits it. This is the opposite of legacy firewalls that often allow internal traffic freely.

    Type exactly:

    nft add chain inet zt input { type filter hook input priority 0; policy drop; }
    Command Breakdown:
    add chain — Creates a new chain within the specified table
    inet zt input — Chain named "input" in the inet zt table
    type filter — This chain will filter packets (as opposed to NAT or route)
    hook input — Attaches to the kernel's input hook (incoming packets destined for local processes)
    priority 0 — Processing priority (0 is standard; lower numbers = higher priority)
    policy drop — Default action for packets not matching any rule: silently discard

    Expected Output: No output is displayed. This command completes silently when successful.

    Critical Zero Trust Principle: Default-deny is the foundation of Zero Trust. Unlike traditional perimeter security that allows internal traffic, Zero Trust treats ALL traffic as untrusted until verified.
  4. Step 4: Create Forward Chain with Default-Deny Policy

    🎯 Goal: Create a forward chain that blocks all traffic passing through the system, preventing unauthorized lateral movement between network segments.

    💡 Why This Matters: The forward chain controls traffic that passes through the system when it acts as a router or gateway between networks. In a Zero Trust architecture, blocking forwarded traffic by default is essential for micro-segmentation. If an attacker compromises one segment (e.g., DMZ web servers), they cannot automatically traverse to other segments (e.g., Database servers) without explicit authorization.

    Type exactly:

    nft add chain inet zt forward { type filter hook forward priority 0; policy drop; }
    Command Breakdown:
    add chain inet zt forward — Creates a chain named "forward" in our Zero Trust table
    hook forward — Attaches to the kernel's forward hook (packets being routed through, not destined for local host)
    policy drop — Block all forwarded traffic by default, enforcing micro-segmentation

    Expected Output: No output is displayed. This command completes silently when successful.

    Micro-segmentation Benefit: This prevents lateral movement—a critical Zero Trust control. Each network segment becomes an isolated security zone.
  5. Step 5: Create Output Chain (Allow Outbound)

    🎯 Goal: Create an output chain that allows outbound traffic from local processes while enabling monitoring and logging.

    💡 Why This Matters: The output chain controls traffic originating from processes running on this host. In this Zero Trust design, we allow outbound traffic by default (policy accept) because the host needs to initiate connections for updates, API calls, and legitimate business functions. However, all outbound traffic will be logged for monitoring and anomaly detection.

    Type exactly:

    nft add chain inet zt output { type filter hook output priority 0; policy accept; }
    Command Breakdown:
    add chain inet zt output — Creates a chain named "output" in our Zero Trust table
    hook output — Attaches to the kernel's output hook (outgoing packets from local processes)
    policy accept — Allow outbound traffic by default (to be logged and monitored)

    Expected Output: No output is displayed. This command completes silently when successful.

    Production Note: While outbound is allowed here for simplicity, production Zero Trust deployments often implement egress filtering to restrict outbound traffic to known-good destinations only, preventing data exfiltration and C2 callbacks.
  6. Step 6: Allow Established/Related Connections

    🎯 Goal: Add a stateful rule that allows return traffic for connections initiated by this host, enabling legitimate responses while maintaining security.

    💡 Why This Matters: Without this rule, the default-deny input policy would block ALL inbound traffic, including responses to connections you initiated (like DNS queries or HTTPS requests). Stateful packet inspection tracks connection states, allowing the firewall to distinguish between legitimate response traffic and unsolicited connection attempts.

    Type exactly:

    nft add rule inet zt input ct state established,related accept
    Command Breakdown:
    add rule — Adds a rule to an existing chain
    inet zt input — Target the input chain in our Zero Trust table
    ct state — Connection tracking state matcher (stateful inspection)
    established — Packets belonging to an already-established connection
    related — Packets related to an existing connection (e.g., ICMP errors, FTP data channels)
    accept — Action: allow these packets through

    Expected Output: No output is displayed. This command completes silently when successful.

    Understanding Stateful Inspection: This rule is essential for any functional firewall. It permits response traffic for YOUR outbound connections while still blocking unsolicited inbound attempts.
  7. Step 7: Create IP Set for Network Segments

    🎯 Goal: Create a named set to store IP address ranges representing our micro-segmented network zones.

    💡 Why This Matters: Named sets in nftables allow you to group IP addresses or networks for efficient rule matching. Instead of writing separate rules for each network segment, you can reference the set in rules and manage segment membership centrally. This is essential for scalable micro-segmentation—when you add or remove network segments, you only update the set, not every rule.

    Type exactly:

    nft add set inet zt segments { type ipv4_addr; flags interval; }
    Command Breakdown:
    add set — Creates a named set within the table
    inet zt segments — Set named "segments" in our Zero Trust table
    type ipv4_addr — Set contains IPv4 addresses
    flags interval — Allows CIDR notation ranges (e.g., 10.0.1.0/24) instead of individual IPs only

    Expected Output: No output is displayed. This command completes silently when successful.

    Scalability Benefit: Sets make firewall rules more readable and maintainable. In large environments, you might have dozens of segments—managing them centrally in a set is far easier than embedding IPs in individual rules.
  8. Step 8: Populate Segments (DMZ/App/DB)

    🎯 Goal: Populate the segments set with three CIDR ranges representing our isolated network zones: DMZ, Application tier, and Database tier.

    💡 Why This Matters: Each CIDR range represents a logically isolated network segment. The DMZ (10.0.1.0/24) hosts public-facing services like web servers, the Application tier (10.0.2.0/24) runs business logic and app servers, and the Database tier (10.0.3.0/24) stores sensitive data. By defining these segments explicitly, we can create precise rules that control exactly which segments can communicate with which.

    Type exactly:

    nft add element inet zt segments { 10.0.1.0/24, 10.0.2.0/24, 10.0.3.0/24 }
    Command Breakdown:
    add element — Adds entries to an existing set
    inet zt segments — Target the "segments" set in our Zero Trust table
    10.0.1.0/24 — DMZ segment (256 IPs for public-facing services)
    10.0.2.0/24 — Application segment (256 IPs for internal app servers)
    10.0.3.0/24 — Database segment (256 IPs for backend data stores)

    Expected Output: No output is displayed. This command completes silently when successful.

    Micro-segmentation Principle: Each segment is isolated by default due to our forward chain's DROP policy. Traffic between segments requires explicit rules, limiting the blast radius if one segment is compromised.
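    To spot-check just this set without dumping the whole ruleset, nft can list a single set. A sketch of roughly what to expect (line wrapping and indentation may vary slightly by nft version):

    ```
    $ nft list set inet zt segments
    table inet zt {
    	set segments {
    		type ipv4_addr
    		flags interval
    		elements = { 10.0.1.0/24, 10.0.2.0/24, 10.0.3.0/24 }
    	}
    }
    ```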
  9. Step 9: Verify Ruleset Configuration

    🎯 Goal: Display and verify the complete firewall ruleset to confirm all tables, chains, sets, and rules are configured correctly.

    💡 Why This Matters: Before moving to policy configuration, you must verify that all nftables components are in place. This command displays the entire ruleset in a clean, readable format. If anything is missing or misconfigured, you'll see it here and can correct it before proceeding.

    Type exactly:

    nft -s list ruleset
    Command Breakdown:
    nft — The nftables command-line utility
    -s — Stateless output (omits runtime counters for cleaner display)
    list ruleset — Shows all tables, chains, sets, and rules in the system

    Expected Output: You should see the complete "zt" table with input/forward/output chains, their policies (drop/drop/accept), the stateful rule in the input chain, and the "segments" set populated with your three CIDR ranges.

    Verification Best Practice: Always verify your firewall configuration after making changes. Compare the output against your intended configuration to catch any typos or missing rules before they cause connectivity issues.
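    For reference, the verified ruleset should look roughly like the sketch below. The exact chain bodies and priority values depend on the rules entered in steps 1-6 (not reproduced here), so only the components named in this step are shown:

    ```
    table inet zt {
    	set segments {
    		type ipv4_addr
    		flags interval
    		elements = { 10.0.1.0/24, 10.0.2.0/24, 10.0.3.0/24 }
    	}
    	chain input {
    		type filter hook input priority 0; policy drop;
    		ct state established,related accept
    	}
    	chain forward {
    		type filter hook forward priority 0; policy drop;
    	}
    	chain output {
    		type filter hook output priority 0; policy accept;
    	}
    }
    ```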
  10. Step 10: Implement Least Privilege Policy (GUI)

    Zero Trust requires explicit authorization for every access request. Now we create a policy that allows only specific traffic: DMZ segment can communicate with App segment on TCP port 443 (HTTPS) only. All other traffic between segments remains blocked.

    Why this configuration: The DMZ hosts public-facing web servers that need to communicate with backend application servers. By restricting to port 443 only, we ensure only HTTPS traffic is permitted—no SSH, no database ports, no other protocols.

    Action: In the Policy Engine panel, configure:

    • From Segment: Select DMZ
    • To Segment: Select App
    • Protocol/Port: Select 443/TCP

    Then click Apply Policy.

    Least Privilege Principle: Grant only the minimum access necessary to perform a function. DMZ servers only need HTTPS to App tier—nothing more. This limits attack surface and potential lateral movement.
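    Under the hood, a GUI policy like this maps to a single forward-chain rule. A hedged sketch of the nftables equivalent (the rule form is illustrative; the lab applies this policy through the GUI, not the console):

    ```shell
    # Allow only HTTPS from the DMZ segment to the App segment;
    # everything else still falls through to the forward chain's drop policy
    nft add rule inet zt forward ip saddr 10.0.1.0/24 ip daddr 10.0.2.0/24 \
        tcp dport 443 ct state new,established accept
    ```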
  11. Step 11: Enable Telemetry & Analytics (GUI)

    Zero Trust requires comprehensive logging of all access attempts for continuous verification and threat detection. Enable all telemetry options and set retention to 1 year to meet compliance requirements (SOC 2, HIPAA, PCI-DSS typically require 1 year minimum).

    Why 1 year retention: Security investigations often require historical data. Ransomware attacks may not be detected for months. Compliance audits need evidence of controls over time. 1 year provides adequate forensic runway.

    Action: In the Telemetry & Analytics panel:

    • Verify all checkboxes are enabled: Log All Access Attempts, Session Recording, UEBA, SIEM Integration
    • Select Log Retention: 1 year

    Then click Activate Telemetry.

    Continuous Monitoring: Zero Trust never assumes trust is permanent. Continuous logging enables detection of compromised credentials, insider threats, and anomalous behavior patterns that point-in-time authentication cannot catch.
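    On the packet-filter side, nftables can feed this telemetry pipeline by logging denied inter-segment traffic to syslog, which the SIEM integration can then ingest. A hedged sketch (the rate limit and log prefix are illustrative choices, not part of the lab's validated commands):

    ```shell
    # Log (rate-limited) and count unauthorized segment-to-segment attempts;
    # packets then fall through to the forward chain's drop policy
    nft add rule inet zt forward ip saddr @segments ip daddr @segments \
        limit rate 5/second log prefix "zt-denied: " counter
    ```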
  12. Step 12: Validate Architecture Against NIST SP 800-207 (GUI)

    NIST Special Publication 800-207 defines the Zero Trust Architecture standard. Run an automated compliance check to verify your implementation against all 7 ZTA tenets (enumerated in full at the end of this step): in short, verify explicitly, enforce least privilege, assume breach, encrypt all traffic, monitor continuously, apply dynamic policy, and collect comprehensive telemetry.

    Why validation matters: Zero Trust is not a product but an architecture. Without validation, you may have gaps that create false confidence. Automated checks ensure all components are properly configured before production deployment.

    Action: In the Architecture Validation panel, click Run ZT Compliance Check. Review results to confirm all 7 tenets pass. Then click Generate Architecture Report to document compliance.

    NIST SP 800-207 Tenets: (1) All data sources and computing services are resources. (2) All communication is secured regardless of location. (3) Access is granted per-session. (4) Access is determined by dynamic policy. (5) Enterprise monitors and measures integrity. (6) Authentication/authorization is dynamic and strictly enforced. (7) Enterprise collects info about assets, network, and communications.
    Zero Trust Architecture Console: type commands EXACTLY as shown. Order matters.