Disclaimer
The information contained in this article is provided for educational purposes only. The authors, Border Cyber Group, BorderElliptic, and Jonathan Brown make no warranties, express or implied, regarding the accuracy, completeness, or fitness for any particular purpose of the material presented here. Implementation of any security configuration described in this article is undertaken entirely at the reader's own risk.
Kernel hardening, encrypted storage configuration, Secure Boot key management, TPM2 PCR sealing, and related procedures described herein have the potential to render a system unbootable, inaccessible, or permanently locked if implemented incorrectly, incompletely, or without adequate preparation. The authors accept no liability whatsoever for data loss, system failure, hardware damage, loss of access to encrypted volumes, financial loss, or any other consequence — direct, indirect, incidental, or consequential — arising from the use or misuse of the information presented.
This article assumes the reader possesses sufficient Linux systems administration experience to understand what a given command does before executing it. If you are copying commands from this article into a terminal without understanding their purpose and effect, stop. The recovery procedures in Section 8 exist because experienced administrators make mistakes in this domain. They are not a safety net for the incautious.
Nothing in this article constitutes professional security consulting advice. Every system, threat model, and operational environment is different. The authors strongly recommend testing all procedures in a non-production environment before applying them to any system containing data you cannot afford to lose.
By implementing any procedure described in this article, the reader acknowledges that they have read and understood this disclaimer in its entirety, that they accept sole responsibility for the outcome, and that the authors bear no liability for the consequences of their actions.
You have been warned.
Introduction
The security conversation for Linux users almost always starts at the software layer. Firewalls. SELinux policies. Encrypted volumes. Fail2ban watching your SSH logs. That's the right conversation to have — but it's the second one. The first happens in silicon, in firmware, and in boot sequence decisions most users never think about until something goes wrong.
Here's the uncomfortable truth: if an attacker can modify your bootloader, your encrypted volume is theater. If they can load an unsigned kernel module, your carefully tuned AppArmor profiles are irrelevant. If your IOMMU isn't enforcing DMA isolation, a malicious PCIe device or compromised driver can read arbitrary memory regardless of what your firewall policy says. Software security built on an unexamined hardware foundation is a house with deadbolts on the doors and open windows.
The good news is that consumer hardware in 2024-2025 ships with a remarkable set of security primitives that almost nobody uses. TPM2 chips sit idle on motherboards that never perform a measured boot. Intel CET — a hardware control-flow integrity mechanism that can stop entire classes of exploits cold — goes unconfigured on chips that fully support it. IOMMU units initialize in lazy mode, trading a narrow but real security window for a performance gain most users would never notice.
This article is a practical map of those primitives: what they are, how they vary across Intel and AMD platforms, how they fit together into a coherent architecture, and what implementation actually looks like on a self-built Linux machine. The target audience is the experienced Linux user building their own hardware — the online gamer with significant personal data and credentials on their rig, the penetration tester whose machine is itself a high-value target containing client credentials and sensitive tooling, the privacy-focused builder who understands that "nobody's targeting me" is not a threat model.
We'll be honest about the ceilings. Consumer hardware without server-grade memory encryption means a kernel-level attacker can still read process memory — that's a real limitation and we won't pretend otherwise. But the architecture described here means reaching that level requires a chain of escalations, each of which breaks on multiple independent controls. Automated attacks — the vast majority of real-world threats — don't make it past the first or second rung of that ladder. The goal is to make your machine the one that doesn't fall to the script, the drive-by, or the opportunist.
Start with what you have. Use what the hardware already offers. The gap between a default Linux install and a genuinely hardened one is smaller than you think — and most of it is configuration, not custom kernel patches or exotic tooling.
Section 1: The Threat Model — What Are We Actually Defending Against?
Most security hardening guides skip this part. They open with a command block, walk you through enabling a dozen kernel parameters, and leave you with a system that's technically more locked down but with no coherent understanding of why any of it matters or which parts are actually load-bearing for your specific situation. That produces cargo-cult security — configurations copied from a blog post, half-understood, never tested, abandoned when they break something.
Threat modeling is not an academic exercise. It's the difference between spending a weekend configuring IMA/EVM enforcement because your use case genuinely requires runtime binary integrity verification, and spending that same weekend on something that moves the needle for the threats you actually face. Both are valid outcomes — but you can't make that call without the model.
So before we touch hardware or kernel parameters, let's be precise about what we're defending against, and what we're not.
1.1 The Attacker Taxonomy
For a personal Linux workstation with internet exposure, the realistic attacker population breaks into two broad categories.
Remote attackers are by far the most common and represent the overwhelming majority of real-world compromises. This category includes automated scanners probing for exposed services with known vulnerabilities, opportunistic exploitation of unpatched daemons, drive-by browser and renderer exploits that land unprivileged code execution on your machine, and supply chain compromises — malicious packages, tampered package mirrors, or backdoored upstream dependencies. The defining characteristic of remote attackers is that they are mostly automated, largely opportunistic, and operating at scale. They are not specifically targeting you. They are casting a wide net and your machine is one of millions being probed.
The exception worth noting: pen testers are a special case. If you are a penetration tester, your machine is a genuinely attractive specific target. It contains client credentials, network maps, captured traffic, and attack tooling that could be repurposed against your clients. A sophisticated adversary who identifies you as a pen tester has strong motivation to compromise your machine specifically. This elevates your threat model meaningfully above the average user.
Physical attackers require proximity and are therefore rarer, but the consequences of a successful physical attack are severe and often undetectable. The classic scenarios are the evil maid attack — brief unattended physical access used to modify your bootloader, plant a hardware keylogger, or flash modified firmware — and direct disk extraction, where an attacker removes your storage and reads it in another machine. Cold boot attacks, where RAM contents are extracted from a recently powered machine by freezing the DIMMs to slow memory decay, are technically feasible but require specialized equipment and close timing; they are in the threat model for nation-state adversaries and should be considered by pen testers working on sensitive engagements.
For most readers, physical attacks are relevant primarily in the laptop context — a machine that travels to client sites, conference networks, and untrusted environments. For a desktop that lives in your home, the physical threat model is real but considerably lower than the remote one.
1.2 The Escalation Ladder
Understanding how attacks actually progress is more useful than a list of attack categories. Think of a successful compromise as an attacker climbing a ladder, where each rung is a distinct privilege level and each step upward requires a separate exploit or technique:
[Network] → [Unprivileged process] → [Privileged process/root]
        → [Kernel] → [Firmware/UEFI] → [Hardware]

An attacker who achieves unprivileged remote code execution — say, through a browser exploit or a vulnerable network service — is standing on the second rung. They can execute code as your user but nothing more. To cause serious damage, they need to climb. Getting from an unprivileged process to root requires a local privilege escalation vulnerability. Getting from root to kernel requires a kernel exploit, which is significantly harder. Getting from kernel to firmware is harder still and generally requires either physical presence or a firmware vulnerability that can be exploited from software.
This framing matters because it reveals where the leverage is. The goal of hardware and kernel-level security is not to make the bottom rungs unreachable — that's the job of your firewall, your browser sandbox, your patch discipline, and your application security posture. The goal is to ensure that reaching any given rung does not automatically hand the attacker the next one. A compromised browser process should find no path to kernel memory. A kernel-level attacker should find storage keys that the hardware refuses to release. Root should be a dead end, not a launchpad.
This is a fundamentally different framing than "prevent compromise." It's "limit the consequences of compromise." The distinction matters enormously for how you prioritize your hardening work.
1.3 Use-Case Specific Threat Considerations
For online gamers: The primary threat vectors are credential theft (account credentials, payment information, personal data stored on the machine), kernel-level anti-cheat software that itself represents a large privileged attack surface, and the fact that gaming machines tend to run a lot of third-party software from sources with varying security hygiene. The threat model is predominantly remote and automated. Physical attacks are rarely relevant. The highest-value controls are those that limit what a compromised unprivileged process can reach — storage encryption, process isolation, and network namespace separation.
For penetration testers: The threat model is elevated across the board. You operate on adversarial networks where the network infrastructure itself may be hostile. Your machine contains high-value data that motivates targeted attacks specifically against you. You routinely run tools that process untrusted input — packet captures, malicious files, hostile web content — which means your attack surface from within your own tooling is substantial. You need the full stack: secure boot, measured boot, storage sealed to TPM state, kernel lockdown, and aggressive network namespace separation for different engagement contexts. Your machine should be harder to own than your clients' networks. That's the bar.
For privacy-focused builders: The threat model is primarily remote and centers on data exfiltration rather than credential theft. The specific concern is that a compromised system could silently exfiltrate sensitive data — documents, communications, browsing history — without obvious indicators. IMA integrity enforcement, process isolation, and encrypted storage with TPM-sealed keys are particularly high-value here because they limit what a remote attacker can access even after achieving code execution.
1.4 The Honest Ceiling
Hardware and kernel security has real limits on consumer silicon, and being clear about them upfront prevents wasted effort and false confidence.
No hardware memory encryption on most consumer hardware. Without Intel TME or AMD SME active, a kernel-level attacker can read arbitrary process memory. Encryption keys, session tokens, and sensitive data that exist in RAM are readable to someone who has achieved kernel compromise. This is a real limitation. The correct response is not despair but architectural compensation: minimize what lives in process memory, ensure kernel compromise is the required escalation level, and make reaching that level as hard as possible.
Firmware attacks are largely outside this threat model. A sufficiently sophisticated adversary with physical access and enough time can potentially implant persistent firmware-level malware. Measured boot and TPM attestation provide detection capability but not prevention against a determined firmware attacker with physical access. For personal use on a desktop in a controlled environment, this is not a realistic threat. For high-value targets operating in hostile physical environments, it's worth knowing this ceiling exists.
User behavior bypasses everything. A hardened kernel and a sealed LUKS volume offer no protection against a user who pastes a base64-decoded script from a forum into their terminal, reuses a compromised password, or runs an untrusted binary with elevated privileges. The controls described in this article are not a substitute for security hygiene — they're a complement to it. The most sophisticated hardware root of trust ever built cannot survive a determined user defeating it from the inside.
With the threat model established, we can now be precise about which hardware features address which threats — and build a coherent architecture rather than a pile of unrelated configurations.
Section 2: The Hardware Landscape — What Your Silicon Actually Offers
Not all consumer hardware is created equal when it comes to security primitives. The gap between what a modern Intel Core Ultra and a four-year-old Ryzen can offer is significant, and the gap between what any consumer chip offers versus what you actually have configured and active is usually larger still. Before planning an architecture, you need an accurate inventory of what your specific hardware can do. This section maps the relevant features by platform so you know what you're working with — and what you're not.
At the end of this section there's a diagnostic command block you can run as root to generate a complete hardware security capability snapshot for your own machine. Use it. The rest of this article will make considerably more sense against your specific output than against generalizations.
2.1 The TPM2 — Your Hardware Root of Trust
The Trusted Platform Module is the keystone of almost everything described in this article. If you take one thing from this section, let it be this: if your machine has a TPM2 and you're not using it, you are leaving the single most powerful consumer security primitive entirely on the table.
A TPM2 is a dedicated cryptographic microcontroller, implemented either as a discrete chip on your motherboard (or an add-in module in its TPM header) or as firmware running in a protected execution environment on your CPU (an fTPM, or firmware TPM). It does several things that nothing else in your system can replicate:
Platform Configuration Registers (PCRs) are the TPM's measurement log. During boot, each stage of the boot process — firmware, bootloader, kernel, initramfs — hashes itself and extends that hash into a specific PCR. The PCR values represent a cryptographic summary of exactly what software ran during boot. You cannot fake or retroactively modify PCR values without physical access to the TPM hardware itself.
Key sealing allows the TPM to encrypt a secret (such as your LUKS volume key) against a specific PCR state. The TPM will only release the sealed secret if the current PCR values match the values recorded when the secret was sealed. The practical consequence: if anyone modifies your bootloader, swaps your kernel, or alters your initramfs between boots, the PCR values change, the TPM refuses to unseal the key, and your encrypted storage remains locked — even if the attacker knows your passphrase.
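The sealing workflow described above can be driven entirely by systemd on modern distributions. The sketch below uses systemd-cryptenroll (systemd 248 or later); the device path is a placeholder you must replace with your own LUKS2 partition, and the PCR selection (0, 4, 7) is one reasonable policy, not the only one.

```shell
# Sketch: enroll a TPM2-sealed key into a LUKS2 volume.
# /dev/nvme0n1p3 is a PLACEHOLDER -- substitute your actual LUKS partition.
# PCRs 0+4+7 bind the key to firmware, bootloader, and Secure Boot state.

# List current keyslots and enrollments first:
systemd-cryptenroll /dev/nvme0n1p3

# Enroll a TPM2-sealed key (your existing passphrase keyslot remains as fallback):
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=0+4+7 /dev/nvme0n1p3

# Then add 'tpm2-device=auto' to the options field of the relevant
# /etc/crypttab entry and regenerate the initramfs so early boot tries
# the TPM2 path before prompting for a passphrase.
```

Keep the passphrase keyslot. The whole point of PCR sealing is that the key is withheld when the boot chain changes, and legitimate changes (firmware updates, kernel upgrades) change PCRs too.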
Attestation allows the TPM to cryptographically prove to an external party that a machine is running specific, unmodified software. This is more relevant to enterprise and cloud contexts than personal use, but the underlying mechanism powers everything else.
Most consumer motherboards shipped in the last four years include either a discrete TPM2 header (where you can install an add-on module) or a firmware TPM built into the chipset. On Intel platforms the fTPM runs in the PCH; on AMD it runs in the Platform Security Processor (PSP). Discrete TPMs are marginally more attack-resistant — fTPMs have had firmware vulnerabilities, notably the AMD PSP fTPM vulnerability disclosed in 2023 — but both are usable for the threat model described here. Check your motherboard specifications and your UEFI firmware settings to confirm TPM2 is enabled. On Linux, /dev/tpm0 and /dev/tpmrm0 should be present if it's active.
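Before relying on any of this, confirm the TPM is actually visible to Linux. A minimal check, assuming the tpm2-tools package for the PCR read; every line degrades gracefully on machines without a TPM:

```shell
# Quick TPM2 visibility check.
ls /dev/tpm* 2>/dev/null || echo "No TPM device exposed to the kernel"
cat /sys/class/tpm/tpm0/tpm_version_major 2>/dev/null \
    || echo "No tpm0 entry in sysfs"
# Read the PCRs most relevant to boot integrity
# (0 = firmware, 4 = bootloader, 7 = Secure Boot state):
tpm2_pcrread sha256:0,4,7 2>/dev/null \
    || echo "tpm2-tools not installed or TPM inaccessible"
```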
2.2 Intel Platforms
VT-d and IOMMU
Intel's Virtualization Technology for Directed I/O (VT-d) is the mechanism that enables IOMMU — Input-Output Memory Management Unit — protection on Intel platforms. The IOMMU enforces that PCIe devices can only perform DMA (Direct Memory Access) into memory regions explicitly allocated to them. Without it, a compromised device driver, a malicious PCIe peripheral, or a rogue Thunderbolt device can potentially read or write arbitrary memory regardless of what the CPU thinks is isolated.
VT-d is present on most Intel Core and Core Ultra platforms but must be explicitly enabled in UEFI firmware settings. It then requires the intel_iommu=on kernel parameter to activate. Confirming it's actually working requires checking dmesg for DMAR initialization messages and verifying that iommu: Default domain type: Translated appears — not passthrough. If IOMMU is present but in passthrough mode, it is providing no protection.
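Verification can be scripted. A sketch, best run as root since unprivileged dmesg reads may be restricted; the exact DMAR message wording varies slightly across kernel versions:

```shell
# Confirm VT-d is initialized and enforcing translated DMA.
dmesg 2>/dev/null | grep -i "DMAR: IOMMU enabled" \
    || echo "No 'DMAR: IOMMU enabled' line (VT-d off, or dmesg restricted)"
dmesg 2>/dev/null | grep -i "iommu: Default domain type" \
    || echo "No default domain line found"
# Cross-check the cmdline actually in use on this boot:
grep -o "intel_iommu=[^ ]*" /proc/cmdline \
    || echo "intel_iommu= not present on the kernel cmdline"
```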
Intel CET — Control-Flow Enforcement Technology
CET is one of the most underappreciated security features in modern Intel silicon. It addresses one of the most common exploit techniques: return-oriented programming (ROP), where an attacker doesn't inject new code but instead chains together small snippets of existing executable code — called gadgets — to accomplish their goal. ROP attacks have been the dominant exploitation technique for years precisely because they bypass conventional code injection defenses.
CET has two components. IBT (Indirect Branch Tracking) enforces that indirect branch instructions — calls and jumps that compute their target at runtime — can only land on valid, marked destinations. A ROP gadget in the middle of a function is not a valid target and the CPU will raise a fault. Shadow Stack (SHSTK) maintains a parallel, hardware-protected copy of return addresses. When a function returns, the CPU checks the return address against the shadow stack copy; if they don't match — as they wouldn't in a stack smashing attack — execution halts.
CET has been present on Intel Core since 11th generation (Tiger Lake) and is fully implemented on 12th generation (Alder Lake) and newer. It requires kernel support (kernel-side IBT via CONFIG_X86_KERNEL_IBT, in mainline since 5.18; userspace shadow stack via CONFIG_X86_USER_SHADOW_STACK, in mainline since 6.6), and it requires userspace binaries compiled with -fcf-protection=full. The good news: glibc 2.35 and later ships CET-aware on supported platforms, which means your system libraries are already covered. Individual applications vary — you can check any binary with readelf -n /usr/bin/bash | grep -E "IBT|SHSTK", since CET-enabled binaries carry a GNU property note listing those features. In practice, CET enforcement on a modern Ubuntu or Fedora install covers the overwhelming majority of the attack surface without any manual intervention.
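To survey how much of your own system is CET-enabled, a sketch like the following works. The paths are illustrative (the libc path shown is Debian/Ubuntu-style), and readelf comes from binutils:

```shell
# Survey CET markings on a few common binaries. CET-enabled builds carry a
# GNU property note that readelf prints as e.g. "x86 feature: IBT, SHSTK".
for bin in /usr/bin/bash /usr/bin/ssh /usr/lib/x86_64-linux-gnu/libc.so.6; do
    [ -e "$bin" ] || continue
    feats=$(readelf -n "$bin" 2>/dev/null | grep -Eo "IBT|SHSTK" | sort -u | tr '\n' ' ')
    printf "%-45s %s\n" "$bin" "${feats:-no CET properties found}"
done
```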
Memory Protection Keys (PKU/OSPKE)
Memory Protection Keys are a lesser-known feature present on modern Intel desktop silicon that allows a process to partition its own virtual address space into up to sixteen independent protection domains, each with its own read/write/execute permissions that can be toggled with a single userspace instruction — no system call required. The practical application for security is protecting key material: a process handling cryptographic keys can place them in a PKU domain that is marked inaccessible except during the specific window when crypto operations are being performed. An attacker who has achieved code execution in that process cannot simply read memory to extract keys — they would need to explicitly re-enable access to that domain first, which introduces detectable overhead and narrows the exploitation window significantly. PKU is present on Skylake and newer Intel platforms. It is underutilized in practice but genuinely useful for applications that handle sensitive key material, including the kind of credential management tooling pen testers rely on.
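Before caring about PKU-aware applications, confirm the support exists at all: the pku flag in /proc/cpuinfo means the CPU has the feature, and ospke means the kernel has enabled it for userspace:

```shell
# Confirm Memory Protection Keys support on this machine.
grep -m1 flags /proc/cpuinfo | tr ' ' '\n' | grep -E "^(pku|ospke)$" \
    || echo "No PKU support detected on this CPU"
```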
Intel SGX — The Asterisk
Intel SGX deserves mention because it appears frequently in security literature and because the kernel config option CONFIG_X86_SGX=y may be present on your system even when SGX is not actually available. SGX provided hardware-isolated memory enclaves — regions the OS and hypervisor literally could not read — and was genuinely valuable for key management on platforms that supported it. It was, however, removed from most consumer desktop silicon starting with 12th generation (Alder Lake). If you don't see /dev/sgx_enclave on your system, you don't have it. The kernel config being set means the driver is compiled in; it doesn't mean the hardware feature exists. Don't build a security architecture around SGX without first confirming hardware availability.
Intel TME — Total Memory Encryption
TME encrypts the entire contents of DRAM at the memory controller level, providing protection against cold boot attacks and physical memory inspection. It is present on select Xeon and some Intel Core Ultra mobile SKUs but is largely absent from desktop consumer silicon. If you're building a desktop, don't count on it. If you're evaluating a laptop purchase with security in mind, checking for TME support is worthwhile.
2.3 AMD Platforms
AMD-Vi and IOMMU
AMD's equivalent of VT-d is AMD-Vi, activated via amd_iommu=on in the kernel cmdline. Coverage across the Ryzen platform is generally good. The same caveats apply as with Intel: must be enabled in UEFI, must be verified active in dmesg, and the default mode of operation should be confirmed as Translated rather than passthrough.
AMD SME — Secure Memory Encryption
SME is AMD's full-memory encryption equivalent to Intel TME. It is available on some Ryzen Pro and EPYC chips and on select consumer Ryzen generations — availability varies significantly by specific CPU model and is not reliably predictable from the product line name alone. Check directly: grep sme /proc/cpuinfo will show the sme flag if the hardware supports it. If it's present and you activate it via mem_encrypt=on in the kernel cmdline, you get full DRAM encryption with a key generated fresh at each power cycle — a meaningful win against cold boot attacks at essentially no user-visible performance cost on modern implementations.
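Putting the checks together, a sketch; the dmesg string shown is what recent kernels log when mem_encrypt=on takes effect, and the wording may differ on older kernels:

```shell
# Check for SME hardware support, then whether memory encryption is
# actually active on this boot (requires mem_encrypt=on on the cmdline).
grep -m1 flags /proc/cpuinfo | tr ' ' '\n' | grep -x "sme" \
    || echo "CPU does not advertise the sme flag"
dmesg 2>/dev/null | grep -i "Memory Encryption Features active" \
    || echo "No active memory encryption reported by the kernel"
```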
AMD SEV-SNP — The Important Caveat
AMD SEV-SNP (Secure Encrypted Virtualization with Secure Nested Paging) is AMD's flagship confidential computing technology, providing per-virtual-machine hardware memory encryption that even the hypervisor cannot read. It appears prominently in AMD security documentation and gets cited in security articles. It is server and EPYC silicon only. It is not present on any consumer Ryzen desktop or laptop processor. If you encounter recommendations to "use AMD SEV-SNP for your Ryzen workstation," that guidance is incorrect. The consumer AMD security story is solid, but it does not include SEV-SNP.
Platform Security Processor (PSP)
The AMD PSP is a dedicated ARM Cortex-A5 core embedded in every Ryzen processor, running its own firmware below the operating system. It handles fTPM operations, secure boot verification, and various platform management functions. It has been the subject of security research — vulnerabilities in PSP firmware have been disclosed and patched — and it represents a trust anchor that you do not fully control. Keep your AGESA firmware (the AMD firmware stack that includes PSP firmware) updated through your motherboard vendor's BIOS updates. This is not optional hygiene; it's the same category of maintenance as kernel security patches.
2.4 GPU Considerations
Graphics cards are large, complex PCIe devices with their own firmware, their own DMA capabilities, and their own execution environments. They are also frequently overlooked in security discussions because they don't obviously touch "security" workloads. That framing is wrong for several reasons.
IOMMU grouping matters more than most users realize. Your GPU must be in its own IOMMU group to be properly isolated from host memory. If your GPU shares an IOMMU group with other devices, DMA isolation is weakened for everything in that group. Check your IOMMU groups with find /sys/kernel/iommu_groups/ -type l | sort -V and verify your GPU is alone in its group. Motherboard slot assignment affects IOMMU grouping — if you have flexibility in which PCIe slot you use, it's worth checking which gives you cleaner isolation.
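The raw find output is hard to read at a glance; a small loop that prints each group with lspci descriptions makes the isolation question answerable immediately. A sketch, assuming lspci (pciutils) is installed:

```shell
# List IOMMU groups with the devices in each, so you can confirm the GPU
# sits alone in its group. Degrades gracefully when the IOMMU is off.
if [ -d /sys/kernel/iommu_groups ]; then
    for group in /sys/kernel/iommu_groups/*/; do
        echo "IOMMU group $(basename "$group"):"
        for dev in "$group"devices/*; do
            addr=$(basename "$dev")
            # lspci -s gives a one-line description of the device at that address
            echo "  $(lspci -s "$addr" 2>/dev/null || echo "$addr")"
        done
    done
else
    echo "No IOMMU groups: the IOMMU is not enabled on this boot"
fi
```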
AMD discrete GPUs are the better choice for security-focused builds. The amdgpu kernel driver is fully open source, in the mainline kernel tree, and requires no proprietary kernel blobs. This matters for module signing enforcement: if you're forcing signed modules, an AMD GPU works cleanly. The driver has been extensively reviewed, and there's no unsigned firmware-loading mechanism that requires carve-outs in your signing policy. AMD GPU firmware blobs (the binary firmware loaded by the driver at initialization) are loaded from userspace via the standard firmware loading mechanism, which IMA can verify.
NVIDIA presents complications. The proprietary NVIDIA kernel module has historically been unsigned and requires either disabling module signing enforcement or maintaining a specific carve-out in your signing policy. The nvidia-open module — NVIDIA's open kernel module, distinct from the proprietary driver — is the correct path for modern Turing and newer cards and is compatible with module signing. However, coverage is incomplete for older hardware, and the open module has had its own issues. If you're buying new hardware for a security-focused build, an AMD GPU removes an entire class of headache. If you have an existing NVIDIA card, verify your specific model is supported by nvidia-open and use that path.
Intel integrated graphics shares memory space with the CPU by design, which makes the IOMMU configuration particularly important. The i915 driver is in-tree and well-audited. The dmesg output on a properly configured Intel system should show the integrated graphics being added to an IOMMU group and managed under DMAR protection.
2.5 Motherboard and Firmware
UEFI is the non-negotiable baseline. If you are running legacy BIOS mode on a modern self-built machine there is no good reason for it and several security reasons to change. Secure Boot, which underpins the entire measured boot chain, requires UEFI. Every other feature discussed in this article either requires UEFI directly or benefits from the integrity guarantees that Secure Boot provides. Check your boot mode with bootctl status or by examining /sys/firmware/efi — if that path exists, you're in UEFI mode.
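A quick boot-mode check, safe to run anywhere:

```shell
# /sys/firmware/efi exists only when the kernel was booted via UEFI.
if [ -d /sys/firmware/efi ]; then
    echo "Boot mode: UEFI"
    bootctl status 2>/dev/null | head -15
else
    echo "Boot mode: legacy BIOS -- Secure Boot and measured boot unavailable"
fi
```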
The Secure Boot key hierarchy is something most users have never thought about despite having it active. Consumer motherboards ship with Microsoft's Certificate Authority enrolled as the trust anchor for Secure Boot. This means the firmware trusts Microsoft to vouch for what is allowed to boot. Your distro's bootloader is signed by a key that Microsoft has countersigned — that's the chain. This is functional and better than nothing, but it means your root of trust passes through a third party. For most users this is an acceptable tradeoff. For users who want genuine ownership of their trust chain — which includes anyone building a custom kernel for a hardened distribution — the appropriate step is either supplementing the Microsoft chain with your own Machine Owner Key (MOK) via shim, or replacing the platform keys outright with your own. We cover that in detail in Section 3.
Firmware update hygiene is non-negotiable. Gigabyte, ASUS, MSI, ASRock, and every other major motherboard vendor has shipped UEFI firmware with security vulnerabilities in the past several years. LogoFAIL — a class of vulnerabilities in UEFI image parsers disclosed in late 2023 — affected nearly every major vendor. PixieFail affected network boot implementations. These are not theoretical; they have working exploits and affect machines currently in use. Your UEFI firmware is a significant attack surface that most users update once at build time and then forget. Check your vendor's security advisories regularly. Most modern motherboards support firmware updates from within the UEFI interface or from a USB drive — there is no excuse for running firmware that is multiple versions behind.
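Where your vendor publishes to the LVFS, fwupd automates the check. A sketch; many desktop motherboard vendors do not publish there, so an empty result means "check the vendor site manually," not "you're current":

```shell
# Firmware update check via fwupd/LVFS; degrades gracefully if absent.
if command -v fwupdmgr >/dev/null 2>&1; then
    fwupdmgr refresh --force >/dev/null 2>&1   # fetch latest LVFS metadata
    fwupdmgr get-updates 2>/dev/null \
        || echo "No updates via LVFS; check vendor advisories directly"
else
    echo "fwupd not installed; check your motherboard vendor's site manually"
fi
```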
Diagnostic: Know Your Hardware Before You Harden It
Before proceeding with any implementation, run the following as root to generate a complete snapshot of your hardware's security capabilities. Save the output externally — you'll reference it throughout the process:
bash
{
echo "=== CPU & SECURITY FLAGS ==="
lscpu | grep -E "Model name|Architecture|Virtualization"
grep -m1 "flags" /proc/cpuinfo | tr ' ' '\n' | \
grep -E "sgx|smx|sme|vmx|aes|sha|tme|pku|ospke|vaes|ibt|shstk" | sort
echo -e "\n=== SGX STATUS ==="
ls /dev/sgx* 2>/dev/null || echo "No SGX devices"
echo -e "\n=== TPM ==="
ls /dev/tpm* 2>/dev/null || echo "No TPM device"
tpm2_getcap properties-fixed 2>/dev/null | head -20
echo -e "\n=== SECURE BOOT ==="
mokutil --sb-state 2>/dev/null
echo -e "\n=== IOMMU ==="
dmesg | grep -iE "iommu|dmar" | grep -iE "enabled|translated|default" | head -10
echo -e "\n=== CPU VULNERABILITIES ==="
grep -r "" /sys/devices/system/cpu/vulnerabilities/
echo -e "\n=== KERNEL SECURITY CONFIG ==="
grep -E "CONFIG_SECURITY|CONFIG_IMA|CONFIG_SGX|CONFIG_TCG_TPM|\
CONFIG_LOCKDOWN|CONFIG_MODULE_SIG|CONFIG_IOMMU|CONFIG_X86_KERNEL_IBT|\
CONFIG_X86_USER_SHADOW_STACK" /boot/config-$(uname -r) 2>/dev/null
echo -e "\n=== LSM STACK ==="
cat /sys/kernel/security/lsm 2>/dev/null
} 2>&1 | tee ~/hw_security_snapshot.txt

The output of this command tells you exactly which features are available, which are active, and which are compiled into your kernel but not yet enabled. The gap between available and active is where most of the practical work in this article lives.
Section 3: The Boot Chain — Where Security Either Begins or Fails
Everything in this article depends on one thing being true: that the software running on your machine is the software you put there. If an attacker can modify your bootloader before your security controls load, those controls are worthless. If they can swap your kernel for one that ignores IOMMU boundaries or disables LSM enforcement, your carefully configured lockdown policy never had a chance. If your encrypted storage unseals before anything verifies the integrity of what's doing the unsealing, the encryption is providing weaker guarantees than you think.
The boot chain is where security either begins with a verified root of trust or doesn't begin at all. It is also, for most self-built Linux machines, the most neglected layer — partly because getting it right requires understanding several interlocking components, and partly because the consequences of misconfiguration range from "system won't boot" to "you've locked yourself out of your own encrypted drive." Neither outcome encourages experimentation.
This section demystifies the chain, explains what each component is actually doing, and gives you the practical implementation steps to get it right — including how to recover when it goes wrong, because it will.
3.1 The Chain of Trust, Step by Step
A properly configured secure boot chain looks like this:
```
UEFI Firmware (vendor-signed, stored in ROM)
  │
  ▼ [Secure Boot: checks bootloader signature against enrolled keys]
Bootloader (GRUB or systemd-boot, distro-signed or MOK-signed)
  │
  ▼ [Bootloader measures itself into TPM PCRs]
Kernel + initramfs (measured into TPM PCRs)
  │
  ▼ [TPM compares current PCR state to sealed policy]
  ├─► [MATCH]    → LUKS key released → system boots normally
  └─► [MISMATCH] → key withheld → system halts, passphrase required
```
Each arrow in this diagram is a verification step. Secure Boot verifies cryptographic signatures — it checks that each component was signed by a key the firmware trusts before allowing it to execute. The TPM's PCR measurements are a parallel and complementary record — they don't prevent execution of modified components, but they detect it and refuse to release the storage key to a system that doesn't match the known-good state.
These two mechanisms together — signature verification and measurement — give you something genuinely powerful: an attacker who modifies anything in the boot chain either trips the signature check (which prevents the modified component from running) or changes the PCR state (which prevents storage from unsealing). Bypassing both simultaneously, without physical access to the TPM hardware itself, is not a realistic attack against this threat model.
3.2 Secure Boot — Enabling It Correctly
If Secure Boot is currently disabled, re-enabling it is the first thing to do — and on a system running a mainstream distro with a signed kernel, it should be straightforward.
Re-enable Secure Boot in your UEFI firmware settings. On Gigabyte boards it's typically under the Security tab or the Boot tab depending on firmware version. Save and reboot. If your distribution ships a signed bootloader and signed kernel — Ubuntu, Fedora, Debian, and openSUSE all do, while Arch does not and requires you to sign the kernel yourself (for example with sbctl) — the system should boot cleanly. Verify immediately after boot:
```bash
mokutil --sb-state
# Expected: SecureBoot enabled
# Confirm the boot was UEFI
ls /sys/firmware/efi && echo "UEFI mode confirmed"
```
If Secure Boot re-enablement breaks your boot, the most common culprits are unsigned kernel modules that the initramfs tries to load, a bootloader that was installed in legacy mode, or a custom kernel you built without signing it. The fix in each case is specific:
- Unsigned modules: identify the module with `dmesg | grep "module verification failed"` and either sign it (covered in Section 4) or remove it from the initramfs
- Legacy-mode bootloader: reinstall GRUB in UEFI mode with `grub-install --target=x86_64-efi`
- Custom unsigned kernel: sign it with your MOK key (see 3.3 below) or temporarily boot a signed stock kernel
One important check before re-enabling: Secure Boot enforces that loaded kernel modules are signed with a key trusted by the kernel's embedded keyring. If you have DKMS modules — drivers built against your specific kernel, common for NVIDIA, VirtualBox, ZFS, and others — verify they will load after Secure Boot is active. Most modern DKMS configurations handle signing automatically if the tools are set up correctly, but it's worth a test boot to confirm before you're relying on it.
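Before that test boot, you can audit the module tree for files that lack the kernel's appended-signature marker. The sketch below is an assumption-laden helper, not a distro tool: it relies on the literal `~Module signature appended~` trailer that the kernel's sign-file utility writes, and it assumes uncompressed `.ko` files (decompress `.ko.zst`/`.ko.xz` modules before checking).

```shell
#!/bin/sh
# Sketch: report kernel modules that carry no appended signature.
# Signed modules end with the 28-byte marker "~Module signature appended~\n".

is_signed() {
    # Read the last 28 bytes and look for the signature marker
    tail -c 28 "$1" | grep -q '~Module signature appended~'
}

check_tree() {
    # Walk a module tree and print every unsigned .ko file
    find "$1" -name '*.ko' 2>/dev/null | while read -r mod; do
        is_signed "$mod" || echo "UNSIGNED: $mod"
    done
}

# usage: check_tree "/lib/modules/$(uname -r)"
```

Anything this prints is a module that will be refused once signature enforcement is active.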
3.3 Custom MOK Keys — Taking Ownership of Your Trust Chain
The default Secure Boot configuration trusts Microsoft's Certificate Authority. This means your root of trust includes Microsoft as a third party whose keys you cannot audit, rotate, or revoke on your own machine. For most users running stock distro kernels, this is an acceptable tradeoff — the practical risk of Microsoft's CA being abused to compromise your specific boot chain is low. But for users building custom kernels, running a hardened distribution, or who simply want complete ownership of their trust chain, enrolling a Machine Owner Key (MOK) is the right path.
The MOK mechanism allows you to enroll your own signing certificate alongside the existing keys. Components you sign with your MOK key will be trusted by Secure Boot. You can optionally remove the Microsoft keys entirely, though this is an advanced step that requires you to sign everything yourself — including any future kernel updates.
Generating and enrolling a MOK:
```bash
# Generate a 4096-bit RSA key pair
openssl req -new -x509 -newkey rsa:4096 \
    -keyout /root/MOK.key -out /root/MOK.crt \
    -days 3650 -subj "/CN=My Machine Owner Key/" -nodes
# Convert to DER format for enrollment
openssl x509 -in /root/MOK.crt -outform DER -out /root/MOK.cer
# Request enrollment — triggers MOK manager on next boot
mokutil --import /root/MOK.cer
```
On the next boot, the MOK manager (a UEFI application that runs before the OS) will prompt you to confirm the enrollment with the password you set during the `mokutil --import` step. This prompt cannot be automated or bypassed remotely — it requires physical presence at the keyboard, which is intentional. After confirmation, your key is enrolled and you can sign kernels and modules with it.
Signing a kernel or module:
```bash
# Sign a kernel image
sbsign --key /root/MOK.key --cert /root/MOK.crt \
    --output /boot/vmlinuz-signed /boot/vmlinuz-6.x.x
# Sign a kernel module
/usr/src/linux-headers-$(uname -r)/scripts/sign-file \
    sha512 /root/MOK.key /root/MOK.crt \
    /path/to/module.ko
```
**Critical:** Store your MOK private key (`MOK.key`) securely and back it up offline. If you remove the Microsoft keys from your UEFI trust store and then lose your MOK private key, you will need to reset Secure Boot to factory defaults from the UEFI firmware interface. This is recoverable, but it requires physical access and a full re-enrollment. The private key should never live only on the encrypted volume it's protecting — that's a circular dependency. Keep it on an encrypted USB drive stored separately.
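Before archiving the key, it's worth confirming that the key and certificate you're backing up actually belong together. A small sketch using the standard openssl modulus comparison (`key_matches_cert` is a hypothetical helper name, and the paths are the ones generated above):

```shell
#!/bin/sh
# Sketch: confirm a private key and certificate are a matching pair
# by comparing their RSA moduli.
key_matches_cert() {
    k=$(openssl rsa  -noout -modulus -in "$1" 2>/dev/null)
    c=$(openssl x509 -noout -modulus -in "$2" 2>/dev/null)
    [ -n "$k" ] && [ "$k" = "$c" ]
}

# usage:
# key_matches_cert /root/MOK.key /root/MOK.crt && echo "pair OK"
```

Run this against the backup copy too, so you know the offline copy is the right key and not a stale one.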
3.4 TPM2 PCR Sealing — Binding Storage to Boot Integrity
With Secure Boot establishing that your bootloader and kernel are authentic, the next layer is ensuring that your encrypted storage will only unseal to a system in a verified state. This is where TPM2 PCR sealing comes in, and it's where the guarantee against evil maid attacks becomes concrete.
Understanding PCR indices
The TPM maintains 24 Platform Configuration Registers, each representing a different aspect of the boot process. The relevant ones for a Linux workstation are:
| PCR | What it measures |
|---|---|
| 0 | Core firmware executable code (UEFI) |
| 2 | Extended or pluggable firmware code |
| 4 | Boot manager code and boot attempts |
| 7 | Secure Boot state and enrolled keys |
| 9 | Kernel and initramfs (on some configurations) |
| 14 | MOK certificates (if using custom MOK) |
Sealing your LUKS key to PCRs 0+2+4+7 means the key will only be released if the firmware, bootloader, and Secure Boot state are all exactly as they were when the key was sealed. Adding PCR 14 extends this to cover your MOK certificate state.
The practical sealing workflow with systemd-cryptenroll:
systemd-cryptenroll is the correct tool for this on modern systemd-based distributions. It integrates cleanly with /etc/crypttab and handles the key derivation correctly.
```bash
# First, verify your LUKS volume is version 2
cryptsetup luksDump /dev/nvme0n1p3 | grep "Version:"
# If Version: 1, convert first:
# cryptsetup convert --type luks2 /dev/nvme0n1p3
# Back up the LUKS header before touching anything
cryptsetup luksHeaderBackup /dev/nvme0n1p3 \
--header-backup-file /external/luks_header_backup.img
# Record current PCR state as your reference baseline
tpm2_pcrread > /external/pcr_baseline_$(date +%Y%m%d).txt
# Enroll the TPM2 as a LUKS keyslot
systemd-cryptenroll --tpm2-device=auto \
--tpm2-pcrs=0+2+4+7 /dev/nvme0n1p3
# Verify the new keyslot appears
cryptsetup luksDump /dev/nvme0n1p3 | grep -A5 "systemd-tpm2"
```
After enrollment, update `/etc/crypttab` to tell the initramfs to attempt TPM2 unsealing before falling back to the passphrase:
```
# /etc/crypttab
luks-uuid UUID=your-uuid-here none tpm2-device=auto,discard
```
Regenerate your initramfs to pick up the change:
```bash
update-initramfs -u -k all    # Debian/Ubuntu
dracut --force                # Fedora/RHEL
```
Reboot and verify the drive unlocks without prompting for a passphrase. Then — critically — verify that the sealing is actually doing something. Make a trivial change to your kernel cmdline (add a space, change a parameter), reboot, and confirm the TPM refuses to unseal and falls back to the passphrase. If it unseals despite the cmdline change, the PCRs you sealed to don't cover that aspect of the boot state and you need to adjust which PCRs are included.
The passphrase is your recovery path — protect it accordingly. While you're working through this configuration, write the LUKS passphrase on paper and store it physically somewhere secure. Not in a password manager on the machine you're configuring. If the TPM refuses to unseal after a kernel update — which will happen the first time you update your kernel without re-sealing — you need that passphrase to get back in, re-seal to the new PCR values, and restore normal operation.
3.5 Re-sealing After System Updates
This is the operational reality that catches people off guard: every kernel update changes PCR values, which means the TPM will refuse to unseal after the update until you re-seal to the new state. This is correct and expected behavior — it means the system is working — but it requires a small operational procedure on every kernel update.
The workflow after a kernel update:
```bash
# 1. Boot using passphrase (TPM unsealing will fail with new kernel)
# 2. Verify the new kernel is running correctly
uname -r
# 3. Remove the old TPM2 keyslot
# Find the slot number first
cryptsetup luksDump /dev/nvme0n1p3 | grep -B2 "systemd-tpm2"
# Remove it (replace N with the keyslot number;
# --wipe-slot=tpm2 wipes all TPM2 keyslots at once)
systemd-cryptenroll --wipe-slot=N /dev/nvme0n1p3
# 4. Re-enroll with the current PCR state
systemd-cryptenroll --tpm2-device=auto \
    --tpm2-pcrs=0+2+4+7 /dev/nvme0n1p3
# 5. Test unsealing on next reboot
```
Some users automate this with a post-kernel-install hook. That's reasonable once you're confident in the process, but during the initial setup phase do it manually so you understand exactly what's happening at each step.
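When you do reach the automation stage, the hook can be as small as the sketch below. The install path (`/etc/kernel/postinst.d/` on Debian-family systems) is an assumption to verify against your distribution's hook mechanism, and note that re-enrollment will prompt for the existing passphrase unless you supply one of systemd-cryptenroll's `--unlock-*` options:

```shell
#!/bin/sh
# Sketch of a post-kernel-install re-seal hook.
# Uses --wipe-slot=tpm2, which removes all existing TPM2 keyslots
# without needing to look up slot numbers.
reseal() {
    dev="$1"
    systemd-cryptenroll --wipe-slot=tpm2 "$dev" || return 1
    systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=0+2+4+7 "$dev"
}

# usage: reseal /dev/nvme0n1p3
```

Keep the manual procedure in your notes regardless; the hook will not help when you're already locked out and booting from recovery media.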
3.6 Intel TXT and tboot — For Those Who Want the Full Stack
Intel Trusted Execution Technology provides a measured launch environment that runs before the bootloader, feeding PCR measurements from a hardware-verified starting point. Where standard measured boot starts measuring from the UEFI firmware (which is trusted but not independently verified), TXT uses a CPU-verified Authenticated Code Module (SINIT ACM) downloaded from Intel to establish a clean measurement root before anything else runs.
This is an advanced configuration that most users don't need, but it's worth knowing it exists. The practical requirements are: Intel TXT support in your CPU (check smx in your CPU flags — if it's present, TXT is supported), Intel TXT enabled in UEFI, the correct SINIT ACM for your specific CPU downloaded from Intel's website, and the tboot package installed and configured as the pre-bootloader stage.
The payoff is a more rigorous measurement chain where even a sophisticated firmware-level compromise would be detected by the PCR state. For the threat model described in this article — personal use, remote attacker focus — TXT is optional and the standard Secure Boot + TPM2 PCR sealing chain is sufficient. It's included here because if you're building Sable or a similar hardened distribution and want to close every practical gap, TXT is the next step beyond what's described above.
Section 4: Kernel Hardening — What the OS Can Enforce
The boot chain establishes that the software that started is the software you intended. The kernel is what runs everything after that — and on a Linux system, the kernel is both the most powerful component and the largest attack surface that an adversary who gets past the boot chain will attempt to exploit or leverage. A root-level compromise on a default Linux kernel is effectively a kernel compromise waiting to happen. The controls in this section change that equation.
Kernel hardening is not a single configuration. It is a set of overlapping enforcement mechanisms, each closing a different class of attack. Some prevent the kernel itself from being modified at runtime. Some restrict what even privileged processes can do. Some verify the integrity of every file before it executes. Others enforce that hardware resources are accessed only through sanctioned paths. Used together, they create a situation where a remote attacker who achieves root through a software vulnerability finds themselves in a substantially more constrained environment than they would on a default installation — one where the obvious escalation paths are blocked, the kernel itself is protected from modification, and the consequences of the initial compromise are limited by hard enforcement rather than policy suggestions.
The order in which you implement these matters. Some controls depend on others being in place. Some have footguns that will leave you with a non-booting system if you apply them without adequate preparation. We'll cover them in the correct sequence.
4.1 Kernel Lockdown Mode
Lockdown is the most broadly impactful single kernel security setting available to Linux users and the most underused. It is an LSM — a Linux Security Module — that restricts what the running kernel will allow even when the requesting process is root. The fundamental insight behind lockdown is that "root" is a software abstraction, and a sufficiently motivated attacker who achieves root through a software vulnerability should not automatically inherit the ability to modify kernel code and data structures. Root and "can modify the kernel" are different things, and lockdown enforces that distinction.
Lockdown has two modes:
**Integrity mode** is the correct starting point for most users. In integrity mode, lockdown blocks:
- Direct kernel memory access via `/dev/mem`, `/dev/kmem`, and `/proc/kcore` — the classic channels for reading or modifying live kernel memory from userspace
- Loading of unsigned kernel modules — this is complementary to module signing enforcement and provides an additional enforcement layer
- Hibernation — because a hibernation image contains a full copy of kernel memory, which could be analyzed offline or manipulated to bypass other controls
- kexec of unsigned kernels — prevents a root process from booting a different, potentially backdoored kernel without going through the verified boot chain
- Raw disk access that bypasses the filesystem layer
- MSR (Model-Specific Register) writes that could affect CPU security state
- PCMCIA and other legacy bus driver capabilities that could expose memory
The practical effect: a root process that achieves code execution on a lockdown-enabled system cannot escalate to kernel memory modification through the standard channels. The attack surface for turning root into kernel-level persistence is substantially smaller.
**Confidentiality mode** extends integrity mode with additional restrictions on reading kernel memory from userspace. This is appropriate for high-assurance deployments but may break some legitimate diagnostic tools. Start with integrity; move to confidentiality once the system is stable.
Enabling lockdown is a single kernel parameter:
```bash
# Add to kernel cmdline in /etc/default/grub
GRUB_CMDLINE_LINUX="... lockdown=integrity"
# Apply
update-grub                              # Debian/Ubuntu
grub2-mkconfig -o /boot/grub2/grub.cfg   # Fedora/RHEL
# Verify after reboot
cat /sys/kernel/security/lockdown
# Expected: none [integrity] confidentiality
```
Note that lockdown in integrity mode requires Secure Boot to be active on some kernel configurations — specifically, kernels built with CONFIG_SECURITY_LOCKDOWN_LSM_EARLY=y may enforce that lockdown can only be activated when Secure Boot is verified. This is intentional: lockdown without Secure Boot can be trivially bypassed by booting a kernel without lockdown enabled. The two controls are designed to work together.
Lockdown and legitimate tooling conflicts: Some tools that experienced Linux users routinely use will be affected by lockdown. perf with certain profiling modes, gdb kernel debugging, hibernation, and some virtualization configurations have reduced functionality under lockdown integrity mode. For a pen-testing machine, also be aware that some kernel-level debugging and tracing tools used in exploit development will be restricted. This is precisely what you want on a production system — the same capabilities that are useful for security research are useful for attackers. Reserve unrestricted kernel access for dedicated research environments with lower-assurance requirements.
4.2 Kernel Module Signing Enforcement
Kernel modules are code that loads directly into the kernel's address space and executes with full kernel privileges. An unsigned loadable kernel module is a kernel-level code execution primitive that requires no exploit — just insmod. On a default Linux system, root can load any module. Module signing enforcement changes this so that only modules cryptographically signed with a trusted key can be loaded.
The kernel's module signing infrastructure uses a key embedded in the kernel image at build time. This signing key is generated during the kernel build process and used to sign all in-tree modules. Out-of-tree modules built via DKMS need to be signed separately — and on distributions that support it, DKMS does this automatically if configured correctly.
Checking and enabling enforcement:
```bash
# Check whether signature enforcement is already active
cat /sys/module/module/parameters/sig_enforce
# Y = unsigned modules are rejected; N = they load but taint the kernel
# (the separate /proc/sys/kernel/modules_disabled switch, 0 or 1,
# disables module loading entirely for the remainder of the boot)
# Add to kernel cmdline for enforced module signing
# without fully disabling module loading
module.sig_enforce=1
# Check after reboot that unsigned modules fail explicitly
dmesg | grep "module verification"
```
The DKMS signing workflow is where most users hit friction. Modules built by DKMS — the Dynamic Kernel Module Support system that handles out-of-tree drivers — need to be signed with a key that the kernel trusts. On Ubuntu and Debian, if you've enrolled a MOK key, DKMS can be configured to use it automatically:
```bash
# Configure DKMS to use your MOK for signing
# Create /etc/dkms/framework.conf if it doesn't exist
echo 'sign_tool="/usr/lib/linux/dkms/sign_helper.sh"' >> /etc/dkms/framework.conf
# Or sign a specific module manually
/usr/src/linux-headers-$(uname -r)/scripts/sign-file \
    sha512 /root/MOK.key /root/MOK.crt \
    /lib/modules/$(uname -r)/updates/dkms/your-module.ko
```
**What breaks with module signing enforcement:** The common casualties are older NVIDIA proprietary drivers (use nvidia-open as discussed in Section 2), VirtualBox kernel modules (VirtualBox has DKMS-based signing support — ensure it's configured), and any custom out-of-tree driver you've built without signing infrastructure. Each of these has a legitimate path to compliance; the friction is in setup, not in fundamental incompatibility. Budget time to work through these before enabling enforcement in a production configuration.
4.3 IOMMU Strict Mode
We established in Section 2 that IOMMU prevents unauthorized DMA access from PCIe devices. What Section 2 didn't address is that IOMMU has two operational modes that have meaningfully different security properties.
Lazy mode (the default on most Linux configurations) batches IOMMU TLB invalidations for performance. When a DMA mapping is torn down, the IOMMU doesn't immediately flush the corresponding translation lookaside buffer entry — it waits and batches the flush with others. During the window between mapping teardown and TLB flush, a malicious device could theoretically still perform DMA to the now-unmapped region. This window is narrow and difficult to exploit in practice, but it exists.
Strict mode flushes IOMMU TLB entries immediately when a mapping is torn down. No window. The performance cost is real but modest on modern hardware — typically a few percent on I/O-heavy workloads, undetectable on most desktop use cases.
```bash
# Add to kernel cmdline
# Intel:
intel_iommu=on iommu.strict=1
# AMD:
amd_iommu=on iommu.strict=1
# Verify IOMMU is active and in strict mode after reboot
dmesg | grep -i "iommu" | grep -iE "enabled|strict|translated"
```
Beyond DMA protection, IOMMU enforcement has a specific relevance for pen testers: when you attach hardware to your machine on an untrusted engagement — external network adapters, USB devices, hardware implants you've recovered — IOMMU limits what those devices can do to your host memory even if the device itself is malicious or has been tampered with. A BadUSB device or a PCIe implant that attempts to DMA into host memory hits the IOMMU wall.
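To see the isolation boundaries the IOMMU is actually enforcing on your machine, list the groups that sysfs exposes: devices in the same group share a DMA domain and are not isolated from each other. A sketch (`list_iommu_groups` is a hypothetical helper; the base-path parameter exists only to make it testable):

```shell
#!/bin/sh
# Sketch: print one line per device, grouped by IOMMU group number.
list_iommu_groups() {
    base="${1:-/sys/kernel/iommu_groups}"
    for dev in "$base"/*/devices/*; do
        [ -e "$dev" ] || continue
        # .../iommu_groups/<N>/devices/<BDF> -> group N, device BDF
        group=$(basename "$(dirname "$(dirname "$dev")")")
        echo "group $group: $(basename "$dev")"
    done | sort -n -k2
}

# usage: list_iommu_groups
```

If a device you plan to attach on engagements lands in the same group as your NVMe controller or GPU, that is worth knowing before you trust the isolation.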
4.4 IMA/EVM — Runtime Integrity Verification
IMA (Integrity Measurement Architecture) and EVM (Extended Verification Module) together provide runtime file integrity enforcement — the ability to detect and block execution of tampered binaries, modified libraries, and altered kernel modules. They are among the most powerful and most misunderstood security mechanisms available in the Linux kernel.
What IMA does: At file open or execution time, IMA computes the file's hash and either records it in the TPM (measurement mode) or compares it against a previously stored expected value (appraisal mode). In appraisal mode, a file whose hash doesn't match its stored signature will not execute. Period. A tampered binary — one modified after installation, whether by a malicious package update, a supply chain compromise, or an attacker who achieved write access — fails the IMA check and is blocked.
What EVM does: IMA stores file signatures in extended attributes (security.ima). Without EVM, an attacker who can write to a file's extended attributes could simply update the stored signature to match their tampered binary. EVM prevents this by protecting extended attributes with an HMAC keyed to a secret stored in the TPM — tampering with the attributes invalidates the HMAC, and EVM blocks execution.
Together they form a complete chain: IMA measures content, EVM protects the measurements.
The implementation sequence — order is critical:
The most common way to brick a system with IMA/EVM is to enable enforcement before the measurement database is built. Files that have never been measured have no stored signature; in enforce mode, that means they can't execute, which means nothing works.
```bash
# Step 1: Enable IMA in measurement mode first
# Add to kernel cmdline:
# ima_appraise=fix ima_policy=tcb ima_hash=sha256
# 'fix' mode measures files and stores signatures but does NOT enforce
# 'tcb' policy covers executables, kernel modules, and firmware
# Let the system run through its normal workload for at least one full session
# Step 2: Verify the measurement log is being populated
cat /sys/kernel/security/ima/ascii_runtime_measurements | head -20
# Should show growing list of measured files with their hashes
# Step 3: Check for any files that lack signatures
find / -fstype ext4 -not -path "/proc/*" -not -path "/sys/*" \
-exec getfattr -n security.ima {} \; 2>/dev/null | grep -c "security.ima"
# The count should be growing as files are executed
# Step 4: Once you've verified the database looks complete,
# transition to enforcement
# Change kernel cmdline:
# ima_appraise=enforce ima_policy=tcb
```
**IMA policy configuration** is where you tune what gets measured. The `tcb` (Trusted Computing Base) policy is a reasonable starting point that covers executables and kernel modules. A more targeted policy for a pen-testing machine might add measurement of script interpreters and their scripts:
```
# /etc/ima/ima-policy
# Measure all executable file opens
measure func=BPRM_CHECK
# Appraise kernel modules
appraise func=MODULE_CHECK appraise_type=imasig
# Appraise firmware
appraise func=FIRMWARE_CHECK appraise_type=imasig
# Appraise executables
appraise func=BPRM_CHECK appraise_type=imasig
# Don't measure/appraise tmpfs (runtime filesystems)
dont_measure fsmagic=0x01021994
dont_appraise fsmagic=0x01021994
```
EVM initialization:
```bash
# Step 1: Generate EVM key and load into kernel keyring
# This is typically handled by systemd via /etc/keys/evm-hmac.key
# or manually:
keyctl add encrypted evm-key "new default user:kmk 32" @u
# Step 2: Start EVM in fix mode (builds HMAC database)
echo "1" > /sys/kernel/security/evm
# Step 3: After database is built, enforce:
echo "2" > /sys/kernel/security/evm
# Or enforce at boot via kernel cmdline: evm=enforce
```
Be aware that EVM in enforcing mode with an incomplete attribute database is one of the more reliable ways to render a system unbootable. Have your recovery USB ready and your passphrase written down before enabling enforcement. The payoff is substantial — a system running IMA/EVM in enforce mode with a TPM-backed EVM key is resistant to binary tampering at a level that no userspace security tool can provide.
4.5 Intel CET in Practice
We covered CET's capabilities in Section 2. Here's what enabling it on a running system actually looks like.
The good news: if you're running a kernel 5.18 or newer (which includes anything shipped by Ubuntu 22.04 and later, Fedora 36 and later) with the appropriate kernel config — CONFIG_X86_USER_IBT=y and CONFIG_X86_SHADOW_STACK=y — CET is already active for any userspace binary that was compiled with CET support. You don't need to add a kernel parameter. The kernel enables IBT and Shadow Stack automatically for each process based on whether the binary's ELF notes indicate CET support.
Check if a binary is CET-enabled:
```bash
# Check for CET-aware compilation
readelf -n /usr/bin/bash | grep -i "cet\|property"
# Check if the kernel has CET support compiled in
grep -E "CONFIG_X86_USER_IBT|CONFIG_X86_SHADOW_STACK" /boot/config-$(uname -r)
# Check CET status for a running process
grep -i "x86" /proc/self/status
```
For system libraries — particularly glibc, libssl, and the other high-value targets in an exploit chain — CET coverage on glibc 2.35+ (Ubuntu 22.04, Fedora 36, Arch from mid-2022 onward) means the standard library code that most exploits need to call through is protected. A ROP chain that depends on gadgets inside glibc will fail with a control protection fault.
The practical gap is third-party and proprietary binaries. Closed-source applications and older tooling compiled before CET support became standard are not protected by IBT or Shadow Stack, even on a kernel that fully supports CET. For pen-test frameworks like Metasploit and custom exploit tooling, this is expected and acceptable — you generally don't want your offense tools constrained by defensive hardening. For production user-facing applications, check whether the vendor ships CET-enabled builds and file a bug if they don't.
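Auditing that gap is straightforward to script: the ELF notes that `readelf -n` prints name the `IBT` and `SHSTK` properties, so a small parser can flag binaries that lack them. A sketch (`parse_cet` is a hypothetical helper name):

```shell
#!/bin/sh
# Sketch: extract the CET-related property flags from readelf -n output.
# Prints "IBT SHSTK", a subset, or nothing for a non-CET binary.
parse_cet() {
    grep -oE 'IBT|SHSTK' | sort -u | paste -sd' ' -
}

# usage:
# for b in /usr/bin/*; do
#     f=$(readelf -n "$b" 2>/dev/null | parse_cet)
#     [ -z "$f" ] && echo "no CET: $b"
# done
```

Running the loop over `/usr/bin` gives you a concrete inventory of which installed binaries fall outside CET's protection.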
4.6 The LSM Stack — AppArmor, Landlock, and Yama
The Linux Security Module framework allows multiple security modules to run simultaneously, each making independent allow/deny decisions on security-relevant operations. A modern hardened system should be running several of these in combination. They are complementary, not redundant.
Yama provides process-scope restrictions that tighten ptrace behavior — the mechanism by which one process can inspect and modify the memory of another. By default on Linux, a process with appropriate privileges can ptrace any other process it has permission to access. Yama's ptrace_scope setting restricts this:
```bash
# Check current ptrace scope
cat /proc/sys/kernel/yama/ptrace_scope
# Scope levels:
# 0 = default, limited only by DAC permissions (no Yama restriction)
# 1 = restricted: process can only ptrace its own children
# 2 = admin-only: only processes with CAP_SYS_PTRACE can ptrace
# 3 = no ptrace: disabled entirely (breaks debuggers)
# Set scope 2 persistently
echo "kernel.yama.ptrace_scope = 2" >> /etc/sysctl.d/99-security.conf
sysctl --system
```
Scope 2 is the reasonable choice for most security-focused workstations. It breaks some debugging workflows where you'd attach a debugger to an already-running process — `gdb -p PID` stops working — but it means a compromised process cannot trivially inspect the memory of a credential manager, key storage daemon, or other sensitive process running alongside it.
AppArmor provides mandatory access control through per-application profiles that specify exactly what files, capabilities, and network access each confined process is permitted. Ubuntu ships with a substantial library of pre-written profiles covering common applications. Profiles operate in either complain mode (logs violations but doesn't block) or enforce mode (blocks).
For the use cases in this article, the highest-value AppArmor profiles are those covering network-facing applications — web browsers, email clients, any service that listens on a network port. The practical workflow for deploying a profile for custom tooling:
```bash
# Check available profiles and their status
aa-status
# Switch a profile from complain to enforce
aa-enforce /etc/apparmor.d/usr.bin.your-application
# Generate a new profile for an application using aa-genprof
aa-genprof /usr/bin/your-application
# Run the application through its normal workload while aa-genprof watches
# Then finalize the profile
# Reload all profiles
systemctl reload apparmor
```
For pen testers specifically: running your test frameworks in AppArmor complain mode during engagements and reviewing the logs afterward is a useful way to understand exactly what filesystem and network access your tools are actually using. That audit then informs a more constrained profile for production use.
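For that log review, AppArmor's audit lines are regular enough to condense mechanically. A sketch (`aa_summary` is a hypothetical helper; it assumes the standard `apparmor="ALLOWED" operation=... profile=... name=...` audit format):

```shell
#!/bin/sh
# Sketch: condense complain-mode AppArmor events into unique
# "profile operation name" triples for profile tuning.
aa_summary() {
    sed -n 's/.*apparmor="ALLOWED".*operation="\([^"]*\)".*profile="\([^"]*\)".*name="\([^"]*\)".*/\2 \1 \3/p' | sort -u
}

# usage: journalctl -b -k | aa_summary
```

Each output line is a candidate rule for the eventual enforce-mode profile.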
Landlock is the newest addition to the Linux LSM stack (kernel 5.13+) and has a different model from AppArmor: it allows a process to restrict its own filesystem access without requiring root or predefined profiles. A process can apply Landlock rules to itself, limiting what parts of the filesystem it can read or write — and these rules cannot be loosened once applied, even by the process itself or its children.
The practical application: a network-facing daemon or custom tool that handles untrusted input can confine itself to exactly the directories it legitimately needs to access. An attacker who achieves code execution inside that process inherits the Landlock restriction and cannot access the rest of the filesystem. This is an excellent hardening primitive for security tooling you write yourself:
```c
/* Minimal Landlock sketch (kernel 5.13+). The kernel exposes these as
 * raw syscalls — wrap them with syscall(SYS_landlock_create_ruleset, ...)
 * etc., and call prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) first, which
 * landlock_restrict_self() requires. */
struct landlock_ruleset_attr attr = {
    .handled_access_fs = LANDLOCK_ACCESS_FS_READ_FILE |
                         LANDLOCK_ACCESS_FS_WRITE_FILE,
};
int ruleset_fd = landlock_create_ruleset(&attr, sizeof(attr), 0);
/* Add rules for the specific allowed paths with
 * landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH, ...), then: */
landlock_restrict_self(ruleset_fd, 0);
```
Python and Rust bindings exist for Landlock as well — it doesn't require writing C. For any long-running process you deploy on a hardened system that handles external input, adding Landlock self-restriction is a relatively low-effort improvement with meaningful blast-radius reduction.
4.7 Putting It Together — The Kernel Hardening cmdline
A consolidated view of the kernel parameters discussed in this section, suitable as a starting point for /etc/default/grub:
```bash
# Kernel lockdown
lockdown=integrity
# Module signing enforcement
module.sig_enforce=1
# IOMMU (Intel — substitute amd_iommu for AMD)
intel_iommu=on iommu.strict=1
# IMA/EVM (start with fix mode, move to enforce once stable)
ima_appraise=fix ima_policy=tcb ima_hash=sha256
# Spectre/Meltdown mitigations — ensure these are not disabled
# (default on modern kernels, but verify they haven't been turned off
# for performance in a prior configuration)
spectre_v2=on spec_store_bypass_disable=prctl
# Disable USB autosuspend (optional; unrelated to the mitigations above)
usbcore.autosuspend=-1
# Restrict dmesg to root (applied as a sysctl at boot, kernel 5.8+)
sysctl.kernel.dmesg_restrict=1
```
After editing `/etc/default/grub`, always run `update-grub` and verify the parameters appear correctly in `/proc/cmdline` after the next boot before assuming they're active.
Section 5: Storage Security — Keys, LUKS, and What Lives Where
Encrypted storage is the control most Linux users have actually implemented. LUKS is well-documented, distribution installers offer it as a checkbox option, and the basic concept — data on disk is ciphertext without the key — is intuitive. But the default LUKS configuration most users end up with after a standard install leaves significant security value on the table, and more importantly, it rests on assumptions about key management that don't survive serious scrutiny.
The central problem is this: LUKS encryption is only as strong as the protection on the key used to unlock it. A LUKS volume protected by a passphrase alone is vulnerable to offline brute-force attack the moment the drive is physically extracted. A LUKS volume unsealed by a TPM-bound key tied to a verified boot state is a fundamentally different proposition — the key doesn't exist independently of the hardware and software state that earned it. Understanding the difference, and closing the gap, is what this section is about.
5.1 LUKS2 — The Required Baseline
LUKS1 is the older format and is still present on many systems that were installed several years ago or migrated from earlier distributions. LUKS2 is required for TPM2 integration via systemd-cryptenroll, supports better key derivation functions, and has a more robust header structure. If you're running LUKS1, migration is the first step.
Check your current format:
```bash
cryptsetup luksDump /dev/nvme0n1p3 | grep "Version:"
```
If it returns Version: 1, convert before proceeding with anything else in this section:
```bash
# Back up the header first — always
cryptsetup luksHeaderBackup /dev/nvme0n1p3 \
    --header-backup-file /external/luks1_header_before_convert.img
# Convert in-place — does not touch data, only header
# The volume must be unmounted (run from live USB if converting root)
cryptsetup convert --type luks2 /dev/nvme0n1p3
# Verify
cryptsetup luksDump /dev/nvme0n1p3 | grep "Version:"
# Expected: Version: 2
```
Key derivation function: LUKS2 defaults to Argon2id for the passphrase-based key slot, which is the correct choice — Argon2id is memory-hard, meaning brute-force attacks require substantial RAM per guess, which limits the parallelism available to an attacker with GPU hardware. Verify it's configured:
```bash
cryptsetup luksDump /dev/nvme0n1p3 | grep -A5 "PBKDF:"
# Expected: PBKDF: argon2id
# If it shows pbkdf2, the keyslot predates LUKS2 defaults
# and should be re-enrolled
```
If you see pbkdf2 on a keyslot, re-enroll the passphrase with Argon2id:
```bash
# Add a new passphrase keyslot with Argon2id
cryptsetup luksAddKey --pbkdf argon2id /dev/nvme0n1p3
# Verify the new slot, then remove the old pbkdf2 slot
# (identify slot numbers from luksDump output first)
cryptsetup luksKillSlot /dev/nvme0n1p3 OLD_SLOT_NUMBER
```
Tuning Argon2id's memory cost upward is worth doing on a machine with abundant RAM. The default memory parameter is conservative for compatibility. On a machine with 32GB RAM, you can afford to make offline brute-force dramatically more expensive:
```bash
# Increase Argon2id memory cost during key enrollment
# (--pbkdf-memory is in KiB: 1048576 KiB = 1 GiB per hash attempt)
cryptsetup luksAddKey --pbkdf argon2id \
    --pbkdf-memory 1048576 \
    --pbkdf-parallel 4 \
    /dev/nvme0n1p3
```
An attacker with a GPU farm who extracts your drive cannot run millions of parallel guesses against a keyslot that requires a gigabyte of RAM per attempt. The parameter is stored in the LUKS header and applied automatically during unlock — you don't notice it because unlocking is a one-time operation; an attacker trying to brute-force notices it very much.
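Auditing keyslots by eye in luksDump output is error-prone. A small helper can flag any keyslot still on pbkdf2 — a sketch that parses `cryptsetup luksDump` output on stdin (the exact dump layout varies between cryptsetup versions, so treat the parsing as an assumption to verify against your own output):

```shell
#!/bin/sh
# flag_pbkdf2_slots: read `cryptsetup luksDump` output on stdin and
# print the number of every keyslot whose PBKDF is pbkdf2.
# Usage: cryptsetup luksDump /dev/nvme0n1p3 | flag_pbkdf2_slots
flag_pbkdf2_slots() {
    awk '
        # a keyslot stanza starts with a line like "  0: luks2"
        /^[[:space:]]+[0-9]+: luks2/ { slot = $1; sub(":", "", slot) }
        # inside a stanza, the PBKDF line names the KDF for that slot
        /PBKDF:[[:space:]]*pbkdf2/ { print slot }
    '
}
```

An empty result means every passphrase keyslot is already on Argon2id.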
5.2 The LUKS Header — Your Most Critical Backup
Before discussing TPM sealing, key rotation, or any modification to your LUKS configuration, this point needs to be stated plainly: the LUKS header is the single most catastrophic thing you can lose or corrupt. The header contains the encrypted master key material, the keyslot definitions, and all the metadata needed to decrypt the volume. Without a valid header, your encrypted data is unrecoverable regardless of whether you know the passphrase, the key, or anything else about the volume. There is no recovery path from a corrupted or missing LUKS header without a backup.
This is not theoretical. LUKS header corruption happens from failed conversions, botched key operations, filesystem errors on the device, accidental dd operations with wrong argument order, and various other mundane mishaps. It happens to careful people.
```bash
# Back up the LUKS header to an external location
# Do this before any LUKS operation, including the first TPM enrollment
cryptsetup luksHeaderBackup /dev/nvme0n1p3 \
    --header-backup-file /external/luks_header_$(date +%Y%m%d).img
# This file is small (typically a few MB) and contains everything
# needed to restore a corrupted header
# Verify the backup is valid (the image begins with a LUKS header,
# so isLuks can check it directly)
cryptsetup isLuks /external/luks_header_$(date +%Y%m%d).img && echo "Header backup valid"
# Store it somewhere physically separate from the drive it backs up
# An encrypted USB drive or a separate machine are appropriate locations
```
Restore procedure if the header is ever corrupted:
```bash
# Restore header from backup
cryptsetup luksHeaderRestore /dev/nvme0n1p3 \
    --header-backup-file /external/luks_header_backup.img
```
Keep multiple dated copies. Update the backup after any keyslot modification — TPM enrollment, passphrase change, or keyslot removal all modify the header.
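Because header backups are small and change rarely, it costs little to keep a checksum manifest alongside them and verify it periodically — a silently bit-rotted backup is discovered at restore time, which is the worst possible moment. A minimal sketch; the /external path and manifest name are illustrative, not a convention of any tool:

```shell
#!/bin/sh
# Maintain and verify a sha256 manifest for LUKS header backups.
# Paths are illustrative — point these at wherever your backups live.
update_manifest() {   # update_manifest /external
    dir="$1"
    ( cd "$dir" && sha256sum luks_header_*.img > manifest.sha256 )
}
verify_manifest() {   # verify_manifest /external — non-zero exit on mismatch
    dir="$1"
    ( cd "$dir" && sha256sum --check --quiet manifest.sha256 )
}
```

Run update_manifest after every header backup and verify_manifest from a periodic timer or before any risky LUKS operation.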
5.3 TPM2 PCR Sealing — The Full Operational Picture
Section 3 covered the mechanics of TPM2 PCR sealing in the context of the boot chain. This section addresses the storage-side details and operational realities that determine whether the configuration actually holds up over time.
The keyslot architecture on a well-configured LUKS2 volume should look like this:
```
Keyslot 0: Passphrase (Argon2id, high memory cost) — recovery path
Keyslot 1: TPM2-sealed key (systemd-cryptenroll) — operational path
```
The passphrase slot is your recovery path. It should exist, it should be strong, and it should be written down and stored physically offline — not in a password manager on the same machine. The TPM slot is your operational path: the machine unseals it automatically on verified boot. You should never need the passphrase during normal operation; its job is to get you back in when the TPM can't, which happens during kernel updates, BIOS updates that change PCR values, and recovery scenarios.
Verifying the keyslot configuration after enrollment:
```bash
cryptsetup luksDump /dev/nvme0n1p3 | grep -E "Keyslot|Type:|State:"
# Should show:
# Keyslot 0: ENABLED (luks2 type, argon2id PBKDF)
# Keyslot 1: ENABLED (systemd-tpm2 token)
```
What changes PCR values and breaks automatic unsealing:
Understanding this list is essential for maintaining a TPM-sealed system without constantly being locked out:
| Event | PCRs affected | Action required |
|---|---|---|
| Kernel update | PCR 4, possibly PCR 9 | Re-seal after booting with passphrase |
| GRUB update | PCR 4 | Re-seal |
| BIOS/UEFI firmware update | PCR 0, PCR 2 | Re-seal |
| Secure Boot key changes | PCR 7 | Re-seal |
| MOK changes | PCR 14 | Re-seal (if PCR 14 included) |
| Kernel cmdline changes | PCR 4 | Re-seal |
| initramfs rebuild | PCR 9 | Re-seal (if PCR 9 included) |
Note that PCR 9 is not included in the recommended default sealing policy (0+2+4+7) precisely because initramfs rebuilds are frequent — every kernel update triggers one — and including PCR 9 would require re-sealing every time. The tradeoff is that a tampered initramfs would not be caught by the PCR policy. If you want initramfs verification, include PCR 9 and accept the additional re-sealing step on every update.
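The table above is exactly the kind of thing you don't want to reconstruct from memory mid-update. It can be encoded in a few lines of shell — a sketch for the recommended 0+2+4+7 policy; the event names are this article's informal labels, not any tool's output:

```shell
#!/bin/sh
# needs_reseal EVENT — for a 0+2+4+7 sealing policy, print whether the
# named maintenance event requires re-enrolling the TPM2 keyslot.
# Event labels are informal shorthands for the rows of the table above.
needs_reseal() {
    case "$1" in
        kernel-update|grub-update|firmware-update|sb-key-change|cmdline-change)
            echo "re-seal" ;;
        initramfs-rebuild|mok-change)
            echo "no-action" ;;   # PCR 9/14 not in the 0+2+4+7 policy
        *)
            echo "unknown" ;;
    esac
}
```

If you seal against additional PCRs (9 or 14), move the corresponding events into the re-seal branch.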
Automating re-sealing with a systemd oneshot service is reasonable once you fully understand the manual procedure. The service runs after a successful boot with passphrase input, verifies the system is in a good state, and re-enrolls the TPM keyslot:
```ini
# /etc/systemd/system/tpm2-reseal.service
[Unit]
Description=Re-seal LUKS TPM2 keyslot after system update
ConditionPathExists=/run/tpm2-reseal-needed
After=cryptsetup.target

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/tpm2-reseal.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```
The script itself checks that the system booted cleanly, removes the old TPM keyslot, and re-enrolls with current PCR values. Build this only after you've done the manual procedure enough times to understand every step — automation that breaks in a recovery scenario you don't understand is worse than no automation.
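The unit above needs a script behind it. A sketch of what /usr/local/sbin/tpm2-reseal.sh could look like — the device path, PCR list, flag file, and the --dry-run convenience are this article's assumptions, not a packaged tool:

```shell
#!/bin/sh
# Sketch of /usr/local/sbin/tpm2-reseal.sh — illustrative, not canonical.
set -u

# The flag file is touched by your update hooks when a re-seal is needed.
needs_run() { [ -e /run/tpm2-reseal-needed ]; }

reseal() {  # reseal DEVICE [--dry-run]
    dev="$1"
    if [ "${2:-}" = "--dry-run" ]; then
        echo "would wipe tpm2 slot and re-enroll PCRs 0+2+4+7 on $dev"
        return 0
    fi
    # Drop the stale TPM2 keyslot, then enroll against current PCR values.
    # systemd-cryptenroll will prompt for the recovery passphrase.
    systemd-cryptenroll --wipe-slot=tpm2 "$dev" &&
    systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=0+2+4+7 "$dev"
}

# Wiring for the systemd unit: act only when flagged, and clear the
# flag only after a successful re-enrollment.
if needs_run; then
    reseal /dev/nvme0n1p3 && rm -f /run/tpm2-reseal-needed
fi
```

Test it with --dry-run and a deliberate flag file before trusting it to run unattended.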
5.4 Key Material in Memory — The Honest Problem and Practical Mitigations
Without hardware memory encryption (TME on Intel, SME on AMD), encryption keys that are in active use exist in RAM in plaintext. The LUKS volume key, once unsealed, lives in the kernel's keyring in cleartext. TLS session keys in your browser exist in process memory. SSH private keys loaded into an agent are in memory. A kernel-level attacker can read all of it.
This is the ceiling we identified in Section 1, and it's real. What we can do is minimize the window and the surface:
Disable swap or encrypt it. An unencrypted swap partition is a persistent copy of whatever memory pages the kernel decided to page out — potentially including key material, session tokens, and sensitive data. Options:
```bash
# Option 1: Disable swap entirely (viable with 32GB RAM)
swapoff -a
# Remove swap entries from /etc/fstab

# Option 2: Encrypted swap with ephemeral key
# systemd can manage this — each boot generates a fresh random key
# /etc/crypttab entry:
# swap /dev/sdXY /dev/urandom swap,cipher=aes-xts-plain64,size=256

# Option 3: zram (compressed RAM-backed swap — no disk exposure)
# Available in most modern distributions, nothing written to persistent storage
```
Disable hibernation. Hibernation writes a complete copy of RAM — including all in-memory key material — to disk. Lockdown integrity mode disables hibernation as a side effect. If you're not using lockdown, disable it explicitly:
```bash
# Prevent hibernation via kernel parameter
nohibernate
# Or via systemd
systemctl mask hibernate.target hybrid-sleep.target suspend-then-hibernate.target
```
Memory Protection Keys for sensitive processes. As discussed in Section 2, PKU allows a process to place key material in a protected memory domain that is inaccessible except during explicit crypto operations. This doesn't prevent a kernel-level attacker from accessing it — kernel code runs above PKU enforcement — but it prevents a userspace attacker who achieves code execution in the same process from simply reading the key out of memory without triggering the PKU-enabled protection. Modern OpenSSL (3.0+) and some HSM libraries use PKU when available.
Minimize key lifetime in memory. The best key is one that isn't in memory when it's not needed. Use PKCS#11 or kernel keyring APIs to keep key material in the kernel's credential store rather than in userspace process memory:
```bash
# Store a key in the kernel keyring (kept out of process memory and swap;
# note that a "user" type key can still be read back by its possessor)
keyctl add user my-key-label "$(cat /path/to/keyfile)" @u
# Reference it by keyring ID in applications that support kernel keyring integration
# SSH, LUKS, and various security frameworks support this
```
Use the kernel's trusted key type for the most sensitive material — keys of type trusted are generated inside the TPM and never exist in plaintext outside it:
```bash
# Create a TPM-resident trusted key
keyctl add trusted my-trusted-key "new 32" @u
# The key material never appears in plaintext userspace memory
# Operations using it are performed with the plaintext inside the kernel
```
5.5 External Drives and Portable Storage
External drives warrant specific attention because they move — they can be separated from the TPM that protects the primary volume, handed to others, lost, or stolen. The LUKS configuration appropriate for a stationary NVMe protected by a hardware-bound TPM key is different from what makes sense for a portable 4TB backup drive.
For external drives that stay with the machine (plugged in at a known-good workstation, used for backup or overflow storage), TPM2 sealing is appropriate:
```bash
# Enroll TPM2 for external drive LUKS volume
systemd-cryptenroll --tpm2-device=auto \
    --tpm2-pcrs=0+2+4+7 /dev/sda1
# Add to /etc/crypttab for automatic unlock when the machine is in a verified state:
# external-drive UUID=your-uuid none tpm2-device=auto
```
For portable external drives that may be used on other machines or that represent a theft risk, TPM sealing is the wrong model — the TPM is machine-specific and the drive becomes permanently inaccessible on any other machine without the passphrase. Use a strong passphrase with high Argon2id memory cost, stored in a hardware security key (YubiKey or similar FIDO2 device) if you want two-factor protection:
```bash
# Enroll a FIDO2 key (YubiKey, etc.) as a LUKS keyslot
systemd-cryptenroll --fido2-device=auto /dev/sda1
# User must physically touch the hardware key to unlock
# Works on any machine with the physical key present
```
For backup drives specifically: consider a separate LUKS key for backup volumes, derived from a strong passphrase stored in offline cold storage (printed, in a physically secured location). Your backup is your recovery tool when everything else has failed — it should be accessible even if your primary machine and all its keys are gone.
5.6 The Storage Security Checklist
Before leaving this section, a consolidated verification:
```bash
# Verify LUKS2 format
cryptsetup luksDump /dev/nvme0n1p3 | grep "Version:"
# Verify Argon2id on passphrase keyslot
cryptsetup luksDump /dev/nvme0n1p3 | grep -A3 "PBKDF:"
# Verify TPM2 keyslot enrolled
cryptsetup luksDump /dev/nvme0n1p3 | grep "systemd-tpm2"
# Verify LUKS header backup exists and is current
ls -la /external/luks_header_*.img
# Verify swap handling
cat /proc/swaps
swapon --show
# Verify hibernation disabled
systemctl status hibernate.target | grep -i "masked\|disabled"
# Verify kernel keyring accessible
keyctl show @u
```
Section 6: Network Isolation — Namespaces, WireGuard, and Trust Boundaries
Encrypted storage and a hardened kernel protect you when an attacker has achieved local access. Network isolation is the layer that limits what they can do before that — and equally importantly, what a compromised process can do after it has code execution on your machine. These are different problems that the same toolset addresses.
The conventional approach to Linux network security is a firewall — a set of rules that permit or deny traffic based on port, protocol, and address. Firewalls are necessary but insufficient. They operate on the assumption that processes on the same machine are trustworthy peers that should share network access. On a machine running a browser, a VPN client, a BOINC workload, Docker containers, and penetration testing frameworks simultaneously, that assumption is wrong. Different processes have different network access requirements, different trust levels, and different risk profiles. A browser exploit that achieves code execution in your browser process should not inherit the network access of your VPN-connected pen-test tooling, your BOINC worker, or your local services. The architecture that enforces this is network namespaces.
6.1 Network Namespaces as a Security Primitive
Most Linux users encounter network namespaces in the context of containers — Docker, LXC, and similar tools use them to give each container its own isolated network stack. But namespaces are available directly to any user with appropriate privileges and are one of the most powerful network security primitives the Linux kernel provides.
A network namespace is a complete, independent instance of the Linux network stack. It has its own interfaces, its own routing table, its own iptables/nftables rules, and its own socket table. A process running in a network namespace has no visibility into or connectivity with interfaces in other namespaces unless you explicitly create that connectivity. A process in a namespace with no external interface has loopback only — it literally cannot make outbound connections regardless of what code it executes.
Creating and using isolated namespaces:
```bash
# Create a namespace with no external connectivity
ip netns add isolated
ip netns exec isolated ip link set lo up
# Verify — this process can only reach loopback
ip netns exec isolated ping 8.8.8.8
# Expected: Network unreachable
# Run a process in the isolated namespace
ip netns exec isolated /bin/bash
# Everything in this shell has loopback only
```
Creating a namespace with WireGuard-only connectivity — all traffic must go through the VPN tunnel, no direct internet access:
```bash
# Create a namespace for VPN-confined traffic
ip netns add vpn-only
# Create a veth pair connecting root namespace to vpn-only namespace
ip link add veth-root type veth peer name veth-vpn
ip link set veth-vpn netns vpn-only
# Assign addresses
ip addr add 10.200.0.1/24 dev veth-root
ip link set veth-root up
ip netns exec vpn-only ip addr add 10.200.0.2/24 dev veth-vpn
ip netns exec vpn-only ip link set veth-vpn up
# Set default route in vpn-only to go through the veth pair
ip netns exec vpn-only ip route add default via 10.200.0.1
# Now move your WireGuard interface into the vpn-only namespace
# or configure routing so vpn-only traffic exits via WireGuard only
```
The practical security patterns worth implementing on a high-exposure machine:
A key management process handling LUKS keys, SSH agent material, or credential storage runs in a namespace with no network interface at all. Code execution in that process gains nothing from a network perspective — there is no path to exfiltrate keys over the network because no network stack exists in that namespace.
A red-team tooling namespace has connectivity only through a WireGuard tunnel to a known-good endpoint. Tools running there cannot reach your local network resources, your BOINC workers, or your personal data shares. An exploit in something you're testing cannot pivot to your local network.
A general browsing namespace has full internet connectivity but no access to local network services. A browser exploit achieves connectivity to the internet but cannot reach your internal services, your Docker network, or anything on your LAN.
This is not theoretical — it is how serious multi-tenant security infrastructure is designed, and there is no reason the same principles cannot be applied to a personal workstation.
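One wiring detail the veth example above leaves implicit: nothing flows until the root namespace forwards and NATs the namespace's traffic. A sketch of the persistent configuration, assuming the 10.200.0.0/24 addressing used above and a WireGuard exit via wg0 — the table and file names are illustrative:

```
# /etc/sysctl.d/99-netns-forward.conf
net.ipv4.ip_forward = 1

# Appended to /etc/nftables.conf — NAT the namespace subnet out via wg0
table ip netns-nat {
    chain postrouting {
        type nat hook postrouting priority 100;
        ip saddr 10.200.0.0/24 oifname "wg0" masquerade
    }
}
```

If you also run the default-deny forward chain shown later in this section, remember to add a matching accept rule for this subnet there, or the forward hook will drop the traffic before it reaches NAT.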
6.2 WireGuard as the Network Perimeter
WireGuard has become the correct default VPN choice for Linux power users for reasons that go beyond performance. Its cryptographic model is modern and conservative — Curve25519 for key exchange, ChaCha20-Poly1305 for symmetric encryption, BLAKE2s for hashing — and its codebase is approximately 4,000 lines compared to OpenVPN's hundreds of thousands. Smaller codebases have smaller attack surfaces. WireGuard runs in the kernel as a first-class network driver, not as userspace software that proxies packets through the kernel.
For a pen-testing machine or a privacy-focused workstation, WireGuard's key property is cryptographic identity. Every peer is identified by its public key. There are no certificates, no CAs, no revocation lists. You control the key material entirely. If a peer key is compromised, you replace it. The simplicity is a security feature.
Basic WireGuard configuration for a client workstation:
```ini
# /etc/wireguard/wg0.conf
[Interface]
PrivateKey = <your-private-key>
Address = 10.0.0.4/24
# Point DNS to a trusted resolver through the tunnel
DNS = 10.0.0.1

[Peer]
PublicKey = <server-public-key>
Endpoint = your.server.address:51820
# Route all traffic through the tunnel
AllowedIPs = 0.0.0.0/0, ::/0
PersistentKeepalive = 25
```
AllowedIPs = 0.0.0.0/0 is the kill-switch configuration — all traffic routes through the tunnel. If the tunnel drops, traffic stops rather than falling back to the unencrypted path. This is the correct setting for a security-focused configuration.
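wg-quick's own routing rules blackhole most leaks when AllowedIPs covers everything, but you can make the kill-switch explicit with a firewall backstop. A sketch of PostUp/PreDown hooks that drop any outbound packet not leaving via the tunnel — assumes nftables; the table name is illustrative:

```ini
# Additions to the [Interface] section of wg0.conf
PostUp = nft add table inet killswitch
PostUp = nft add chain inet killswitch output '{ type filter hook output priority 0; policy drop; }'
PostUp = nft add rule inet killswitch output oifname "wg0" accept
PostUp = nft add rule inet killswitch output oifname "lo" accept
PostUp = nft add rule inet killswitch output udp dport 51820 accept
PreDown = nft delete table inet killswitch
```

The udp dport 51820 rule is what lets the tunnel's own handshake packets out through the physical interface; adjust it if your endpoint listens on a different port.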
Binding WireGuard to a specific network namespace gives you the strongest isolation model — the WireGuard interface only exists inside the namespace you put it in, and nothing outside that namespace can route through it accidentally:
```bash
# Move an existing WireGuard interface into a specific namespace
ip link set wg0 netns vpn-only
# Or bring up WireGuard directly inside a namespace
ip netns exec vpn-only wg-quick up wg0
```
DNS leak prevention deserves specific attention. Even with all traffic routed through WireGuard, DNS queries can leak if they resolve before the tunnel is established or if a process uses a hardcoded resolver. The correct configuration:
```ini
# /etc/systemd/resolved.conf
# (systemd config files don't allow trailing comments on the same line,
# so each setting is annotated on the line above it)
[Resolve]
# Your trusted resolver, reachable through the tunnel
DNS=10.0.0.1
# Empty — no fallback to cleartext resolvers
FallbackDNS=
# Encrypt DNS queries
DNSOverTLS=yes
# Validate DNS responses
DNSSEC=yes
```
Combined with WireGuard's DNS= directive in the interface config, this ensures DNS queries go through the tunnel to a trusted resolver and never hit your ISP's resolver in plaintext.
6.3 nftables Default-Deny
The firewall layer is the last line of network defense, not the first — it does not replace namespace isolation but it does catch what namespaces don't address, particularly egress from the root namespace and inter-namespace communication that shouldn't exist.
Default-deny is the only rational policy for a security-focused machine: everything is blocked unless explicitly permitted. The alternative — default-allow with a blocklist — requires you to correctly predict and enumerate every possible unwanted connection, which is not achievable. Default-deny requires you to enumerate only what you actually need, which is a much smaller and more manageable set.
A practical nftables baseline for a hardened workstation:
```
# /etc/nftables.conf
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        # Allow established connections
        ct state established,related accept
        # Allow loopback
        iif lo accept
        # Allow WireGuard (if managing the port locally)
        udp dport 51820 accept
        # Allow SSH from specific addresses only
        # ip saddr 192.168.0.0/24 tcp dport 22 accept
        # Log and drop everything else
        log prefix "nft-input-drop: " drop
    }
    chain output {
        type filter hook output priority 0; policy drop;
        # Allow established connections
        ct state established,related accept
        # Allow loopback
        oif lo accept
        # Allow DNS through resolved (853 covers DNS-over-TLS)
        udp dport 53 accept
        tcp dport 53 accept
        tcp dport 853 accept
        # Allow HTTPS
        tcp dport 443 accept
        # Allow WireGuard tunnel establishment
        udp dport 51820 accept
        # Allow NTP
        udp dport 123 accept
        # Log unexpected outbound
        log prefix "nft-output-drop: " drop
    }
    chain forward {
        type filter hook forward priority 0; policy drop;
        # Explicitly permit inter-namespace forwarding you want
        # Everything else drops
        log prefix "nft-forward-drop: " drop
    }
}
```
Validate the syntax without loading it — nft -c -f /etc/nftables.conf — before applying. The logging directives are important during initial deployment — journalctl -f | grep nft- will show you what you're dropping in real time, which reveals both misconfigured rules and unexpected traffic from processes you didn't know were making connections.
Per-UID output rules are a powerful nftables capability that most users never exploit. Rather than permitting HTTPS from any process, you can permit it only from specific user IDs:
```
# Only allow outbound HTTPS from the 'browser' user account
# All other processes' HTTPS attempts are dropped
skuid browser tcp dport 443 accept
```
This is the nftables equivalent of per-process network namespacing but implemented at the firewall level — useful for constraining processes you can't easily put into a separate namespace.
6.4 For Pen Testers: Operating on Adversarial Networks
When you plug into a client network, connect to a CTF infrastructure, or work on a hostile engagement network, the threat model inverts — the network itself is potentially adversarial. Standard workstation network hygiene is insufficient. These specific practices matter:
Interface-in-namespace before anything else. When you connect to an untrusted network, put that interface in a dedicated namespace before any traffic flows:
```bash
# Bring up the interface in an isolated namespace
ip netns add engagement
ip link set enp130s0 netns engagement
# All engagement traffic is now isolated from your primary network stack
ip netns exec engagement dhclient enp130s0
# Your engagement shell
ip netns exec engagement /bin/bash
```
Your primary network stack — WireGuard tunnel, local services, personal data — is completely unreachable from the engagement namespace. A pivot from the engagement network into your machine reaches only the engagement namespace, not your main filesystem or services.
MAC address randomization for wireless. On untrusted wireless networks, your hardware MAC address is a persistent identifier that can track your presence across visits:
```bash
# Randomize MAC before connecting — fix the first octet to 02 so the
# address is locally administered and unicast (a fully random first
# octet can set the multicast bit, which the kernel rejects)
ip link set wlp129s0 down
ip link set wlp129s0 address \
    02:$(openssl rand -hex 5 | sed 's/\(..\)/\1:/g; s/:$//')
ip link set wlp129s0 up
```
NetworkManager can handle this automatically if configured:
```ini
# /etc/NetworkManager/NetworkManager.conf
[device]
wifi.scan-rand-mac-address=yes

[connection]
wifi.cloned-mac-address=random
ethernet.cloned-mac-address=random
```
IPv6 discipline. IPv6 is the most commonly overlooked network exposure on Linux workstations. Many users configure careful IPv4 firewall rules and leave IPv6 completely unmanaged, which means an IPv6-capable adversarial network can potentially reach services that your IPv4 rules would block. Either manage IPv6 explicitly or disable it entirely if you're not using it:
```bash
# Disable IPv6 system-wide if not needed
# /etc/sysctl.d/99-security.conf:
#   net.ipv6.conf.all.disable_ipv6 = 1
#   net.ipv6.conf.default.disable_ipv6 = 1
#   net.ipv6.conf.lo.disable_ipv6 = 1
# Apply without rebooting:
sysctl --system
```
Audit listening services before every engagement. Your machine should have a known-good baseline of what's listening, and you should verify against it before connecting to any untrusted network:
```bash
# Full picture of listening services
ss -tlnpu
# Compare against your known-good baseline
ss -tlnpu > /tmp/current_listeners.txt
diff /etc/security/listener_baseline.txt /tmp/current_listeners.txt
```
Anything unexpected in that diff is a conversation to have with yourself about why it's running before you put the machine on an adversarial network.
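Raw ss output diffs noisily — PIDs, file descriptors, and queue depths change every boot, so a naive diff fires constantly. Normalizing before comparing keeps the diff meaningful. A sketch; the field positions assume ss's default column layout (Netid, State, queues, local address, peer, process), so verify against your version's output:

```shell
#!/bin/sh
# normalize_listeners: reduce `ss -tlnpu` output to the stable columns
# (protocol, state, local address:port) so baseline diffs only fire on
# real changes, not PID churn.
normalize_listeners() {
    awk 'NR > 1 { print $1, $2, $5 }' | sort
}
# Usage:
#   ss -tlnpu | normalize_listeners > /tmp/current.txt
#   diff /etc/security/listener_baseline.txt /tmp/current.txt
```

Regenerate the baseline through the same filter, of course — the two sides of the diff must be normalized identically.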
6.5 Encrypted DNS — Closing the Last Plaintext Channel
Even with WireGuard routing all traffic, DNS deserves explicit attention because it is the one protocol that almost every application on your system uses and that leaks substantial information about your activity even in encrypted form. An observer who can see your DNS queries knows what sites you're visiting, what services you're connecting to, and potentially what tools you're running — without ever decrypting a single byte of your actual traffic.
DNS-over-TLS (DoT) encrypts DNS queries between your resolver and the upstream DNS server. Combined with routing that resolver traffic through your WireGuard tunnel, you get a chain where DNS queries are encrypted and routed through a trusted endpoint before going to an authoritative resolver.
```ini
# systemd-resolved with DoT to a trusted upstream
# /etc/systemd/resolved.conf
[Resolve]
DNS=1.1.1.1#cloudflare-dns.com 9.9.9.9#dns.quad9.net
DNSOverTLS=yes
DNSSEC=yes
FallbackDNS=
Cache=yes
```
For higher assurance, run a local unbound instance that only forwards over DoT and only through the WireGuard interface:
```
# /etc/unbound/unbound.conf (abbreviated)
server:
    interface: 127.0.0.1
    access-control: 127.0.0.0/8 allow
    # WireGuard interface address only
    outgoing-interface: 10.0.0.4
    tls-upstream: yes
forward-zone:
    name: "."
    forward-tls-upstream: yes
    forward-addr: 1.1.1.1@853#cloudflare-dns.com
    forward-addr: 9.9.9.9@853#dns.quad9.net
```
Binding unbound to your WireGuard interface address means DNS resolution physically cannot happen if the tunnel is down — a clean fail-safe that prevents queries from leaking to your ISP's resolver when the VPN drops.
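One leak class survives all of the above: a stray nameserver line in /etc/resolv.conf pointing somewhere other than your local resolver, left behind by a DHCP client or VPN tool. A quick check worth adding to your pre-engagement audit — a sketch that assumes the plain resolv.conf format (systemd-resolved's stub file normally satisfies it):

```shell
#!/bin/sh
# check_resolv FILE — succeed only if every nameserver line in the
# given resolv.conf-style file points at loopback.
check_resolv() {
    awk '/^nameserver/ && $2 != "127.0.0.1" && $2 != "::1" { bad = 1 }
         END { exit bad }' "$1"
}
# Usage: check_resolv /etc/resolv.conf || echo "non-loopback resolver present"
```

A non-zero exit means some process can resolve names outside your local unbound/resolved chain.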
Section 7: Platform-Specific Build Recommendations
The preceding sections have covered the full security stack in conceptual and technical depth. This section synthesizes that into concrete, hardware-specific recommendations — what you should prioritize, what you can realistically achieve, and what the honest ceiling is for each major platform configuration. Think of this as the decision guide you consult after running the hardware diagnostic block from Section 2 and knowing what your specific machine offers.
The recommendations are structured around the most common self-build configurations. Find your platform, understand your ceiling, and use the priority ordering to sequence your implementation work.
7.1 Intel Core 12th Generation and Newer (Alder Lake, Raptor Lake, Arrow Lake, Meteor Lake)
This is currently the strongest consumer platform for the hardening stack described in this article — not because Intel has a better security story than AMD at the high end, but because the Arrow Lake and Raptor Lake generations ship with a particularly clean vulnerability profile and a well-supported feature set that covers almost everything in this article.
What you have:
The feature set on these platforms is strong across the board. TPM2 is present via fTPM in the PCH. VT-d/IOMMU is present and well-supported. Intel CET — both IBT and Shadow Stack — is fully implemented and works with mainline kernels 5.18 and newer without any special configuration. Memory Protection Keys are present. AES-NI, SHA-NI, and VAES give you hardware-accelerated cryptography that makes encryption overhead essentially invisible. The CPU vulnerability table on Arrow Lake in particular is almost entirely clean — the accumulated microcode baggage of the Spectre/Meltdown era is largely absent on these newer architectures.
What you don't have:
SGX is absent on consumer desktop SKUs from 12th generation onward — the kernel config may show CONFIG_X86_SGX=y but there is no hardware to back it. Do not build anything around SGX on these platforms. TME (Total Memory Encryption) is absent on desktop silicon — present on some mobile Core Ultra SKUs, not on desktop. This means hardware memory encryption is not available and a kernel-level attacker can read process memory. Accept this and compensate architecturally as described in Section 5.
Priority ordering for implementation:
Work through these in sequence, stabilizing each before proceeding:
1. Secure Boot re-enabled, distro-signed chain verified
2. Custom MOK enrolled if running custom kernel builds
3. LUKS2 with Argon2id + TPM2 PCR sealing (PCRs 0+2+4+7)
4. Kernel lockdown=integrity + module.sig_enforce=1
5. intel_iommu=on iommu.strict=1
6. CET verification (likely already active — confirm with readelf)
7. IMA in fix mode → enforce after measurement database built
8. EVM enforce after IMA is stable
9. nftables default-deny + namespace isolation for high-risk processes
10. Encrypted DNS via systemd-resolved DoT or local unbound

The honest ceiling: Kernel compromise remains the required escalation for an attacker to access in-memory key material and process memory. The architecture described makes reaching kernel level require a working local privilege escalation exploit plus a kernel exploit — two separate, non-trivial capabilities. Automated and opportunistic attacks stop well before that. Targeted attacks by sophisticated adversaries with kernel exploit capability are outside the realistic personal-use threat model.
Specific note for Arrow Lake (Core Ultra 200 series): The microarchitecture change in Arrow Lake removed the hyperthreading that created cross-thread side-channel opportunities in earlier generations. The vulnerability table is clean. If you are on this generation, you are starting from the best available consumer baseline.
7.2 Intel 11th Generation and Older (Tiger Lake, Ice Lake, Comet Lake, Coffee Lake)
Older Intel generations have a more complicated picture — more vulnerability exposure, different feature availability, and in some cases actual SGX hardware that the newer generations dropped.
What changes relative to 12th gen and newer:
SGX may genuinely be present on Tiger Lake and Ice Lake platforms — check ls /dev/sgx_enclave. If it exists, you have real hardware enclave capability that 12th gen users don't. This is meaningful: key material and sensitive computations can be placed inside SGX enclaves where even a kernel-level attacker cannot extract them in plaintext. The Gramine project provides a practical framework for running existing applications inside SGX enclaves with modest porting effort. For credential storage, key management, and sensitive data handling, a working SGX implementation on these platforms closes a gap that otherwise requires the hardware memory encryption found on server silicon.
CET is present on Tiger Lake (11th gen) but absent on Comet Lake (10th gen) and Coffee Lake (8th/9th gen). Check your specific CPU flags: the presence of both ibt and shstk in /proc/cpuinfo confirms CET availability.
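A flag check like this can be scripted so it behaves identically across generations. A minimal sketch, assuming nothing beyond POSIX shell; the cet_support helper and the sample flag strings are illustrative, not part of any standard tool:

```shell
# Illustrative helper: report CET support given a CPU flags string.
# On a live system, feed it the real flags line:
#   cet_support "$(grep -m1 '^flags' /proc/cpuinfo)"
cet_support() {
    flags="$1"
    case " $flags " in
        *" ibt "*) ibt=yes ;; *) ibt=no ;;
    esac
    case " $flags " in
        *" shstk "*) shstk=yes ;; *) shstk=no ;;
    esac
    echo "IBT: $ibt  Shadow Stack: $shstk"
}

# Sample invocation with a made-up flags string
cet_support "fpu vme ibt shstk aes"
```

Both flags need to be present for full CET; a CPU reporting only one of them is a sign of a disabled or partial implementation worth investigating.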
The vulnerability profile on older generations is significantly worse. Spectre V2, MDS, L1TF, and various speculative execution vulnerabilities require active microcode mitigations that carry a real performance cost. Verify mitigations are active and not accidentally disabled:
```bash
grep -r "" /sys/devices/system/cpu/vulnerabilities/
# Every line should show "Mitigation:" not "Vulnerable"
# Any "Vulnerable" entry requires immediate attention
```
Additional kernel parameters for older Intel generations:
```bash
# Ensure speculative execution mitigations are not disabled
# (Some guides recommend disabling them for performance — wrong on a security build)
spectre_v2=on
spec_store_bypass_disable=prctl
mds=full,nosmt   # Disable hyperthreading if MDS is a concern
l1tf=full,force
```
Disabling hyperthreading (nosmt) is a significant performance cost — roughly 15-30% on threaded workloads — but closes MDS and certain L1TF attack vectors entirely. Whether that tradeoff is worth it depends on your specific CPU generation's vulnerability exposure and your workload. Check your specific vulnerability table first; on some stepping combinations the microcode mitigations are sufficient without nosmt.
Priority delta from the standard list: If SGX is present, add SGX enclave configuration for key material as a high-priority item between steps 3 and 4. Everything else follows the same sequence.
7.3 AMD Ryzen (Consumer Desktop — Zen 2 and Newer)
The AMD consumer platform is a strong foundation with one important difference from Intel: the SME (Secure Memory Encryption) availability question, which varies by specific SKU and is worth resolving before planning your architecture.
Checking for SME:
```bash
# Check if hardware supports SME
grep -w "sme" /proc/cpuinfo
# Check if SME is currently active
dmesg | grep -i "memory encryption\|sme\|amd_mem_enc"
# Also check kernel config
grep "CONFIG_AMD_MEM_ENCRYPT" /boot/config-$(uname -r)
```
If sme appears in your CPU flags and CONFIG_AMD_MEM_ENCRYPT=y is in your kernel config, you have the hardware and kernel support. Activating it:
```bash
# Add to kernel cmdline
mem_encrypt=on
# Verify activation after reboot
dmesg | grep "AMD Memory Encryption"
# Expected: AMD Memory Encryption: ENABLED (100000000 pages)
```
With SME active, DRAM contents are encrypted with a key generated fresh at each power cycle and stored in the CPU. A cold boot attack against a machine with SME active recovers only ciphertext — the key is gone when power is removed. This is a significant win that closes the hardware memory encryption gap without requiring server silicon.
SEV-SNP reminder: AMD SEV-SNP is EPYC/server only. Ryzen consumer desktop does not have it. The documentation and marketing around AMD confidential computing can create the impression that SEV-SNP is broadly available — it is not on your desktop Ryzen regardless of generation.
Platform Security Processor hygiene: Keep your motherboard BIOS updated regularly on AMD platforms — AGESA firmware updates often include PSP security patches. The PSP is your fTPM host and a trust anchor for the entire boot chain; its firmware security matters directly to your threat model. On Gigabyte AMD boards, the BIOS update utility is in the Q-Flash interface at boot.
AMD-specific IOMMU note: AMD-Vi generally has good coverage across Ryzen but IOMMU group assignments vary significantly by motherboard and PCIe slot configuration. After enabling amd_iommu=on, verify your GPU and other high-value devices are in appropriate groups:
```bash
# Check IOMMU group assignments
for iommu_group in /sys/kernel/iommu_groups/*/devices/*; do
    echo "Group $(basename $(dirname $iommu_group)): \
    $(lspci -nns ${iommu_group##*/} 2>/dev/null)"
done | sort -V
```
Devices sharing an IOMMU group cannot be fully isolated from each other. If your GPU shares a group with other devices, changing PCIe slot assignments on the motherboard sometimes resolves this — consult your motherboard manual for slot-to-IOMMU-group mapping.
**Priority ordering for AMD Ryzen:**
```
1. Secure Boot re-enabled and verified
2. BIOS/AGESA firmware current — check vendor site
3. Check and activate SME if available (mem_encrypt=on)
4. LUKS2 with Argon2id + TPM2 PCR sealing (PCRs 0+2+4+7)
5. Kernel lockdown=integrity + module.sig_enforce=1
6. amd_iommu=on iommu=strict + verify IOMMU group assignments
7. IMA/EVM sequence (fix → enforce)
8. nftables default-deny + namespace isolation
9. Encrypted DNS
```
If SME is available, step 3 is the highest single-value improvement available on this platform — it addresses the hardware memory encryption gap that Intel desktop users simply cannot close.
7.4 GPU-Specific Considerations by Vendor
GPU choice has a non-obvious impact on your kernel hardening stack, primarily through the module signing and IOMMU integration stories. Here is the practical breakdown.
AMD Discrete GPU (RDNA 2 and newer — recommended for security builds):
The amdgpu driver is fully open source, in the mainline kernel tree, and requires no proprietary kernel module. Module signing enforcement works cleanly — the driver is signed as part of the standard kernel module build. IOMMU integration is tight and well-tested. Firmware blobs are loaded through the standard Linux firmware loading mechanism, which IMA can measure and verify.
There is no carve-out required in your signing policy, no proprietary module to audit, and no additional steps needed beyond the standard hardening sequence. If you are buying hardware for a new security-focused build, an AMD GPU removes an entire category of integration headache.
Your current setup — AMD Radeon (RDNA architecture) on Intel Core Ultra — is actually the ideal configuration from a security standpoint. The open driver path is clean, and the Intel/AMD split means you have both the integrated Intel graphics (well-isolated via IOMMU) and the discrete AMD card in a known-good driver configuration.
NVIDIA (Current Hardware):
The path for NVIDIA on a security-hardened system is nvidia-open — NVIDIA's open kernel module, available for Turing (RTX 20 series) and newer. It is not the same as the Nouveau open-source driver; it is NVIDIA's own open-source kernel module that replaces the fully proprietary one.
```bash
# Install nvidia-open instead of the proprietary module
sudo apt install nvidia-kernel-open-dkms
# Verify which module is loaded
lsmod | grep nvidia
# Should show nvidia_drm, nvidia_modeset, nvidia_uvm — open variants
```
With nvidia-open and a properly configured DKMS signing setup, module signing enforcement is achievable. The open module can be signed with your MOK key through the standard DKMS signing workflow. Older cards — Maxwell, Pascal — are not supported by nvidia-open and require the proprietary module, which means accepting the signing gap or maintaining a specific unsigned-module carve-out in your policy.
For pen testers specifically: if you need CUDA for GPU-accelerated password cracking (hashcat, etc.) on a hardened system, nvidia-open is the path that preserves both CUDA functionality and module signing compatibility on supported hardware.
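If you go the nvidia-open route, the MOK key can be wired into DKMS so rebuilt modules are signed automatically. A configuration sketch only; the mok_signing_key and mok_certificate variables follow recent dkms releases, so confirm against man dkms and the framework.conf your distribution ships before relying on them:

```shell
# Sketch of /etc/dkms/framework.conf additions — point DKMS at the MOK
# key generated earlier so every rebuilt module (including nvidia-open)
# is signed as part of the DKMS build. Paths match this article's examples.
mok_signing_key="/root/MOK.key"
mok_certificate="/root/MOK.cer"   # DER-format certificate; some versions accept PEM
```

After changing this, force a rebuild (for example, dkms autoinstall) and confirm the resulting .ko carries a signature with modinfo before re-enabling enforcement.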
Intel Integrated Graphics:
Intel integrated graphics on 12th gen and newer (Arc architecture integrated into the SoC) uses the i915 driver, which is in-tree, well-audited, and entirely compatible with the full hardening stack. One specific IOMMU consideration: the dmesg line DMAR: Skip IOMMU disabling for graphics that appears on some Intel configurations indicates the firmware has requested that IOMMU protection be skipped for the integrated GPU — a legacy compatibility accommodation. Verify your IOMMU is still active for other devices even if this message appears.
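One reasonable heuristic for confirming the IOMMU is globally active despite that message is checking that /sys/kernel/iommu_groups is populated. A sketch; the iommu_active helper is illustrative, and its directory parameter exists only so the logic can be exercised against a test path:

```shell
# Succeeds when the given sysfs directory contains at least one IOMMU
# group. Defaults to the real sysfs path on a live system.
iommu_active() {
    groups_dir="${1:-/sys/kernel/iommu_groups}"
    [ -d "$groups_dir" ] && [ -n "$(ls -A "$groups_dir" 2>/dev/null)" ]
}

if iommu_active; then
    echo "IOMMU groups present: translation is active"
else
    echo "No IOMMU groups: check intel_iommu=on and the BIOS VT-d setting"
fi
```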
7.5 Motherboard Considerations
Motherboard firmware is the component most users update once and forget. For the threat model in this article — particularly for pen testers and anyone with a high-value machine — firmware currency is not optional hygiene, it is a security control.
Gigabyte Z890 / Z790 / B650 series (and similar high-end consumer boards):
Gigabyte boards support BIOS profile export and import — a useful feature for preserving your security configuration across firmware updates. Before every BIOS update, export your profile; after updating, verify Secure Boot settings and TPM configuration survived intact. Firmware updates sometimes reset Secure Boot to disabled or change key enrollment state.
```bash
# After any BIOS update, always verify immediately:
mokutil --sb-state
ls /dev/tpm*
tpm2_pcrread | head -10
# PCR 0 and PCR 2 values will have changed; that is expected,
# because BIOS updates change the firmware measurements
# Re-seal your LUKS TPM keyslot after verifying the new state is correct
```
**Firmware update workflow that preserves your security configuration:**
```
1. Export current BIOS profile to USB
2. Record current PCR values: tpm2_pcrread > /external/pre-update-pcrs.txt
3. Note LUKS passphrase is accessible (you will need it after the update)
4. Apply firmware update
5. Verify Secure Boot state — re-enable if reset
6. Verify TPM2 accessible
7. Boot — enter LUKS passphrase (TPM unsealing will fail due to PCR 0/2 change)
8. Re-seal TPM keyslot with new PCR values
9. Test automatic unsealing on next reboot
```
This procedure sounds tedious written out. In practice it takes about ten minutes and needs to happen at most a few times per year on a typical firmware update cadence. Missing any step — particularly the re-sealing — leaves you in a degraded state where the security chain is broken until corrected.
Discrete TPM modules: If your motherboard has a TPM header and you're currently relying on fTPM, a discrete TPM2 module is a modest security improvement. Discrete TPMs are separate microcontrollers with their own tamper-resistant storage; fTPMs run as firmware in the chipset and have had firmware vulnerabilities. The practical risk difference for a personal workstation threat model is small but non-zero. If you're building a new system and security is a primary concern, it's a worthwhile $20-30 addition.
7.6 A Note on Mixed Configurations
Many self-built machines end up with combinations not neatly covered by any single platform section — Intel CPU with AMD GPU, older CPU with newer motherboard, multiple storage devices with different encryption requirements. A few general principles for navigating mixed configurations:
The CPU determines your core security feature set — TPM2 interface, IOMMU capability, CET availability, and memory encryption availability all live in the CPU and chipset. Start your feature inventory with the CPU.
The GPU determines your module signing complexity — AMD simplifies it, NVIDIA requires attention, Intel integrated requires no special handling. This affects implementation sequencing but not the end-state achievability.
The motherboard determines firmware security hygiene — update cadence, Secure Boot key management, and TPM type. A good CPU in a poorly maintained motherboard is a security liability.
When features conflict: If enabling a feature on one component breaks another — the most common case being module signing enforcement breaking a proprietary driver — the correct resolution is almost always to find the open path for the problematic component rather than carving out an exception in the security policy. Exceptions in security policy have a way of becoming permanent and expanding over time. The open path, once found and configured, is self-maintaining.
Section 8: Recovery Planning — Because You Will Break Things
This section exists because every other section in this article describes ways to make your system refuse to boot, refuse to decrypt, or refuse to load drivers when something doesn't match expectations. That is precisely what a hardened system is supposed to do. The controls we've implemented are not polite suggestions — they are hard enforcement mechanisms that will trigger on misconfiguration, on software updates that change measured state, on a BIOS update you forgot would affect PCR values, and on mistakes you make at two in the morning when you're trying to fix something else.
None of that is a reason not to implement these controls. It is a reason to have a recovery infrastructure that is as carefully built as the hardened system itself. A security configuration you abandon after the first unrecoverable lockout is worse than no security configuration — it gives you false confidence during the window it was running and leaves you on a default install afterward.
The philosophy here is the same one that underlies the whole article: threat model first, then architecture. The threat you're modeling in recovery planning is yourself — specifically, future you, under pressure, locked out of an encrypted drive, trying to remember what PCR indices you sealed to three months ago. Plan for that person. Leave them good tools.
8.1 The Non-Negotiable Pre-Hardening Checklist
Before you change a single security setting on a machine you care about, these items must exist. Not "should exist" — must. This is the minimum viable recovery kit.
Full block device backup:
```bash
# Image the entire NVMe to your external drive
# Adjust device paths to match your configuration
dd if=/dev/nvme0n1 \
    of=/mnt/external/kubuntu_baseline_$(date +%Y%m%d).img \
    bs=4M status=progress conv=fsync
# For large drives where most space is empty,
# partclone is more space-efficient
sudo apt install partclone
partclone.ext4 -c -s /dev/nvme0n1p3 \
    -o /mnt/external/root_partition_$(date +%Y%m%d).img
```
At NVMe speeds, a full dd image of the drive takes time but runs unattended. Start it before you go to bed the night before your first hardening session. Wake up to a complete baseline.
LUKS header backup — mandatory, not optional:
```bash
# Back up every LUKS volume header separately
# Root NVMe partition
cryptsetup luksHeaderBackup /dev/nvme0n1p3 \
    --header-backup-file /external/luks_nvme0n1p3_$(date +%Y%m%d).img
# External 4TB drive
cryptsetup luksHeaderBackup /dev/sdb1 \
    --header-backup-file /external/luks_sdb1_$(date +%Y%m%d).img
# Verify each backup
cryptsetup isLuks --verbose \
    --header /external/luks_nvme0n1p3_$(date +%Y%m%d).img \
    /external/luks_nvme0n1p3_$(date +%Y%m%d).img \
    && echo "Header backup verified"
```
These files are small — typically 2-4MB each — and contain everything needed to recover a corrupted LUKS header. Without them, a corrupted header means permanent data loss regardless of whether you know the passphrase. Store them somewhere physically separate from the drives they back up.
PCR state snapshot:
```bash
# Record full PCR state before any changes
tpm2_pcrread > /external/pcr_baseline_$(date +%Y%m%d).txt
# Also record which PCRs you seal to — document your own configuration
cat > /external/tpm2_sealing_config.txt << EOF
Date: $(date)
Device: /dev/nvme0n1p3
UUID: $(cryptsetup luksDump /dev/nvme0n1p3 | grep UUID | head -1)
Sealed PCRs: 0+2+4+7
Kernel version at sealing: $(uname -r)
EOF
```
The PCR snapshot becomes invaluable when you're debugging why the TPM won't unseal — you can diff the current PCR state against the baseline and immediately see which register changed and infer what component modification caused it.
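That diff can be wrapped in a small helper so the comparison is repeatable. The pcr_diff function and the file paths shown are illustrative; it treats both dumps as plain text:

```shell
# Compare a saved tpm2_pcrread dump against a current one and print only
# the registers whose values differ. Both arguments are plain text files.
pcr_diff() {
    if diff -u "$1" "$2" > /tmp/pcr_delta.$$; then
        echo "No PCR changes since baseline"
    else
        # Keep changed lines, drop the ---/+++ file headers
        grep '^[+-]' /tmp/pcr_delta.$$ | grep -v '^[+-][+-]'
    fi
    rm -f /tmp/pcr_delta.$$
}

# Usage on a live system (paths are examples):
#   tpm2_pcrread > /tmp/pcr_now.txt
#   pcr_diff /external/pcr_baseline_YYYYMMDD.txt /tmp/pcr_now.txt
```

A changed PCR 0 or 2 points at firmware, 4 at the bootloader or kernel, 7 at Secure Boot policy — which usually identifies the culprit immediately.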
LUKS passphrase written down, stored physically offline:
This cannot be a digital note, a password manager entry on the same machine, or a file on the encrypted drive itself. It must be physical — written on paper, stored somewhere secure and separate from the machine. During the experimentation phase, you will enter this passphrase more than you expect. After the configuration stabilizes you may not need it for months, and then you will need it urgently during a kernel update that changes PCR values. The paper copy needs to exist and needs to be findable.
BIOS profile export:
On your Gigabyte Z890, enter the UEFI firmware interface (Delete at POST), navigate to the Save & Exit or Favorites section, and export your current profile to a USB drive. This captures your Secure Boot key enrollment, boot order, TPM settings, and all other firmware configuration. A BIOS update or accidental reset that wipes these settings can be recovered in minutes with a profile restore rather than hours of manual reconfiguration.
Verified recovery USB:
```bash
# Download your distro's live ISO and verify its signature
# Ubuntu example:
wget https://releases.ubuntu.com/24.04/ubuntu-24.04-desktop-amd64.iso
wget https://releases.ubuntu.com/24.04/SHA256SUMS
wget https://releases.ubuntu.com/24.04/SHA256SUMS.gpg
# Verify the signature
gpg --keyserver hkp://keyserver.ubuntu.com \
    --recv-keys 0x843938DF228D22F7B3742BC0D94AA3F0EFE21092
gpg --verify SHA256SUMS.gpg SHA256SUMS
sha256sum -c SHA256SUMS --ignore-missing
# Write to USB (confirm the target device with lsblk first)
dd if=ubuntu-24.04-desktop-amd64.iso of=/dev/sda bs=4M status=progress
```
The recovery USB must include tpm2-tools and cryptsetup at minimum. Ubuntu live environments include cryptsetup; install tpm2-tools in the live session if needed with sudo apt install tpm2-tools. Test that the USB actually boots and can access your encrypted volumes before you need it as a recovery tool.
8.2 The Failure Mode Reference
Understanding what each failure mode looks like and how to recover from it before you encounter it under pressure is the difference between a thirty-minute recovery and a four-hour panic.
Failure: TPM refuses to unseal after kernel update
Symptom: System boots, prompts for LUKS passphrase instead of unlocking automatically. No error message — it just asks for the passphrase.
Cause: Kernel update changed PCR 4 (and possibly PCR 9). The new PCR values don't match the sealed policy.
Recovery:
```bash
# 1. Enter LUKS passphrase to boot normally
# 2. Verify the new kernel is running correctly
uname -r
# 3. Find and remove the old TPM2 token
cryptsetup luksDump /dev/nvme0n1p3 | grep -B2 -A5 "tpm2"
# Note the token number
systemd-cryptenroll --wipe-slot=tpm2 /dev/nvme0n1p3
# Or specify by slot number:
# systemd-cryptenroll --wipe-slot=N /dev/nvme0n1p3
# 4. Re-enroll with current PCR state
systemd-cryptenroll --tpm2-device=auto \
    --tpm2-pcrs=0+2+4+7 /dev/nvme0n1p3
# 5. Test on next reboot
```
Prevention: This is expected behavior, not a failure. Budget five minutes after every kernel update for re-sealing. Consider setting up a post-update hook once you've done the manual procedure enough times to trust your automation.
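A minimal sketch of such a hook, assuming a Debian-family layout where executables in /etc/kernel/postinst.d/ run with the new kernel version as their first argument. The script name is hypothetical, and it deliberately only reminds rather than re-enrolling anything automatically:

```shell
#!/bin/sh
# Hypothetical /etc/kernel/postinst.d/zz-tpm2-reseal-reminder
# Prints a reminder whenever a new kernel is installed; it does not
# touch the TPM or the LUKS keyslots itself.
reseal_reminder() {
    kver="$1"
    echo "Kernel ${kver:-unknown} installed: PCR 4 will change on next boot."
    echo "Expect a LUKS passphrase prompt, then re-enroll the TPM2 keyslot:"
    echo "  systemd-cryptenroll --wipe-slot=tpm2 <device>"
    echo "  systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=0+2+4+7 <device>"
}

reseal_reminder "$1"
```

Keeping the hook advisory avoids the failure mode where automation re-seals against a kernel you have not yet verified boots correctly.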
Failure: Module signing enforcement breaks boot-critical driver
Symptom: System fails to boot or loses network/storage access after enabling module.sig_enforce=1. dmesg shows module verification failed for specific modules.
Cause: A module required during boot — often in the initramfs — is not signed or is signed with an untrusted key.
Recovery:
```bash
# Boot with module signing temporarily disabled
# Add to kernel cmdline at GRUB prompt (press 'e' to edit):
module.sig_enforce=0
# Once booted, identify the failing module
dmesg | grep -i "module verification\|required key"
# Sign it with your MOK key
/usr/src/linux-headers-$(uname -r)/scripts/sign-file \
    sha512 /root/MOK.key /root/MOK.crt \
    /lib/modules/$(uname -r)/path/to/module.ko
# Rebuild initramfs to include the signed version
update-initramfs -u -k all
# Re-enable module signing enforcement
# Remove module.sig_enforce=0 from cmdline and reboot
```
Failure: IMA enforce mode prevents boot
Symptom: System hangs or fails during boot with cryptic access denied errors. Files that were accessible before are suddenly inaccessible.
Cause: IMA enforcement enabled before the measurement database was fully built. Files that have never been executed lack security.ima extended attributes and are denied execution in enforce mode.
Recovery:
```bash
# Boot with IMA appraisal in fix mode instead of enforce
# Add to kernel cmdline at GRUB prompt:
ima_appraise=fix
# Once booted, rebuild the IMA attribute database for all files in the
# policy scope — in fix mode, accessing a file writes its missing
# security.ima extended attribute
find / -fstype ext4 \
    -not -path "/proc/*" \
    -not -path "/sys/*" \
    -not -path "/dev/*" \
    -exec head -c 1 {} \; > /dev/null 2>&1
# Re-run your normal system workload to capture remaining files
# Verify the database looks complete before enforcing
cat /sys/kernel/security/ima/ascii_runtime_measurements | wc -l
# Then change cmdline: ima_appraise=fix → ima_appraise=enforce
```
Failure: Secure Boot re-enabling breaks GRUB
Symptom: After re-enabling Secure Boot, GRUB either fails to load entirely or loads but won't boot the kernel.
Cause: Either the bootloader wasn't installed in UEFI mode, the GRUB binary isn't signed, or a shim configuration issue.
Recovery:
```bash
# Boot from recovery USB with Secure Boot temporarily disabled in UEFI
# Mount your system
cryptsetup open /dev/nvme0n1p3 luks-root
mount /dev/mapper/luks-root /mnt
mount /dev/nvme0n1p1 /mnt/boot/efi   # Adjust EFI partition as needed
# Chroot in
for d in dev proc sys run; do mount --bind /$d /mnt/$d; done
chroot /mnt
# Reinstall GRUB in UEFI mode with Secure Boot support
apt install --reinstall grub-efi-amd64-signed shim-signed
grub-install --target=x86_64-efi --efi-directory=/boot/efi
update-grub
# Exit chroot, unmount, reboot
# Re-enable Secure Boot in UEFI
```
Failure: LUKS header corruption
Symptom: cryptsetup open fails with "Device /dev/nvme0n1p3 is not a valid LUKS device" or similar. Drive appears to exist but cannot be decrypted.
Cause: Header corruption from power loss during a key operation, a failed conversion, or physical storage error.
Recovery:
```bash
# This is why the header backup is mandatory
cryptsetup luksHeaderRestore /dev/nvme0n1p3 \
    --header-backup-file /external/luks_nvme0n1p3_YYYYMMDD.img
# Verify restoration
cryptsetup luksDump /dev/nvme0n1p3
# Attempt to open with passphrase
cryptsetup open /dev/nvme0n1p3 luks-root
```
Without the header backup, this scenario has no recovery path. The data is unrecoverable. This is why that backup exists.
Failure: EVM enforce mode breaks file access
Symptom: After enabling evm=enforce, file operations fail with permission errors even for root. System may become unusable.
Cause: EVM HMAC database incomplete or EVM key not properly initialized before enforcement.
Recovery:
```bash
# Boot with EVM in non-enforcing (fix) mode
# Add to kernel cmdline:
evm=fix
# Rebuild the HMAC database
# EVM requires the EVM key to be loaded first
keyctl add trusted evm-key "new 32" @u
# Initialize EVM (fix mode builds the database without enforcing)
echo "1" > /sys/kernel/security/evm
# Run a full system workload to populate the database
# Then verify the EVM state before switching to enforce
cat /sys/kernel/security/evm
```
8.3 The Live USB as Master Key
Your recovery USB is not just a rescue tool — it is the master key to every recovery operation described above. It can access your encrypted volumes (with the passphrase), modify your boot configuration, restore LUKS headers, re-sign kernel modules, and re-enroll TPM2 keyslots. Treat it with the security consideration that implies.
The recovery USB should be:
- Stored physically secure, not left plugged in or in a visible location
- Tested before it is needed — boot it, confirm it can run cryptsetup and tpm2-tools, confirm it can mount your encrypted volumes
- Updated when your distribution releases significant updates — a live USB that is two years old may lack the tools or kernel support needed for your current configuration
- Encrypted if it contains your MOK private key — a LUKS-encrypted partition on the USB with a strong passphrase protects your signing key while keeping it accessible when you genuinely need it
Essential tools to verify are present on the live environment:
```bash
# Run these from the live USB session to verify your toolkit
which cryptsetup && cryptsetup --version
which tpm2_pcrread && tpm2_pcrread --version
which mokutil && mokutil --version
which openssl && openssl version
which sbsign && sbsign --version   # For re-signing kernels
which dd && which partclone        # For backup/restore operations
```
If any of these are missing from the live environment, install them in the live session with apt install before using the USB for recovery. Consider creating a custom live ISO with all tools pre-installed if recovery scenarios are frequent enough to warrant it.
8.4 Staged Implementation as Risk Management
The most effective way to avoid needing the recovery procedures above is to implement changes in stages with full testing between each stage. This sounds obvious and is routinely ignored.
A safe staging cadence:
One change per session. Enable Secure Boot in one session. Verify it's working. Come back the next day for TPM2 PCR sealing. Do not stack changes on top of untested changes.
Test the adverse case deliberately. After sealing LUKS to TPM PCRs, deliberately make a change that should break unsealing — add a kernel parameter, modify the cmdline — and verify the system correctly demands a passphrase. If it unseals anyway, your sealing configuration is not doing what you think. Better to discover this during controlled testing than during an actual security incident.
Keep a running log. A text file recording what you changed, when, and what the PCR values were at the time costs almost nothing and saves significant time when debugging a recovery scenario three months later.
```bash
# /root/hardening_log.txt — add an entry after every change
echo "$(date): Enrolled TPM2 keyslot on nvme0n1p3, PCRs 0+2+4+7, kernel $(uname -r)" \
    >> /root/hardening_log.txt
tpm2_pcrread >> /root/hardening_log.txt
echo "---" >> /root/hardening_log.txt
```
That log, combined with the PCR baseline snapshots, is everything you need to reconstruct what state the system was in at any point in its hardening history.
Section 9: Practical Implementation Sequence
Everything in the preceding sections has been building toward this: a concrete, ordered sequence of implementation steps that takes a standard self-built Linux machine from its default configuration to a genuinely hardened one. The sequence matters as much as the individual steps. Controls that depend on other controls being stable first will cause failures if applied out of order. Changes that are individually benign become problematic when stacked on top of untested changes.
This section is structured as a phased checklist. Each phase should be fully stable and tested before the next phase begins. "Stable" means: system boots cleanly, all intended functionality works, you have verified both the expected behavior and the adverse case, and you've documented what you changed and what the system state is. A phase that takes a week to stabilize is not a phase that went wrong — it's a phase that went right.
The checklist is written for a systemd-based distribution — Ubuntu, Kubuntu, Fedora, Arch with systemd — on the Intel Core Ultra / AMD Ryzen class hardware described throughout this article. Adjust package manager commands and file paths for your specific distribution where noted.
Phase 1 — Foundation
Everything depends on this phase being solid. Do not proceed to Phase 2 until every item here is verified.
1.1 Create the recovery baseline
```bash
# Identify all LUKS devices
lsblk -o NAME,FSTYPE | grep -i crypt
# Back up LUKS headers for every encrypted volume
cryptsetup luksHeaderBackup /dev/nvme0n1p3 \
    --header-backup-file /mnt/external/luks_nvme0n1p3_$(date +%Y%m%d).img
# Verify the backup
cryptsetup isLuks --verbose \
    --header /mnt/external/luks_nvme0n1p3_$(date +%Y%m%d).img \
    /mnt/external/luks_nvme0n1p3_$(date +%Y%m%d).img
# Full partition image (run overnight if needed)
dd if=/dev/nvme0n1 \
    of=/mnt/external/nvme0n1_baseline_$(date +%Y%m%d).img \
    bs=4M status=progress conv=fsync
# Record PCR baseline
tpm2_pcrread > /mnt/external/pcr_baseline_$(date +%Y%m%d).txt
# Record all UUIDs
blkid > /mnt/external/blkid_$(date +%Y%m%d).txt
lsblk -f >> /mnt/external/blkid_$(date +%Y%m%d).txt
```
- LUKS header backup exists and is verified
- Full partition image complete
- PCR baseline recorded
- UUIDs recorded externally
- LUKS passphrase written on paper, stored physically offline
- BIOS profile exported to USB from UEFI firmware interface
1.2 Verify TPM2 is present and functional
```bash
# Confirm TPM2 devices exist
ls -la /dev/tpm*
# Expected: /dev/tpm0 and /dev/tpmrm0
# Install tpm2-tools if not present
sudo apt install tpm2-tools   # Debian/Ubuntu
sudo dnf install tpm2-tools   # Fedora
# Verify TPM2 responds
tpm2_getcap properties-fixed | head -20
# Read current PCR state — should return values, not errors
tpm2_pcrread sha256:0,2,4,7
```
- /dev/tpm0 present
- tpm2-tools installed
- tpm2_pcrread returns values without errors
1.3 Enable Secure Boot
Re-enable Secure Boot in UEFI firmware. On Gigabyte Z890 boards, enter UEFI with Delete at POST, navigate to Settings → Security → Secure Boot, enable it, save and exit.
```bash
# After reboot, verify Secure Boot is active
mokutil --sb-state
# Expected: SecureBoot enabled
# Verify the boot is UEFI mode
[ -d /sys/firmware/efi ] && echo "UEFI confirmed" || echo "Legacy mode — fix this first"
# Check for any module verification failures from the boot
dmesg | grep -i "module verification\|required key\|could not insert"
# Any output here needs to be resolved before proceeding
```
If Secure Boot re-enablement causes any boot failure, refer to Section 8.2 for the GRUB recovery procedure before continuing.
- Secure Boot enabled in UEFI
- mokutil --sb-state returns enabled
- No module verification failures in dmesg
- All intended functionality (network, GPU, storage) works normally
1.4 Verify LUKS2 format and Argon2id
```bash
# Check LUKS version on every encrypted device
cryptsetup luksDump /dev/nvme0n1p3 | grep "Version:"
# If Version: 1, convert (from live USB, volume unmounted):
# cryptsetup convert --type luks2 /dev/nvme0n1p3
# Verify Argon2id on passphrase keyslot
cryptsetup luksDump /dev/nvme0n1p3 | grep -A3 "PBKDF:"
# Expected: argon2id
# If pbkdf2: re-enroll passphrase with argon2id as described in Section 5
```
- All LUKS volumes confirmed as version 2
- All passphrase keyslots confirmed using Argon2id
- LUKS header backups updated if any conversions were performed
1.5 Enroll custom MOK keys (if building custom kernels)
Skip this step if running an unmodified distro kernel — the distro signing chain is sufficient for Phase 1. Return to this step before Phase 3 if you plan to build custom kernels.
bash
# Generate MOK key pair
openssl req -new -x509 -newkey rsa:4096 \
-keyout /root/MOK.key -out /root/MOK.crt \
-days 3650 -subj "/CN=$(hostname)-MOK/" -nodes
# Convert to DER for enrollment
openssl x509 -in /root/MOK.crt -outform DER -out /root/MOK.cer
# Request enrollment
mokutil --import /root/MOK.cer
# Reboot and confirm enrollment in MOK manager at next boot promptbash
# After reboot, verify MOK enrollment
mokutil --list-enrolled | grep -A3 "$(hostname)-MOK"
# Back up the MOK private key externally — critical
cp /root/MOK.key /mnt/external/MOK_$(hostname)_$(date +%Y%m%d).key
# Encrypt it:
gpg --symmetric --cipher-algo AES256 \
/mnt/external/MOK_$(hostname)_$(date +%Y%m%d).key
- MOK key pair generated (if applicable)
- MOK enrolled and confirmed via mokutil --list-enrolled
- MOK private key backed up to encrypted external storage
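Once the MOK is enrolled, any module you build must be signed with it before Section 3.2's enforcement lands. A sketch, assuming the Debian/Ubuntu kernel-headers layout (Fedora ships sign-file under /usr/src/kernels/&lt;release&gt;/scripts instead); example.ko is a hypothetical module name.

```shell
#!/bin/bash
# Sketch: sign an out-of-tree module with the MOK pair from 1.5 so it
# still loads once module.sig_enforce=1 is active. sign-file ships with
# the kernel build tree; the path below assumes Debian/Ubuntu headers.

signfile_path() {
    # Locate the sign-file helper for a given kernel release
    # (Fedora: /usr/src/kernels/<release>/scripts/sign-file)
    echo "/usr/src/linux-headers-$1/scripts/sign-file"
}

sign_module() {
    local krel="$1" key="$2" crt="$3" ko="$4"
    "$(signfile_path "$krel")" sha256 "$key" "$crt" "$ko"
}

# Usage (as root, hypothetical example.ko):
#   sign_module "$(uname -r)" /root/MOK.key /root/MOK.crt ./example.ko
#   modinfo ./example.ko | grep -i signer   # should show your MOK CN
```

Signing must be repeated each time the module is rebuilt; the signature is appended to the .ko file itself.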
Phase 2 — Storage Binding
Phase 1 must be fully stable before starting Phase 2. You will need the LUKS passphrase during this phase — confirm it's accessible before proceeding.
2.1 Enroll TPM2 as LUKS keyslot
bash
# Record PCR values immediately before enrollment
tpm2_pcrread sha256:0,2,4,7 | tee /mnt/external/pcr_pre_enrollment_$(date +%Y%m%d).txt
# Enroll TPM2
systemd-cryptenroll --tpm2-device=auto \
--tpm2-pcrs=0+2+4+7 /dev/nvme0n1p3
# Verify the new keyslot appears
cryptsetup luksDump /dev/nvme0n1p3 | grep -A5 "tpm2"
# Update /etc/crypttab to use TPM2 for automatic unlock
# Edit the line for your root device to add tpm2-device=auto:
# Example:
# luks-uuid UUID=your-uuid none tpm2-device=auto,discard
sudoedit /etc/crypttab
# Rebuild initramfs to include the TPM2 unlock hook
sudo update-initramfs -u -k all # Debian/Ubuntu
sudo dracut --force # Fedora
- TPM2 keyslot enrolled
- /etc/crypttab updated
- Initramfs rebuilt
2.2 Verify TPM2 unsealing works
bash
# Reboot — system should unlock without passphrase prompt
sudo reboot
# After reboot, confirm TPM2 was used for unlock
journalctl -b | grep -i "tpm\|cryptsetup\|unlocked"
- System reboots and unlocks without passphrase
- Journal confirms TPM2-based unlock
2.3 Verify the sealing is actually enforcing — critical test
This test is not optional. It proves the PCR sealing is working rather than assuming it.
bash
# Make a trivial change to kernel cmdline
sudoedit /etc/default/grub
# Add a harmless parameter like 'quiet' or a comment — anything that changes the cmdline
sudo update-grub
# Reboot — system should NOW prompt for passphrase
sudo reboot
# If it unseals automatically despite the cmdline change,
# your PCR policy is not covering what you think it is
# Review which PCRs you sealed to and adjust
- System correctly demands passphrase after cmdline modification
- Confirmed PCR sealing is enforcing
2.4 Restore correct configuration and re-seal
bash
# Revert the test cmdline change
sudoedit /etc/default/grub
sudo update-grub
# Remove the test TPM2 keyslot and re-enroll cleanly
systemd-cryptenroll --wipe-slot=tpm2 /dev/nvme0n1p3
systemd-cryptenroll --tpm2-device=auto \
--tpm2-pcrs=0+2+4+7 /dev/nvme0n1p3
# Reboot and confirm clean automatic unsealing
sudo reboot
- Cmdline restored
- TPM2 keyslot re-enrolled cleanly
- System unseals automatically on correct configuration
- LUKS header backup updated:
cryptsetup luksHeaderBackup /dev/nvme0n1p3 --header-backup-file /mnt/external/luks_nvme0n1p3_post_tpm2_$(date +%Y%m%d).img
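After any future firmware or bootloader update, you can predict whether the next boot will unseal automatically by diffing the live PCR values against the snapshot from 2.1. A sketch; pass the dated baseline file you recorded as the first argument.

```shell
#!/bin/bash
# Sketch: compare current PCR values against the snapshot from 2.1.
# Usage: pcr-drift-check <baseline-snapshot-file>

pcr_drift() {
    # Pure comparison of two pcrread snapshots; prints differing lines
    diff "$1" "$2" 2>/dev/null | grep '^[<>]' || true
}

baseline="$1"
if [ -n "$baseline" ]; then
    current="$(mktemp)"
    tpm2_pcrread sha256:0,2,4,7 > "$current" 2>/dev/null || true
    if [ -z "$(pcr_drift "$baseline" "$current")" ]; then
        echo "PCRs match baseline - automatic unseal expected"
    else
        echo "PCR DRIFT - expect a passphrase prompt on next boot:"
        pcr_drift "$baseline" "$current"
    fi
    rm -f "$current"
else
    echo "usage: pcr-drift-check <baseline-snapshot>"
fi
```

Drift after a known, intentional change (BIOS update, new bootloader) is expected; unseal with the passphrase and re-enroll as in 2.4. Drift with no known cause deserves investigation before re-sealing.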
Phase 3 — Kernel Hardening
Phase 2 must be fully stable. Phase 3 changes are applied incrementally — one kernel parameter group per reboot cycle.
3.1 Enable kernel lockdown
bash
# Add to GRUB_CMDLINE_LINUX in /etc/default/grub
sudoedit /etc/default/grub
# Add: lockdown=integrity
sudo update-grub
sudo reboot
# Verify after reboot
cat /sys/kernel/security/lockdown
# Expected output: none [integrity] confidentiality
# The brackets indicate the active mode
# Check for any functionality breakage
dmesg | grep -i "lockdown\|locked down"
- lockdown=integrity active and confirmed
- No unexpected functionality loss
- TPM still unseals correctly (lockdown does not affect LUKS)
3.2 Enable module signing enforcement
bash
# Add to GRUB_CMDLINE_LINUX
# module.sig_enforce=1
sudo update-grub
# Before rebooting, verify all currently loaded modules are signed
for mod in $(lsmod | awk 'NR>1 {print $1}'); do
modinfo $mod | grep -q "sig_key" || echo "UNSIGNED: $mod"
done
# Any unsigned modules here will fail to load after enforcement
# Sign them with your MOK key or remove them before proceeding
bash
sudo reboot
# Verify enforcement is active
dmesg | grep -i "module.*sig\|sig_enforce"
# Attempt to load an unsigned test module — should fail
# (use a known unsigned test module or simply verify dmesg behavior)
- module.sig_enforce=1 active
- No boot-critical unsigned modules
- All loaded modules verified signed
3.3 Enable strict IOMMU
bash
# Intel: add intel_iommu=on iommu.strict=1
# AMD: the IOMMU is enabled by default; add iommu.strict=1
sudoedit /etc/default/grub
sudo update-grub
sudo reboot
# Verify IOMMU active and strict
dmesg | grep -iE "iommu|dmar" | grep -iE "enabled|strict|translated"
# Confirm GPU in correct IOMMU group
for dev in /sys/kernel/iommu_groups/*/devices/*; do
  group="${dev%/devices/*}"
  echo "Group $(basename "$group"): $(lspci -nns "${dev##*/}" 2>/dev/null)"
done | sort -V | grep -iE "vga|display|3d"
- IOMMU active and in strict mode
- GPU in appropriate IOMMU group
- No DMA-related errors in dmesg
3.4 Configure sysctl security parameters
bash
# Create a security-focused sysctl configuration
sudo tee /etc/sysctl.d/99-hardening.conf << 'EOF'
# Restrict ptrace to processes with CAP_SYS_PTRACE (admin-only)
kernel.yama.ptrace_scope = 2
# Restrict dmesg to root
kernel.dmesg_restrict = 1
# Restrict kernel pointer exposure
kernel.kptr_restrict = 2
# Disable core dumps for setuid processes
fs.suid_dumpable = 0
# Restrict perf to root
kernel.perf_event_paranoid = 3
# Network hardening
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv6.conf.all.accept_redirects = 0
net.ipv4.tcp_syncookies = 1
# Disable IPv6 if not in use
# net.ipv6.conf.all.disable_ipv6 = 1
# net.ipv6.conf.default.disable_ipv6 = 1
EOF
sudo sysctl --system
# Verify key parameters took effect
sysctl kernel.yama.ptrace_scope
sysctl kernel.dmesg_restrict
- Sysctl parameters applied and verified
- ptrace_scope=2 confirmed
- dmesg_restrict=1 confirmed
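Re-checking the whole set after a future kernel or package update is easier with a loop over expected key/value pairs. A sketch covering the parameters configured above; extend the list as you add parameters.

```shell
#!/bin/bash
# Sketch: verify each hardening parameter from 99-hardening.conf took
# effect. Expected values mirror the configuration above.

check_param() {
    # Pure comparison: report OK/FAIL for a key given expected and actual
    if [ "$2" = "$3" ]; then
        echo "OK   $1 = $3"
    else
        echo "FAIL $1 expected=$2 actual=$3"
    fi
}

while read -r key expected; do
    actual="$(sysctl -n "$key" 2>/dev/null)"
    check_param "$key" "$expected" "$actual"
done << 'EOF'
kernel.yama.ptrace_scope 2
kernel.dmesg_restrict 1
kernel.kptr_restrict 2
fs.suid_dumpable 0
kernel.perf_event_paranoid 3
net.ipv4.tcp_syncookies 1
EOF
```

Any FAIL line means either the sysctl file was not applied or another configuration fragment is overriding it; check with `sysctl --system` output and the ordering of files in /etc/sysctl.d/.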
Phase 4 — Runtime Integrity
This phase has the highest footgun potential. Read Section 4.4 completely before starting. Have recovery USB ready and LUKS passphrase accessible.
4.1 Deploy IMA in measurement mode
bash
# Add to kernel cmdline — fix mode only, no enforcement yet
# ima_appraise=fix ima_policy=tcb ima_policy=appraise_tcb ima_hash=sha256
# (ima_policy may be given twice: tcb measures, appraise_tcb appraises)
sudoedit /etc/default/grub
sudo update-grub
sudo reboot
# Verify IMA is measuring
cat /sys/kernel/security/ima/ascii_runtime_measurements | head -5
# Should show growing measurement log entries
# Let the system run through complete normal workload
# At minimum: log in, launch all applications you normally use,
# run your typical workflows. The longer you run in fix mode
# the more complete the measurement database becomes.
- IMA measurement mode active
- Measurement log populating (wc -l /sys/kernel/security/ima/ascii_runtime_measurements growing)
- System run through complete normal workload in fix mode
- No unexpected access denials in fix mode
4.2 Verify measurement database before enforcement
bash
# Count measured files
wc -l /sys/kernel/security/ima/ascii_runtime_measurements
# Check for any files accessed that lack IMA signatures
# (these will be blocked in enforce mode)
find /usr /bin /sbin /lib /lib64 -type f -executable \
-exec getfattr -n security.ima {} \; 2>/dev/null | \
grep -c "security.ima"
# Compare counts — if many executables lack signatures,
# do not enable enforcement yet
# Use evmctl (from the ima-evm-utils package) to sign files:
# sudo apt install ima-evm-utils
4.3 Transition to IMA enforcement
Only proceed when confident the measurement database is complete:
bash
# Change kernel cmdline from ima_appraise=fix to ima_appraise=enforce
sudoedit /etc/default/grub
sudo update-grub
# Have recovery USB physically accessible before this reboot
sudo reboot
# If system boots normally, verify enforcement is active
dmesg | grep -i "ima.*enforce\|appraise"
# Test that a tampered binary is blocked
# (In a test environment — do not tamper production binaries)
- IMA enforcement active
- System boots cleanly under enforcement
- All normal functionality intact under enforcement
- Recovery USB confirmed accessible in case of enforcement issues
Phase 5 — Network Hardening
Phase 5 is largely independent of the previous phases and can be done in parallel with Phase 3 if desired. It is listed last because getting the kernel hardening stable first means fewer variables when debugging network issues.
5.1 Deploy nftables default-deny
bash
# Back up current firewall rules before replacing
sudo nft list ruleset > /mnt/external/nftables_pre_hardening_$(date +%Y%m%d).txt
# Install the default-deny ruleset from Section 6.3
# Test in a second terminal before closing the first
# to avoid locking yourself out over SSH
sudo tee /etc/nftables.conf << 'EOF'
# [paste the ruleset from Section 6.3 here]
EOF
# Apply without rebooting first
sudo nft -f /etc/nftables.conf
# Verify connectivity before closing your terminal
ping -c 3 8.8.8.8
curl -s --max-time 5 https://example.com > /dev/null && echo "HTTPS working"
bash
# Enable on boot
sudo systemctl enable nftables
sudo systemctl start nftables
# Review the drop log for unexpected blocks
journalctl -f | grep "nft-.*-drop"
# Add allow rules for anything legitimately blocked
- nftables default-deny active
- All needed connectivity confirmed working
- No legitimate traffic being dropped
- nftables enabled on boot
5.2 Configure encrypted DNS
bash
# Configure systemd-resolved for DoT
sudo tee -a /etc/systemd/resolved.conf << 'EOF'
[Resolve]
DNS=1.1.1.1#cloudflare-dns.com 9.9.9.9#dns.quad9.net
DNSOverTLS=yes
DNSSEC=yes
FallbackDNS=
EOF
sudo systemctl restart systemd-resolved
# Verify DoT is active
resolvectl status | grep -iE "dns over tls|current dns|dnssec"
- DoT active and confirmed
- DNSSEC validation enabled
- No fallback to cleartext resolvers
5.3 Audit listening services
bash
# Full inventory of listening services
ss -tlnpu
# Save as your known-good baseline (tee because /etc/security is root-owned)
ss -tlnpu | sudo tee /etc/security/listener_baseline_$(date +%Y%m%d).txt > /dev/null
# Review every entry — for each listening service ask:
# Does this need to be listening?
# Does it need to be on 0.0.0.0 or can it bind to loopback only?
# Is it covered by an AppArmor profile?
# Services that don't need network access: bind to 127.0.0.1
# Services that don't need to run: disable them
# sudo systemctl disable --now service-name
- All listening services identified and justified
- Unnecessary services disabled
- Baseline saved for future comparison
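During the review, the entries that matter most are wildcard binds. This sketch filters ss output for them; it reads stdin, so the same filter works against both the live system and the saved baseline.

```shell
#!/bin/bash
# Sketch: flag listeners bound to all interfaces. These deserve the
# closest scrutiny in the review above.

flag_wildcard_listeners() {
    # Pure filter: keep lines whose local-address column (field 5 of
    # `ss -tlnpu` output) is a wildcard bind
    awk '$5 ~ /^(0\.0\.0\.0|\*|\[::\]):/ { print }'
}

# Live check:
#   ss -tlnpu | flag_wildcard_listeners
# Against the baseline saved in 5.3 (hypothetical dated filename):
#   flag_wildcard_listeners < /etc/security/listener_baseline_20250101.txt
```

Every line this prints should have a written justification; anything that only serves local clients belongs on 127.0.0.1.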
5.4 Verify WireGuard tunnel configuration
bash
# Confirm WireGuard is running and connected
wg show
# Verify all traffic routes through tunnel
curl https://ifconfig.me
# Should return your VPN endpoint's IP, not your ISP's IP
# Confirm DNS is not leaking
# Use a DNS leak test — dnsleaktest.com or similar
# via curl: curl https://bash.ws/dnsleak/test/$(curl -s https://bash.ws/dnsleak) | python3 -m json.tool
- WireGuard tunnel active and confirmed
- All traffic routing through tunnel
- No DNS leaks confirmed
Final Verification — The Consolidated Health Check
After all phases are complete, run this verification block to confirm the full stack is active:
bash
#!/bin/bash
echo "=== SECURITY STACK VERIFICATION ==="
echo ""
echo "--- Secure Boot ---"
mokutil --sb-state
echo ""
echo "--- TPM2 ---"
ls /dev/tpm* 2>/dev/null || echo "WARNING: No TPM device"
tpm2_pcrread sha256:0,2,4,7 > /dev/null 2>&1 \
&& echo "TPM2 responsive" || echo "WARNING: TPM2 not responding"
echo ""
echo "--- LUKS ---"
cryptsetup luksDump /dev/nvme0n1p3 | grep -E "Version:|PBKDF:|tpm2"
echo ""
echo "--- Kernel Lockdown ---"
cat /sys/kernel/security/lockdown
echo ""
echo "--- Module Signing ---"
cat /proc/sys/kernel/modules_disabled
dmesg | grep -c "module.*sig"   # prints a count (0 if none)
echo ""
echo "--- IOMMU ---"
dmesg | grep -iE "iommu|dmar" | grep -iE "enabled|translated" | head -3
echo ""
echo "--- IMA ---"
dmesg | grep -i "ima" | head -5
wc -l /sys/kernel/security/ima/ascii_runtime_measurements
echo ""
echo "--- LSM Stack ---"
cat /sys/kernel/security/lsm
echo ""
echo "--- Firewall ---"
nft list ruleset | grep "policy" | head -5
echo ""
echo "--- DNS ---"
resolvectl status | grep -iE "dns over tls|current dns server" | head -3
echo ""
echo "--- Listening Services ---"
ss -tlnpu | grep -v "127.0.0.1\|::1"
echo "(above should be minimal — review any unexpected entries)"
echo ""
echo "--- Yama ptrace_scope ---"
cat /proc/sys/kernel/yama/ptrace_scope
Save this script as /usr/local/sbin/security-check and run it after any significant system change — kernel update, BIOS update, new software installation. The output should be stable and known-good between changes. Unexpected changes in the output are the first indicator that something in the security stack has shifted.
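One way to make that drift visible is to keep dated copies of the output and diff the two most recent. A sketch, assuming snapshots live in a hypothetical /var/lib/security-check directory and that the script is installed at the path suggested above.

```shell
#!/bin/bash
# Sketch: snapshot the health-check output and diff the two newest
# snapshots so drift stands out immediately. Paths are assumptions.

snapdir="${SNAPDIR:-/var/lib/security-check}"

latest_two() {
    # Pure helper: the two newest snapshots (dated names sort
    # chronologically when sorted lexicographically)
    ls "$1" 2>/dev/null | sort | tail -2
}

mkdir -p "$snapdir" 2>/dev/null || true
/usr/local/sbin/security-check > "$snapdir/$(date +%Y%m%d-%H%M%S).txt" 2>&1 || true

set -- $(latest_two "$snapdir")
if [ "$#" -eq 2 ]; then
    diff "$snapdir/$1" "$snapdir/$2" && echo "no drift since last snapshot" || true
fi
```

Run it after every kernel update, BIOS update, or significant package change; an empty diff is the known-good signal.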
Conclusion
Security is not a configuration you apply once and forget. It is a way of reasoning about your system — a habit of asking the right questions when things change. When a kernel update lands: will the TPM still unseal, and have I re-sealed to the new PCR state? When a new application gets installed: what does it listen on, what does it write to disk, is it covered by an integrity policy? When you plug into an untrusted network: what namespace is this interface in, and what can reach it from the other side?
The architecture described in this article does not make your machine unbreakable. No architecture does. What it does is force an attacker to climb a steep and well-defended ladder rather than walking through an unlocked door. The automated attacks and opportunistic exploits that represent the overwhelming majority of real-world threats will not make it past the first few rungs. A drive-by browser exploit lands in a sandboxed process with no path to your storage keys. A compromised package cannot load an unsigned kernel module. A stolen drive is inert without the TPM that sealed its keys. Each of these outcomes represents a contained incident rather than a total compromise.
The honest ceiling is worth repeating one final time: on consumer hardware without hardware memory encryption, a kernel-level attacker with a working local exploit can still read process memory. That ceiling is real, it is a hardware limitation, and no amount of configuration closes it on current consumer silicon. The correct response is to make reaching that level the required escalation — two separate, non-trivial exploit capabilities chained together — rather than a consequence of any single vulnerability.
For the penetration tester: your machine should be harder to own than your clients' networks. The stack described here gets you there on hardware you can buy at any computer retailer, with software that is entirely open source, auditable, and maintained by people who care about getting it right.
For the gamer with a high-value machine: the threat model is different in specifics but identical in structure. Minimize what a compromised process can reach. Make escalation require real work. Ensure that the worst realistic outcome is a contained process compromise rather than loss of everything on the drive.
For the privacy-focused builder: the combination of TPM2-sealed storage, kernel integrity enforcement, and IMA/EVM runtime verification means that a remote attacker who achieves code execution on your machine finds themselves in a carefully bounded space — not the run of the house.
Start with Secure Boot re-enabled and a proper backup. Those two steps alone, implemented correctly and completely, put you ahead of the vast majority of self-built Linux machines connected to the internet today. Everything else in this article is the work of building outward from that foundation, one stable layer at a time.
The metal is already capable. Use it.