It was a Tuesday afternoon in early spring, and a security researcher had been staring at the same function for three hours. Not because it was complicated. Because it was simple — embarrassingly, almost insultingly simple — and it was running as SYSTEM on every version of Windows shipped in the last fifteen years.
He'd found it the boring way. An MSRC advisory had crossed his feed — a patched privilege escalation in a Windows component he didn't recognize. Instead of moving on, he looked it up. The component dated to NT 3.51. The interface it exposed hadn't been substantially modified since 1996. He pulled the binary, loaded it in IDA with Microsoft's public symbols, and started reading.
The code was, in a word, archaeological. Hand-rolled string operations. No bounds-checked variants anywhere. SAL annotations: absent. The unmistakable fingerprints of software written by people who were very good at what they did, in an era when the threat model looked nothing like today's — when the assumption was that the dangerous person was on the other side of a network connection, not sitting in your own process tree at medium integrity.
Three hours in, he found a path where caller-supplied data flowed into a privileged operation with no meaningful validation of who the caller actually was. The original authors hadn't checked because in 1996, that wasn't the question you asked. The code had survived every audit, every Patch Tuesday, every security initiative for nearly three decades — not because it was secure, but because nobody had looked at it with the right question in mind.
The bounty cleared in the mid five figures.
He isn't exceptional. He's methodical. And the method isn't exotic: when you find a fresh primitive in Windows, you don't go looking in the shiny new code. You go looking in the dark corners where the janitor stopped coming years ago. That's where the bodies are buried. That's where they stay.
This post is a map to those corners — what they are, why they exist, how to find them, and how to think about what you're doing when you're standing in one. It won't make you a seasoned vulnerability researcher overnight. But it might put you in the right room.
Why Windows Is an Archaeological Site
Windows is not a product. It's a stratum. Thirty-plus years of features, subsystems, protocols, and design decisions compressed into a single operating system that is contractually, culturally, and commercially obligated to run software written before some of its current users were born. That obligation — Microsoft's famous backward compatibility religion — is the source of extraordinary business value and extraordinary attack surface in roughly equal measure.
The theology is simple: your 2003 enterprise application will run on Windows 11. Your decade-old printer driver will load. The COM object your accounting department's legacy software depends on will be there, registered, accessible, doing its thing. Microsoft has paid an enormous security price to keep that promise, and they will keep paying it, because the alternative — breaking the ecosystem — is commercially unthinkable.
Linux can break userspace. Linus gets angry letters, distributions scramble for a patch cycle, and life goes on. When the security case is strong enough, dangerous legacy code gets excised. Windows largely cannot make that choice. The code stays. The interfaces stay. The attack surface stays.
The Sedimentary Model
Think of the Windows codebase as geological strata rather than software releases. NT 3.1 at the bottom — 1993, designed when "the network attacker" was essentially the entire threat model. NT 4.0 above it, where the graphics subsystem was pulled into kernel space for performance, a decision that created one of the densest kernel attack surfaces in any production OS. Then Windows 2000, XP, Vista, 7, 8, 10, 11 — each layer adding mitigations, new abstractions, new security features, while the layers beneath remain largely intact.
ASLR arrived in Vista (2007). Stack cookies became standard SDL practice around the same era. DEP, safe exception handlers, integrity levels — all post-2004. Which means anything written before that period was authored without those guardrails, and a significant fraction of it is still running, still reachable, still doing what it was told to do in an era when nobody was asking the questions we're asking now.
The attack surface is not uniformly dense. It has strata. And the deepest strata are the most interesting — old code, old assumptions, old threat models. A researcher who understands this isn't working harder than one who doesn't. They're working in a fundamentally different direction.
What This Post Covers
The focus here is Windows client and server attack surface reachable from a local unprivileged or low-privilege context — the scenario that defines most real-world post-initial-access privilege escalation, and the scenario most directly relevant to bug bounty work against Microsoft's own platforms.
This is not a browser exploitation guide. It's not about third-party driver bugs, which are a vendor problem. It's not about social engineering. It assumes you can read C, you understand the Windows process and privilege model at a functional level, and you've stood up a Windows VM and run Process Monitor without someone holding your hand. This is not zero-to-hero material. It's the conversation that comes after the basics — the one about where to look and how to think before you start pulling on threads.
With that framing established: let's go underground.
Mapping the Terrain
Seven categories. Not an exhaustive taxonomy — a prioritized field guide. For each one: what it is, why it's still there, and what class of problem it reliably produces. Exploitation technique comes later. Right now the goal is pattern recognition — learning to identify which dark corners are worth the flashlight.
COM and DCOM: The Original Sin
Component Object Model dates to 1993. DCOM added network transparency shortly after. Together they became the connective tissue of Windows — the mechanism by which processes expose interfaces to each other, by which shell extensions load, by which UAC elevation dialogs work, by which Windows Update, BITS, Task Scheduler, WMI, and approximately everything else communicates. You cannot remove COM from Windows. COM is Windows at a certain level of abstraction.
The security model layered onto COM was not designed for the threat model we operate in today. DCOM endpoints have launch permissions and access permissions — ACLs that govern who can activate and call a COM server. When those ACLs are permissive, and when the COM server runs as SYSTEM or at high integrity, you have a potential privilege escalation primitive: an unprivileged caller invoking a method on a process that will perform privileged operations on their behalf.
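That triage question can be modeled in a few lines. This is a deliberately simplified sketch, not real Windows security descriptors or real server names: the ACL format, the principal names, and the three servers below are all hypothetical, but the shape of the check — low-privilege principal holds activation rights, server runs elevated — is the pattern you're hunting for with real tooling.

```python
# Toy model of the DCOM triage question: which registered servers can a
# low-privilege caller activate, and which of those run privileged?
# ACL format, rights names, and servers are hypothetical simplifications.

LOW_PRIV = {"Everyone", "INTERACTIVE", "Authenticated Users"}

def interesting(server):
    """A server is a potential LPE primitive when a low-privilege
    principal holds launch rights and the server runs elevated."""
    activatable = any(principal in LOW_PRIV and "LocalLaunch" in rights
                      for principal, rights in server["launch_acl"])
    return activatable and server["run_as"] in ("SYSTEM", "Administrators")

servers = [
    {"name": "LegacyUpdater", "run_as": "SYSTEM",
     "launch_acl": [("Everyone", {"LocalLaunch", "LocalActivation"})]},
    {"name": "UserNotifier", "run_as": "INTERACTIVE",
     "launch_acl": [("Everyone", {"LocalLaunch"})]},
    {"name": "LockedDownSvc", "run_as": "SYSTEM",
     "launch_acl": [("Administrators", {"LocalLaunch"})]},
]

candidates = [s["name"] for s in servers if interesting(s)]
print(candidates)  # ['LegacyUpdater']
```

In practice the enumeration half of this comes from OleViewDotNet; the filtering half is exactly this boolean.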
The Potato family of exploits — Juicy Potato, Rogue Potato, Sweet Potato and its descendants — all live in this neighborhood. They are not exotic. They are the product of systematically enumerating DCOM endpoints, checking their permissions, and asking what happens when you call them in ways their authors didn't anticipate.
The tool for this work is OleViewDotNet, written by James Forshaw. Learn it. It will show you every registered COM server, its launch and access permissions, and whether it runs elevated. The attack surface it exposes is, to put it gently, extensive.
Win32k: When the Graphics Subsystem Moved Into the Kernel
In 1996, NT 4.0 moved the graphical subsystem — GDI, USER, the window manager — from user mode into kernel space. The rationale was performance. The consequence was one of the largest kernel attack surfaces in any production operating system, and it has never been fully reversed.
Win32k.sys exposes hundreds of syscalls to user-mode processes. For years it was the single most bug-dense component in Windows — a reliable producer of kernel-level privilege escalation vulnerabilities. Microsoft has progressively restricted Win32k syscall access from sandboxed contexts (Edge renderer, Chrome renderer) via syscall filtering policies, which is meaningful. But a normal medium-integrity process — which is where you are after most initial access scenarios — retains full access.
The productive research areas here: window message handling in kernel context, GDI object type confusion, handle table manipulation through USER objects. The code is old, it's complex, and the audit surface is effectively inexhaustible for a solo researcher with patience.
The Registry: Global Mutable State as Attack Surface
The Windows Registry is a hierarchical, persistent, globally accessible key-value store that governs essentially all Windows behavior — application settings, COM registration, service configuration, shell extensions, security policy, file type associations. It is, from a security standpoint, a vast landscape of potential privilege escalation primitives waiting to be found by anyone who asks the right question: does any high-privilege process read from a location I can write to, and does it use that value in a security-sensitive way?
The most productive version of this question involves HKCU — the current user hive, always writable — and the conditions under which elevated processes fall back to HKCU lookups. It involves App Paths and shell verb registration, and the fact that HKCU\Software\Classes lets unprivileged users override COM class registrations that elevated processes may invoke. It involves legacy software that loosened HKLM ACLs during installation and never tightened them.
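The lookup-order mechanic behind that HKCU\Software\Classes observation can be sketched as a toy model. This is not the real registry API, and the CLSID and paths are hypothetical; it only illustrates why a per-user hive consulted before the machine hive is an escalation primitive when a privileged process resolves through the merged view:

```python
# Toy model of the merged class-registration view: the per-user hive is
# consulted before the machine hive, so a user-writable entry shadows the
# machine-wide one. CLSID and file paths are hypothetical.

hklm_classes = {"{AAAAAAAA-0000-0000-0000-000000000000}":
                r"C:\Windows\System32\legit.dll"}
hkcu_classes = {}  # always writable by the current user

def resolve_server(clsid):
    # Simplified lookup order: per-user registration wins.
    if clsid in hkcu_classes:
        return hkcu_classes[clsid]
    return hklm_classes.get(clsid)

clsid = "{AAAAAAAA-0000-0000-0000-000000000000}"
assert resolve_server(clsid).endswith("legit.dll")

# An unprivileged writer shadows the registration. If an elevated process
# later resolves the same CLSID through this view, it loads the shadowing DLL.
hkcu_classes[clsid] = r"C:\Users\low\payload.dll"
assert resolve_server(clsid).endswith("payload.dll")
```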
The tooling is unglamorous but effective: Process Monitor filtered on registry operations for your target process, AccessChk for permission enumeration, PowerShell for systematic ACL auditing. The work is methodical. So are the results.
Named Pipes and ALPC: The Impersonation Surface
Named pipes and ALPC (Advanced Local Procedure Call) are how Windows processes talk to each other locally. Many critical system services expose these interfaces: LSASS, the Print Spooler, and dozens of others. The security implication is straightforward: if a privileged process connects to a pipe you control, you can call ImpersonateNamedPipeClient() and borrow its token.
The entire discipline of coercing privileged services into connecting to attacker-controlled pipes is one of the most productive areas of Windows LPE research of the last decade. The Potato exploits live here too — the specific mechanism varies but the primitive is consistent: find a service you can trick into authenticating to you, impersonate the result.
ALPC endpoints are less commonly audited than named pipes and correspondingly more interesting. Many Windows RPC interfaces are exposed over ncalrpc — local RPC over ALPC — and the enumeration tooling (NtObjectManager, also Forshaw) is less well-known than Process Monitor. Gaps in research coverage are opportunities.
The Print Spooler: A Case Study
The Print Spooler deserves specific attention because it is the single best recent illustration of this post's central argument.
The Spooler is ancient code. It runs as SYSTEM. It exposes a large, old RPC interface — MS-RPRN — that was historically network-accessible. PrintNightmare (CVE-2021-1675 and CVE-2021-34527) and its follow-on variants weren't found by researchers doing exotic things. They were found by people who read the Microsoft protocol specification for MS-RPRN — publicly available, free, essentially reverse-engineered interface documentation provided by Microsoft themselves — enumerated the RPC methods it exposed, and asked what happened when you called them with attacker-controlled arguments.
The code paths involved had existed for over twenty years. They hadn't been seriously audited in an adversarial context. When the first variant was patched, researchers immediately found others in the surrounding code — because the patch fixed the specific reported path, not the underlying class of issue.
This is the pattern. Read the spec. Enumerate the interface. Call the old methods. Look at what the patch didn't touch.
Legacy Protocol Handlers: The Shell Integration Attack Surface
URI scheme handlers registered in Windows — ms-officecmd://, search-ms://, and a long list of others — are shell integration points designed before the browser was a meaningful threat surface. They allow registered applications to be invoked by crafted URIs, including from web contexts.
The attack pattern is consistent: a web page crafts a link to a registered custom URI scheme; the browser passes the URI to the registered Windows handler; the handler, being ancient code, performs a network lookup (NTLM hash capture via UNC path), executes a local binary, or passes attacker-controlled data to a legacy parser that was never designed to handle hostile input.
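A minimal sketch of the UNC-path variant of that pattern, assuming a hypothetical scheme and parameter name (no real handler is modeled here): a handler that extracts a path-like parameter and dereferences it will follow a UNC path to an attacker's host, triggering an implicit outbound SMB authentication.

```python
# Sketch of the legacy-handler failure mode: a path parameter pulled from
# a crafted URI is dereferenced without asking whether it points at the
# network. "legacy-scheme" and the "file" parameter are hypothetical.
from urllib.parse import urlsplit, parse_qs

def handler_would_touch_network(uri):
    """Return True if the 'file' parameter is a UNC path that a naive
    handler would open, causing an outbound SMB connection."""
    query = urlsplit(uri).query
    target = parse_qs(query).get("file", [""])[0]
    return target.startswith("\\\\")

benign = "legacy-scheme://open?file=C:\\docs\\report.txt"
hostile = "legacy-scheme://open?file=\\\\attacker.example\\share\\x"

print(handler_would_touch_network(benign))   # False
print(handler_would_touch_network(hostile))  # True
```

The real finding, of course, is locating a registered handler that behaves like the naive version and never asks this question.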
The search-ms:// and ms-officecmd:// vectors that circulated in 2022-2023 were not new classes of vulnerability. They were new instances of a class that has been known since the late 1990s. That should tell you something about the rate of clearance in this particular corner.
Night Hunting
Knowing the terrain is necessary. It isn't sufficient. The researchers who consistently find things have internalized a methodology — a set of habits that turn "I'm looking at old Windows code" into "I have a reproducible proof of concept." Here's the skeleton of that methodology.
Read the Advisories Backward
The MSRC advisory archive is the most underutilized research resource in Windows bug bounty. Every published CVE touching a Windows component is a signal: this component was under adversarial scrutiny, a vulnerability was found, and the surrounding code almost certainly wasn't exhaustively audited in the same pass.
The workflow is straightforward. Find a component you want to target. Pull every MSRC advisory that mentions it. Read what was patched — not just the summary, but the actual binary diff if you can get it (BinDiff and Diaphora are your tools here). Identify the code pattern that was vulnerable. Then audit the surrounding code for the same pattern, for related patterns, and for conditions the patch addressed incompletely.
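The first pass of that workflow — deciding which component's advisory history to mine — is mechanical enough to script. The records below are fabricated stand-ins, not real MSRC data; the point is only the triage step of counting hits per component:

```python
# Sketch of the "read the advisories backward" triage step: count
# vulnerabilities per component to find surfaces under active adversarial
# scrutiny. The CVE identifiers and records here are fabricated stand-ins.
from collections import Counter

advisories = [
    {"cve": "CVE-0000-0001", "component": "Print Spooler"},
    {"cve": "CVE-0000-0002", "component": "Print Spooler"},
    {"cve": "CVE-0000-0003", "component": "Win32k"},
    {"cve": "CVE-0000-0004", "component": "Print Spooler"},
    {"cve": "CVE-0000-0005", "component": "CLFS"},
]

by_component = Counter(a["component"] for a in advisories)

# Components with repeated hits are where variant hunting pays off:
# each patch there marks code that was adversarially interesting.
hot = [c for c, n in by_component.most_common() if n >= 2]
print(hot)  # ['Print Spooler']
```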
This last point deserves emphasis: a striking share of new Windows vulnerabilities are variants of previously patched issues. The patch was correct but narrow — it fixed the reported path, not the underlying class. The Print Spooler saga played this out publicly across eighteen months. It wasn't unusual. It's the norm.
Enumerate Before You Audit
Static analysis is expensive. Enumeration is cheap. Before you open IDA, understand what you're actually looking at — the full shape of the attack surface your target exposes.
For COM targets: OleViewDotNet, enumerate all elevated servers, check every ACL. For RPC targets: RpcView or NtObjectManager, map the interface, count the methods, cross-reference against the Microsoft protocol specification if one exists. For registry targets: Process Monitor filtered on your target process's registry operations, AccessChk for ACL enumeration on any interesting keys. For pipe targets: PipeList, then Process Monitor filtered on pipe creation events.
The goal at this stage is triage, not exploitation. You're building a map of entry points. Most of them will be dead ends. A small number will be worth the static analysis investment. Enumeration tells you which are which before you've spent a week in a disassembler.
Read the Code Like an Anthropologist
When you do open the disassembler, bring the right question. Not "is there a buffer overflow here" — at least not first. The first question is: who wrote this, when, and what were they worried about?
Old Windows code has recognizable fingerprints. Hand-rolled string operations rather than bounds-checked variants. Manual integer arithmetic for buffer sizing. Absent SAL annotations. No use of modern safe APIs. These aren't bugs — they're indicators that you're standing in old territory, written before the SDL, written by people whose threat model didn't include a local attacker with medium-integrity access and the patience to enumerate DCOM endpoints on a Tuesday afternoon.
Once you've established that you're in old territory, the productive question becomes: where does this code make decisions based on caller-supplied data, and does it validate who the caller is before doing so? The specific vulnerability class — type confusion, integer overflow, missing access check, path traversal — matters less at the reconnaissance stage than finding the places where the old code and the modern threat model diverge. That divergence is where the bugs live.
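The divergence is easier to recognize with a toy rendering in front of you. Everything below is a simplified model — the request object, the caller identities, and the "privileged operation" are hypothetical, not Windows APIs — but it captures the exact shape of the audit question: does the code ask who the caller is before acting on what the caller supplied?

```python
# Toy rendering of the pattern to hunt for: a privileged service method
# that acts on caller-supplied data without checking caller identity.
# Identities, paths, and the delete operation are all hypothetical.

class Request:
    def __init__(self, caller, path):
        self.caller = caller  # identity of the connecting client
        self.path = path      # caller-controlled argument

def handle_1996(req, privileged_delete):
    # The 1996 assumption: anyone who can reach this interface is trusted.
    privileged_delete(req.path)  # no check on req.caller at all
    return "ok"

def handle_modern(req, privileged_delete):
    # The question the original authors weren't asking.
    if req.caller != "Administrators":
        return "access denied"
    privileged_delete(req.path)
    return "ok"

deleted = []
delete = deleted.append

assert handle_1996(Request("low-priv user", r"C:\target\file"), delete) == "ok"
assert handle_modern(Request("low-priv user", r"C:\anything"), delete) == "access denied"
print(deleted)  # the privileged operation ran once, for the unchecked path
```

In a disassembler the vulnerable variant looks like a call chain from an RPC or pipe entry point down to a privileged file, registry, or token operation with no impersonation or access check anywhere along the path.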
Fuzzing: Informed, Not Blind
Blind fuzzing on complex Windows interfaces is largely a waste of time. The interfaces are stateful, they validate input formats before reaching interesting code, and a lot of the low-hanging memory corruption in them was found years ago by people running WinAFL against well-understood input parsers.
Informed fuzzing is different. You've read the spec or the disassembly, you understand the valid protocol space, you've identified the interesting code paths manually — now you build a thin harness that exercises those paths specifically with mutated inputs. WinAFL for file or network input targets. NtObjectManager-based PowerShell harnesses for RPC method fuzzing. WinDbg with Time Travel Debugging for reproducing and analyzing crashes. The investment in understanding the target first is what makes the fuzzing produce results rather than noise.
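The minimal shape of such a harness can be sketched generically. The parser below is a stand-in — with a real target you would call into the code path you identified in the disassembly — but the loop structure (valid seeds, small mutations, exceptions recorded as findings) is the whole idea:

```python
# Minimal informed-fuzzing harness shape: start from known-valid inputs,
# mutate a few bytes, drive one pre-identified parsing path, and record
# inputs that raise. The parser is a hypothetical stand-in for a target.
import random

def target_parse(data: bytes):
    # Stand-in parser: a length-prefixed record with a sanity check.
    if len(data) < 2:
        raise ValueError("short input")
    length = data[0]
    if length > len(data) - 1:
        raise ValueError("length overruns buffer")  # the fault we hope to hit
    return data[1:1 + length]

def mutate(seed: bytes, rng: random.Random) -> bytes:
    buf = bytearray(seed)
    for _ in range(rng.randint(1, 3)):
        buf[rng.randrange(len(buf))] = rng.randrange(256)
    return bytes(buf)

def fuzz(seeds, iterations=2000, rng_seed=1234):
    rng = random.Random(rng_seed)  # deterministic for reproducibility
    findings = []
    for _ in range(iterations):
        case = mutate(rng.choice(seeds), rng)
        try:
            target_parse(case)
        except ValueError:
            findings.append(case)
    return findings

findings = fuzz([bytes([4, 65, 66, 67, 68])])
print(len(findings) > 0)  # mutated length bytes trip the bounds check
```

The knowledge investment shows up in two places: the seed corpus (valid protocol messages, not random bytes) and the choice of which entry point the harness drives.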
The Underlying Discipline
All of this reduces to a single cognitive habit: ask the question the original authors weren't asking. They were competent engineers working within the threat model of their time. That threat model is no longer accurate. The gap between what they assumed and what is actually true in a modern Windows deployment is where the attack surface lives. Your job is to find the places where that gap is widest.
The MSRC Bug Bounty Process — Practical Notes
A few things the tutorials don't tell you.
Submit early. An incomplete report with clear vulnerability evidence submitted before a patch is worth more than a polished exploit submitted after someone else got there first. Collision — independently discovering a bug that's already been reported — is common on high-traffic attack surfaces, and MSRC generally doesn't pay for duplicates. If you have the primitive and credible evidence of impact, file it. Polish the write-up while you wait for triage response.
Understand the severity calculus before you start. The gap between Important and Critical on MSRC's scale isn't semantic — it's financial. Critical requires no user interaction and no special configuration. Many legacy-surface LPE bugs land as Important, which is still meaningful but commands a lower payout. Know where your bug is likely to land before you've spent three weeks weaponizing it.
Prove the impact. "This function looks vulnerable" is not a submission. MSRC needs a reproducible proof of concept that crosses a privilege boundary or achieves the claimed impact on a current, fully-patched system. Version specificity matters — test on the latest release. If you can demonstrate impact across multiple Windows versions, say so explicitly.
And if a bug you found gets patched and you're listed in the acknowledgments but missed the bounty window — go read the advisory carefully. Then audit the surrounding code. The variant is often still there.
Getting There From Here
Two books. Windows Internals (Russinovich et al., 7th edition) — the structural foundation without which nothing else makes sense. Windows Security Internals (Forshaw, No Starch Press) — the adversarial lens applied to that foundation. Read them in that order.
After that, the Project Zero blog is a free graduate education. Every Forshaw post on Windows internals is mandatory. Don't skim them. Work through the code.
For practical orientation: stand up a clean Windows 11 VM, install OleViewDotNet, and enumerate every elevated DCOM server on the system. For each one: check its launch and access permissions, look up its ProgID, search it in the MSRC advisory archive, and read any related research. Don't rush it. This exercise alone — done thoroughly — will take twenty or more hours and teach you more about the real shape of the Windows attack surface than any structured course.
The path from that exercise to a filed bounty is measured in months for most people, and in years for the kind of depth that produces consistent results. That's not discouraging — it's just accurate. The researchers who do this well aren't smarter than you. They've simply spent more hours asking the right questions in the right rooms.
The Janitor Isn't Coming
The attack surface documented in this post isn't shrinking. The backward compatibility religion ensures that legacy code paths will be maintained, patched individually when specific bugs surface, but never fundamentally rearchitected. New features accumulate above the old strata. The old strata remain.
For researchers willing to develop the patience of the archaeologist — reading old code, reconstructing old threat models, tracing call chains from modern interfaces down into 1996-era implementations — Windows remains one of the most target-rich environments in existence. The reading list is long. The tools take time to master. The work is often unglamorous.
But somewhere in a code path that hasn't been seriously reviewed since the Clinton administration, running as SYSTEM on a fully patched Windows 11 machine, there may be something that nobody has looked at with the right question in mind.
The janitor isn't coming.