On April 7, 2026, The New Yorker published what will likely be remembered as one of the most consequential pieces of investigative journalism in the history of the technology industry. Three days later, at approximately 4 A.M., a 20-year-old man walked up to Sam Altman's home in San Francisco's North Beach neighborhood and threw an incendiary device at the exterior gate. The word Altman later chose to describe the article, in his own written response, was "incendiary."
That is not a coincidence worth ignoring. It is also not the most important thing in this story. The most important thing is what was in the article, and what it means for anyone who lives in a world increasingly shaped by the technology that Altman's company is building.
The Investigation
The piece was reported and written by Ronan Farrow and Andrew Marantz for The New Yorker. Farrow is a Pulitzer Prize-winning journalist who broke the Harvey Weinstein story — the investigation that sent one of Hollywood's most powerful men to prison. He does not publish sloppy work. His pieces are heavily sourced, document-driven, and built to withstand scrutiny. What he and Marantz constructed here is exceptional even by those standards: over one hundred interviews, drawing on former executives, engineers, board members, investors, and current insiders, anchored by two sets of primary source documents that had never been made public before.
The first is a roughly 70-page internal memo compiled by Ilya Sutskever, OpenAI's former chief scientist and one of the most consequential AI researchers of his generation, a man who helped produce some of the foundational breakthroughs of modern deep learning before joining OpenAI. The second is more than 200 pages of private personal notes kept by Dario Amodei, OpenAI's former VP of Research, who left in 2021 to found Anthropic.
Neither document was ever meant to see the light of day. Both paint the same picture.
The Ilya Memos
Sutskever spent weeks in the fall of 2023 compiling a dossier on his own CEO. He gathered Slack messages, pulled HR documents, wrote analyses — and he did it on his personal phone, not his work devices, reportedly because he was afraid the company was monitoring his work hardware and would discover what he was doing. He sent the finished 70-page document to three board members as messages set to auto-delete.
This is not the behavior of someone having a petty disagreement with their manager. This is the behavior of someone who believed they were taking a genuine personal risk in documenting what they had observed.
The memo opens with a heading, "Sam exhibits a consistent pattern of," and the first item on the list is a single word: "lying." Not "miscommunication." Not "optimistic projections." Lying. The document goes on to allege specific instances: that Altman misrepresented facts to executives and board members, and that he deceived colleagues about internal safety protocols.
One example from the memo is particularly difficult to dismiss. When GPT-4 was being prepared for release, the model required a formal safety approval process: a sign-off mechanism that exists specifically so that powerful AI systems are not deployed into the world without documented evaluation. According to the investigation, Altman told OpenAI's then-Chief Technology Officer, Mira Murati, that the safety approvals had been handled through the company's general counsel. Murati went directly to the general counsel to confirm. The general counsel's response, as reported by The New Yorker, was that he was "confused where Sam got that impression."
Not: there was a misunderstanding. Not: the process was complicated. He was "confused where Sam got that impression." The safety approval for one of the most powerful AI models ever built at the time had been misrepresented to the person responsible for overseeing it. If that account is accurate, the entire safety architecture collapses at the point of trust, because the architecture only functions if the information flowing through it is honest.
According to the investigation, the document became legendary in certain circles, referred to simply as "the Ilya memos."
The Dario Notes
Dario Amodei's 200-plus pages of private notes tell a more specific story about a single confrontation, and it is, arguably, the most damning episode in the entire investigation.
In 2019, OpenAI was negotiating a billion-dollar investment deal with Microsoft, a deal that would transform the organization from a scrappy nonprofit AI lab into a heavily capitalized company with real corporate structure. Before the deal closed, Amodei had fought hard to include what he called a "merge and assist" clause in OpenAI's founding charter. The logic behind it was simple: if OpenAI's mission was genuinely to benefit humanity, then it should be obligated to stop competing with, and instead help, any other organization that came closer to safely building AGI first. You don't get credit for the mission while also playing corporate hardball when the stakes are the future of intelligence itself.
This was Amodei's non-negotiable condition. His number one safety demand going into the Microsoft deal.
The deal closed. The ink dried. Amodei read through the final contract and found a clause that had not been in the version he agreed to — a provision giving Microsoft effective veto power over any merger or acquisition involving OpenAI. In practice, this meant that if OpenAI ever tried to invoke the merge and assist provision — ever tried to stop racing a safer competitor and help them instead — Microsoft could block it. The clause Amodei had spent political capital fighting for had been quietly made unenforceable without his knowledge.
He went to Altman and confronted him directly, contract in hand, reading the specific clause aloud. According to Amodei's private notes, Altman denied the clause existed — while Amodei was reading it to him from the signed document, word for word.
Amodei's private conclusion, written in those notes and never meant to be published: "80% of the charter was just betrayed." His final assessment, after years of watching the situation unfold: "The problem with OpenAI is Sam himself."
The Pattern That Predates All of This
What makes the New Yorker investigation structurally different from a conventional CEO-misbehaves story is that it documents a pattern that does not begin at OpenAI. It follows Altman.
Before OpenAI, Altman ran Y Combinator from 2014 to 2019 — the most prestigious startup accelerator in Silicon Valley. According to the investigation, by 2018, multiple YC partners had independently brought complaints about Altman to founder Paul Graham. Graham's reported conclusion, shared with YC colleagues, was that Altman had been lying to them the whole time. Altman has said publicly, including in sworn depositions, that he chose to leave YC voluntarily. Multiple YC partners and founders told The New Yorker that account is not accurate — that he was effectively pushed out, and that the clean public narrative was itself part of the pattern.
Before Y Combinator, there was Loopt — Altman's first startup, a location-sharing app. Employees there asked the board to remove him as CEO over concerns about his transparency and truthfulness.
Three organizations. Three eras. Three completely separate sets of colleagues with no reason to coordinate. The same complaint.
Aaron Swartz, the legendary programmer and internet activist who co-created RSS at age 14, helped build Reddit, championed open access to academic research, and was one of the most genuinely respected figures in tech history before his death in 2013, was in the same Y Combinator cohort as Altman in 2005. According to the investigation, Swartz warned friends about Altman even then, saying: "You need to understand that Sam can never be trusted. He is a sociopath. He would do anything."
That was more than twenty years ago, before OpenAI existed, before ChatGPT, before any of the billions of dollars or the geopolitical influence. A peer who knew him at the start of his career, who had no institutional axe to grind, and who is now dead and cannot be accused of having a motive.
A Microsoft executive, someone with a direct financial incentive to remain charitable toward Altman, told The New Yorker that there is a small but real chance Altman is eventually remembered as a scammer on the level of Bernie Madoff or Sam Bankman-Fried. Both are convicted felons. An executive on the investor side voluntarily put Altman's name in a sentence with those two men.
November 2023 and the Board That Tried
In November 2023, OpenAI's board of directors fired Sam Altman. The announcement was four sentences long. It said he had been "not consistently candid in his communications with the board."
What The New Yorker documents, with the memos as its foundation, is that the firing was the culmination of exactly what Sutskever had compiled: the board had been presented with 70 pages of documented concerns plus the accumulated observations of multiple senior people, and a group of board members had concluded that a man whose role entrusted him with the future of humanity could not himself be trusted.
What happened next was extraordinary. Altman, according to the investigation, immediately set up a war room at his San Francisco home. He hired a crisis communications adviser. He ran a coordinated campaign that applied maximum pressure from every direction: leveraging aligned investors, working media contacts, and driving an employee letter. Within five days, he was back in his role. The board members who had voted to fire him lost their seats and were replaced with Altman allies, including the economist Larry Summers and Bret Taylor, who became chairman. The formal investigation into the allegations that had precipitated the firing produced no written report. Its conclusions were never disclosed.
An investigation into serious allegations about the CEO of the company that has positioned itself as the responsible steward of the most consequential technology in human history produced no written report. Nothing documented. Nothing disclosed.
What Remained After
Following the reinstatement, The New Yorker documents a systematic dismantling of structural safeguards. The Superalignment team, a dedicated research group whose entire function was the long-term safety problem of ensuring that powerful AI systems do not go catastrophically wrong, had been promised 20% of the company's computing resources. In practice, according to sources in the investigation, it received a small fraction of that and worked on outdated hardware. The team has since been largely disbanded, and several of its most senior members have left. The merge and assist clause that Amodei fought for remains functionally neutralized. OpenAI converted from nonprofit to for-profit, eliminating the foundational structural design that was supposed to prevent the profit motive from overriding the safety mission. Current insiders describe the charter as something that no longer guides the company's behavior.
Hours after the New Yorker piece published, OpenAI announced a new safety fellowship program.
The Response That Said Nothing
In his public response to the investigation and the attack, Altman acknowledged being "conflict averse" and said that tendency had caused pain. He said he could identify things he was proud of and "a bunch of mistakes." He described himself as a flawed person at the center of an exceptionally complex situation. He compared the dynamics of AI development to The Lord of the Rings, invoking a ring of power that makes people do crazy things, and proposed that power be shared broadly so that no single person holds the ring.
He did not address the Ilya memos. He did not address the Dario confrontation. He did not explain the GPT-4 safety approval misrepresentation. He did not address Paul Graham's reported conclusion about his time at Y Combinator. He did not address the Microsoft executive's Madoff comparison.
He described The New Yorker piece as "incendiary."
Why It Matters
OpenAI was not built like a normal company. It was structured deliberately, by design, to answer a specific question: how do you responsibly develop what may be the most dangerous technology in human history? The answer they gave was: you build it as a nonprofit, you put mission above profit, you create a board with real authority, you build in safeguards, and you take safety seriously as a structural commitment, not a marketing position.
Every regulatory framework being proposed for AI, every international agreement being discussed, every policy conversation about governing this technology, operates on an assumption — that there are people in the industry who actually mean what they say about safety. That some of them can be held to their stated commitments.
If the person with more influence over AI's trajectory than almost anyone else on the planet has a documented pattern, across three organizations and two decades, of saying one thing and doing another — documented by the people who were closest to him, in private notes never meant to be published, compiled at personal professional risk — that is not a leadership scandal at one company. That is a question about whether responsible AI development is even a real category, or whether it has always been, at the most powerful institution in the space, the story we told ourselves to feel safe.
That question belongs to everyone who uses these tools, who works in industries being reshaped by them, who has children growing up with them. You don't have to work in tech for the character of the people making these decisions to matter.
Sources
Primary Investigation
- Ronan Farrow and Andrew Marantz, The New Yorker, published April 7, 2026. The full piece is available at newyorker.com and is worth reading in its entirety.
Primary Source Documents Referenced in the Investigation
- The Ilya Memos: An approximately 70-page internal dossier compiled by Ilya Sutskever, OpenAI's former Chief Scientist, sent to three board members as disappearing messages in fall 2023.
- The Dario Notes: Over 200 pages of private personal notes kept by Dario Amodei, OpenAI's former VP of Research, documenting his observations during his tenure at OpenAI (2016–2021).
OpenAI Founding Documents
- The OpenAI Charter, the organization's foundational governing document, referenced throughout the investigation regarding the merge and assist provision and subsequent departures from its stated commitments.
Related Reading
- Karen Hao, Empire of AI, cited in the video commentary as having covered significant ground on Altman's behavior prior to The New Yorker investigation.
Key Figures Named in the Investigation
- Sam Altman, CEO, OpenAI
- Ilya Sutskever, former Chief Scientist, OpenAI; co-founder, Safe Superintelligence Inc.
- Dario Amodei, former VP of Research, OpenAI; CEO and co-founder, Anthropic
- Mira Murati, former Chief Technology Officer, OpenAI
- Paul Graham, founder, Y Combinator
- Aaron Swartz (1986–2013), programmer, activist, and early internet architect
- Larry Summers, economist; OpenAI board member (post-reinstatement)
- Bret Taylor, OpenAI board chairman (post-reinstatement)
The Attack
- San Francisco Police Department responded on April 10, 2026, to an incendiary device thrown at Altman's North Beach residence and a subsequent threat at OpenAI's San Francisco headquarters. The FBI confirmed awareness and coordination with SFPD.
Sam Altman's Response
- Published on Altman's personal blog following the attack. No direct URL provided in source material.
Violence is not accountability. Throwing an incendiary device at someone's home is a crime regardless of one's views on the person inside it. The arrest in this case was appropriate. None of the above reporting changes that assessment.
Jonathan Brown for Border Cyber Group