Sam Altman says words are dangerous. Ronan Farrow says Sam Altman is lying.
As Sam Altman denounces a New Yorker profile as 'incendiary' following an attack on his home, the documented evidence of his 'will to power' remains unrefuted.

On April 10, 2026, at approximately 3:45 AM, an incendiary device—a Molotov cocktail—was thrown at Sam Altman’s Russian Hill residence in San Francisco. According to reports from TechCrunch, the device bounced off the house, resulting in no injuries and minimal damage. Simultaneously, a suspect was arrested at OpenAI’s headquarters after allegedly threatening to set the building on fire, an incident confirmed by official records. These events occurred just four days after The New Yorker published an exhaustive, 18-month investigation by Ronan Farrow and Andrew Marantz titled "Sam Altman May Control Our Future—Can He Be Trusted?" Altman’s response was swift and rhetorically calibrated: he denounced the profile as "incendiary" and suggested that at a time of "great anxiety about AI," such reporting makes him a physical target.
Sam Altman is using the "words as weapons" defense to pivot public discourse away from documented evidence of his systemic deception and toward a framework in which investigative journalism is recast as a physical security threat to AI leaders. By framing 70 pages of internal memos and interviews with more than 100 sources as a form of incitement rather than a record of governance failure, Altman is attempting to establish a precedent where accountability is synonymous with endangerment. This strategic use of trauma seeks to delegitimize source-vetted reporting by conflating factual scrutiny with the criminal acts of unstable individuals.
The Profile and the Molotov Cocktail
The collision of high-stakes investigative journalism and real-world violence has provided Altman with a powerful, if cynical, shield. The New Yorker profile alleges a relentless will to power based on interviews with over 100 sources, including former board members, executives, and mentors. It paints a picture of a leader whose primary skill is not engineering or safety, but the manipulation of narratives to consolidate personal authority.
Altman’s rebuttal did not focus on factual corrections; those are for people with weaker narrative control. Instead, in a personal blog post, he wrote, "Now I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives." By linking the physical attack on his home to the "narrative" provided by Farrow and Marantz, Altman successfully shifted the conversation from his documented history of untruths to his status as a victim. The SFPD arrest of a suspect at OpenAI HQ, as detailed by The Hollywood Reporter, provided the necessary catalyst for this de-escalation plea, allowing Altman to argue that critical reporting has become a public safety hazard.
While physical security risks are undeniably real and the attack on Altman's home is a criminal act, the timing of his rhetorical pivot suggests a strategic attempt to shield himself from the fallout of the New Yorker investigation.
70 Pages of Mistrust
The "words" that Altman finds so dangerous are not merely adjectives; they are receipts. The profile brings to light what it calls "official-record" tier memos, most notably 70 pages of Slack messages and HR documents compiled by Ilya Sutskever in 2023. These documents allege that Altman exhibited a consistent pattern of lying to the board and senior staff to play factions against one another. Sutskever is quoted in The New Yorker as saying, "I don’t think Sam is the guy who should have his finger on the button."

This pattern of behavior is described by colleagues as the Ring of Power dynamic, a totalizing leadership philosophy where the pursuit of controlling AGI becomes the primary driver of organizational behavior, leading to the prioritization of personal authority over shared safety protocols. This dynamic is reportedly what led to the mass exit of researchers in 2020 to form Anthropic, a move prompted by deep-seated concerns over the erosion of institutional guardrails. According to internal notes, Dario and Daniela Amodei left because Altman allegedly bypassed the Merge and Assist clause.
This provision in the OpenAI charter legally binds the company to stop developing its own AGI and instead assist a rival project if that project is closer to achieving safe AGI. Bypassing such a clause suggests that the mission is secondary to who is leading it. This revelation aligns with reports from The Information regarding the internal friction that preceded the Anthropic split, highlighting a long-standing tension between Altman's commercial ambitions and the company's original safety mandate.
Perhaps most damaging is the revelation regarding Altman’s departure from Y Combinator in 2019. While the public narrative was one of a "mutual transition," Farrow and Marantz report that Paul Graham and other partners privately confirmed Altman was removed for "lying to us all the time." This documented history of deception, further corroborated by The Washington Post, suggests that the 2023 board firing—often dismissed by Altman's defenders as a rogue coup—was actually the culmination of a decade-long pattern of behavior.
The 'Words as Weapons' Defense
Altman and his defenders argue that the extreme polarization and anxiety surrounding AGI development mean that highly critical profiles of tech leaders act as dog whistles for unstable individuals, implying a direct causal link between media rhetoric and violence. They argue that in a world where AI is viewed as an existential threat, publishing a 70-page case for a leader's untrustworthiness is akin to painting a target on his back. For the pro-Altman camp, the media has a moral obligation to de-escalate rather than investigate.
However, this "words as weapons" defense fails to address the specific, documented instances of deception attested by Paul Graham and Ilya Sutskever. While physical security risks are real, using a home attack to delegitimize 18 months of source-vetted reporting conflates factual scrutiny with harassment. As TechCrunch notes, factual rebuttals have been conspicuously absent from Altman’s "de-escalation" plea. To accept Altman's framework is to accept that certain individuals are too important, or too targeted, to be investigated by the press, effectively creating an accountability-free zone for the world's most powerful tech leaders.
Governance in the Shadow of an IPO
The stakes of this rhetorical battle are tied to OpenAI’s upcoming IPO, which carries a potential $1 trillion valuation. At that scale, internal safety conflicts and allegations of deception are not just Shakespearean drama, as Altman calls them; they are material risks to investors. Altman's survival of the 2023 board firing, which The Verge described as a sudden leadership transition following a lack of candor, reinforced a governance model where personal power outweighs board checks.
If Altman successfully frames criticism as "incitement," he effectively insulates himself from the very accountability structures meant to restrain the Ring of Power dynamic. The failure of the 2023 board to maintain Altman’s removal set a precedent: the CEO is larger than the mission. In the shadow of the IPO, there is a massive incentive to suppress internal safety conflicts. If the Merge and Assist clause can be bypassed with impunity, and if documented lying is rebranded as an "incendiary narrative," then OpenAI is no longer a mission-driven non-profit hybrid. It is a vehicle for a single leader's will to power.
The Power of Words vs. the Will to Power
Taken together, the New Yorker investigation and Altman’s subsequent response support the claim that the "words as weapons" defense is a strategic pivot. Altman’s characterization of journalism as incendiary is an effective but cynical shield. It allows him to offer a generic apology, "I am a flawed person... trying to get a little better each year," while ignoring the specific "receipts" logged by his peers and mentors over the last decade.
Ultimately, the evidence suggests that the primary danger to OpenAI isn't the power of words, but the will to power that consistently bypasses the safety and governance structures meant to restrain it. If the public and the board accept that investigating a leader's character is a threat to their life, they forfeit the right to demand the candor that OpenAI’s own charter requires. The "flawed person" defense works for a startup founder; it is insufficient for the steward of AGI. The real failure here isn't the Molotov cocktail, but the dismantling of the very guardrails designed to prevent one man from controlling the future of intelligence.