A 10-person Iranian team used AI to make Lego propaganda. Millions of Americans tuned in.
How a small Iranian media team used AI-generated Lego videos to spread state-backed geopolitical propaganda and successfully out-message the White House.
Traditional state propaganda usually involves heavily curated broadcast networks, sterile press releases, and multi-million dollar budgets. It is easy to identify, easy to mock, and incredibly easy for foreign audiences to ignore. But in the early months of 2026, the information war shifted away from mahogany news desks and toward something altogether more ridiculous: AI-generated plastic toy bricks.
A small operation known as Explosive Media—an Iranian media team of fewer than 10 people responsible for producing viral AI-generated Lego-style propaganda videos—managed to out-message traditional geopolitical operations. By dropping the viral "TACO" video on American audiences, they demonstrated a grim new reality for digital literacy. The integration of accessible AI generation tools and universally recognized aesthetics like Lego has allowed state-backed actors to bypass traditional media literacy defenses and achieve broader reach than conventional geopolitical propaganda.
1. Incident Summary: The TACO Protocol
In April 2026, amid heightened military tensions, a video materialized across Western social media feeds. The short, animated clip featured a Lego-style rendition of Donald Trump holding a white flag and eating a taco. The imagery was a crude reference to the acronym "Trump always chickens out" (TACO), a narrative pushed immediately following a real-world presidential announcement regarding troop deployments. The video subsequently garnered millions of views across X and Telegram, successfully seeding a specific political narrative into the digital mainstream (WIRED).
To understand the absurdity of this moment, you have to consider the traditional mechanics of psychological operations. Historically, planting a foreign narrative in the American media ecosystem required complex intelligence networks, front companies, and carefully laundered talking points. Today, it appears to require a mid-tier graphics card and a prompt asking an image generator to render geopolitical capitulation in the style of Danish plastic building blocks. The video did not try to look real; it tried to look native to the internet.
This was not an isolated incident of organic internet humor. It was a calculated drop by Explosive Media. The video featured an original English-language rap track (WIRED), leaning heavily into established conspiracy tropes. These included visual references depicting Trump launching military strikes specifically to distract from his association with the "Epstein Files" (Yahoo News).
The choice of visual medium was deliberate and highly calculated. The creators specifically chose Lego-style graphics because they view it as a "world language," a universally understood aesthetic that smooths over the friction of cross-cultural communication (Yahoo News). It disarms the viewer. When you see a plastic toy figure bobbing to a rap beat, your critical defenses are lowered. You are not evaluating a state broadcast; you are consuming a meme.
The strategy worked precisely as intended. It successfully pushed a pro-Iran narrative directly to American audiences under the guise of an animated short (WIRED). This approach demonstrates a keen understanding of algorithmic prioritization. Social media platforms reward engagement, and nothing drives engagement like familiar aesthetics wrapped around highly polarized political figures.
2. Timeline: The Evolution from Organic Shitposting to State Sponsorship
The timeline of Explosive Media's output tracks directly with the escalation of the 2026 conflict. Since the war began in February 2026, the small outfit has published over a dozen viral videos, all AI-generated Lego-inspired content mocking the United States and its leadership (WIRED). These were not sporadic drops. They were sequenced, topical, and clearly designed to capitalize on specific news cycles.
In the early weeks, the content focused mostly on generic anti-Western sentiment. The videos featured clumsy rhymes and slight aesthetic inconsistencies that are the hallmark of early AI generation. But as the conflict intensified, so did the production value. By March, the videos featured complex multi-character scenes, synced audio tracks, and highly specific references to American domestic politics. The rapid improvement in quality suggests a significant infusion of resources and dedicated computing power.
Initially, the origin of these videos was safely shrouded in the standard obfuscation of internet meme pages. They were posted from anonymous accounts, amplified by bot networks, and quickly absorbed into the broader ecosystem of political shitposting. For a solid two months, they enjoyed the ultimate digital camouflage: appearing as just another weird corner of the internet.
But the facade cracked in April 2026 during an interview on the BBC's Top Comment podcast. A representative for the group, operating under the pseudonym "Mr Explosive," explicitly contradicted earlier claims of grassroots independence. During the broadcast, he admitted that the Iranian government is a "customer" of their media outlet (Yahoo News).
This admission formally exposed the direct financial and strategic link between the viral AI meme campaigns and Iranian state direction. It transformed what looked like organic internet culture into documented state propaganda, albeit dressed in primary colors and blocky plastic. It also highlighted a structural weakness in open-source intelligence analysis: the tendency to assume that amateur aesthetics indicate amateur origins. Sometimes, the low-fi look is the entire point.
3. Root Cause Analysis: The Mechanics and Economics of Slopaganda
To understand why this approach works, we have to look at the mechanics of what is increasingly being categorized as slopaganda. Slopaganda is a portmanteau of "slop" (low-quality, mass-produced AI content) and "propaganda," used to describe AI-generated content designed for political messaging and disinformation. Experts note the term may understate how powerful and sophisticated these campaigns can be.
The fundamental root cause of this incident's success is the democratization of AI video generation combined with a deliberate strategy to crowdsource cultural context. Explosive Media is a tiny operation, yet it manages to out-meme defense departments. They achieve this not through sophisticated intelligence gathering, but by letting American audiences do the localization work for them.
"We’ve committed ourselves to learning more every day about American people and culture," one Explosive Media team member noted. "In this process, Americans themselves have been helping us—and that support and guidance continues" (WIRED). This crowdsourced localization is the true innovation here. Instead of trying to guess what will go viral in the West, they monitor American social media, identify trending grievances, and feed those exact grievances into an AI generator.
The economics of slopaganda represent a drastic departure from previous disinformation efforts. A traditional troll farm requires hundreds of salaried employees working in shifts to manually generate content and manage fake personas. Slopaganda requires only a handful of skilled prompt engineers and enough compute power to iterate rapidly. This asymmetric cost structure allows small actors to flood the zone with content until something inevitably hits the algorithmic jackpot.
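The asymmetry described above can be sketched with a toy cost model. Every figure below is a hypothetical assumption chosen for illustration, not a number reported anywhere in this piece:

```python
# Hypothetical cost comparison between a traditional troll farm and a
# small AI-generation outfit. All figures are illustrative assumptions.

def monthly_cost(staff: int, salary: float, infra: float) -> float:
    """Total monthly cost: payroll plus infrastructure."""
    return staff * salary + infra

# Assumed parameters (purely illustrative):
troll_farm = monthly_cost(staff=300, salary=1_000, infra=20_000)  # shift workers, office space
slop_team = monthly_cost(staff=8, salary=2_000, infra=5_000)      # prompt engineers, GPU rental

ratio = troll_farm / slop_team
print(f"Troll farm: ${troll_farm:,.0f}/mo, slop team: ${slop_team:,.0f}/mo")
print(f"Cost asymmetry: ~{ratio:.0f}x cheaper")  # roughly 15x under these assumptions
```

Even with generous error bars on the assumed salaries and infrastructure costs, an order-of-magnitude gap survives, which is the point: the small actor can iterate until something hits.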
Currently, no official response from AI companies is available, as Explosive Media refused to reveal which specific generation tools were utilized to create the videos, and evidence identifying the exact models remains limited (WIRED). This is the reality of the open-source intelligence war. Adversaries are no longer guessing what resonates; they are simply feeding the internet's own neuroses back to it, reliably wrapped in the comforting nostalgia of childhood toys.
4. Bypassing the Firewall: Content Moderation's Blind Spot
The fallout from this campaign highlights a significant moderation blind spot for Western platforms. Because the content is packaged as satire and utilizes toy aesthetics, it reliably bypasses both automated safety filters and the natural skepticism most users apply to foreign news sources. When content moderation algorithms scan for geopolitical disinformation, they are largely trained to look for deepfaked news anchors, doctored official documents, or coordinated bot text patterns. They are not historically trained to flag plastic toy figures rapping about foreign policy.
The amplification networks are highly robust. Explosive Media confidently claims to have over 2.5 million followers on various Iranian messaging channels (WIRED). From these localized hubs, the videos are injected into the Western internet, shared widely across TikTok, X, and Instagram. The initial push provides the necessary engagement velocity to trick recommendation algorithms into treating the video as trending organic content.
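The "engagement velocity" mechanism can be illustrated with a simplified sliding-window heuristic of the kind recommendation systems are often described as using. The window size and threshold here are invented for illustration; real platform internals are not public:

```python
# Sketch of a simplified engagement-velocity heuristic. The window and
# threshold values are hypothetical; actual platform logic is proprietary.

from collections import deque


class VelocityTracker:
    """Flags a post as 'trending' when engagements in a sliding window exceed a threshold."""

    def __init__(self, window_seconds: int = 3600, threshold: int = 1000):
        self.window = window_seconds
        self.threshold = threshold
        self.events = deque()  # timestamps of engagement events

    def record(self, timestamp: float) -> None:
        self.events.append(timestamp)
        # Drop events that have aged out of the sliding window.
        while self.events and self.events[0] < timestamp - self.window:
            self.events.popleft()

    def is_trending(self) -> bool:
        return len(self.events) >= self.threshold


# A coordinated initial push from follower hubs can clear the threshold
# before any organic audience arrives.
tracker = VelocityTracker(window_seconds=3600, threshold=1000)
for t in range(1200):  # 1,200 seeded engagements in the first 20 minutes
    tracker.record(float(t))
print(tracker.is_trending())  # True: seeded velocity is indistinguishable from organic interest
```

The heuristic cannot tell a coordinated push from a genuine groundswell, which is exactly the blind spot a pre-existing follower base exploits.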
Once on these platforms, the videos are further amplified by established Russian and Iranian state media apparatuses (WIRED). This creates a feedback loop. A meme generated by a state-backed contractor goes viral, is then reported on by official state media as proof of shifting Western public opinion, and is subsequently amplified again. It is a closed-loop system of manufactured consensus.
The social platforms, already struggling to moderate standard AI-generated slopaganda, are demonstrably ill-equipped to handle state-backed geopolitical messaging disguised as a Lego rap battle (MSN). The policies governing AI content are often written with deepfakes of real humans in mind. A cartoon rendering of a political figure exists in a grey area of parody and fair use, making rapid takedowns legally and procedurally difficult.
Furthermore, the sheer volume of content makes whack-a-mole moderation strategies entirely futile. By the time a platform identifies a specific video as state-backed slopaganda and removes it, the narrative has already been established, and three new variations have been uploaded. The bottleneck is no longer content creation; the bottleneck is the platform's ability to classify and moderate at the speed of synthetic media generation.
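The bottleneck argument is a simple throughput claim: when uploads arrive faster than reviewers can classify them, the backlog grows without bound. A back-of-envelope model makes this concrete; both rates below are hypothetical assumptions:

```python
# Back-of-envelope model of the moderation bottleneck. If synthetic videos
# arrive faster than a review pipeline can classify them, the unreviewed
# backlog grows linearly. Both rates are illustrative assumptions.

def backlog_after(hours: int, arrival_per_hour: float, review_per_hour: float) -> float:
    """Unreviewed items after `hours`, assuming constant rates and an empty start."""
    return max(0.0, (arrival_per_hour - review_per_hour) * hours)


# Assumed rates: variants are cheap to generate, expensive to adjudicate.
arrivals = 50.0  # new video variants uploaded per hour (hypothetical)
reviews = 5.0    # videos a human-in-the-loop pipeline can classify per hour (hypothetical)

for h in (1, 24, 168):
    print(f"after {h:>3} h: backlog = {backlog_after(h, arrivals, reviews):,.0f}")
```

Under these assumptions the backlog passes a thousand unreviewed videos within a day. The only stable fixes are lowering the arrival rate (deterrence) or raising the review rate (automated classification), which is why takedown-by-takedown moderation cannot win.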
5. The Counter-Argument: Activism or State Apparatus?
Defenders of Explosive Media argue that they are a fully independent activist group creating organic content to mock political figures. They suggest the viral nature of the content is simply proof of its comedic merit and resonance with anti-war sentiment, rather than a coordinated state operation. From this perspective, the videos are a form of digital graffiti—a decentralized, grassroots response to American foreign policy that uses modern tools to speak truth to power.
Supporters point out that satire has always been a legitimate form of political dissent. They argue that labeling these videos as "slopaganda" is an attempt by Western media to delegitimize valid criticism simply because it originates from a hostile geopolitical region. If an American teenager using an AI generator to mock the Iranian government is practicing free speech, they ask, why is an Iranian team doing the same to American politicians automatically classified as a state-backed psychological operation?
This claim of independence, however, was fundamentally debunked when the group's representative explicitly admitted on the BBC that the Iranian regime is a "customer" (Yahoo News). The distinction between a grassroots activist and a state contractor is financial, and Explosive Media has acknowledged crossing that line. When the state writes the checks, the output is state media, regardless of the aesthetic wrapper.
Additionally, security experts note that the team maintains the stable, high-bandwidth internet access required for massive AI video generation in a country heavily restricted by state firewalls (Yahoo News). In a heavily censored internet environment, the ability to rapidly download base models, upload high-definition video files, and coordinate across multiple banned Western social platforms is not a privilege afforded to ordinary citizens. It is a capability strictly reserved for those operating with the tacit or explicit approval of the state apparatus.
Therefore, while the content mimics the aesthetics of grassroots activism, the underlying infrastructure and financial backing reveal a sophisticated, state-sponsored operation. The use of parody is not an expression of independent dissent; it is a calculated tactic designed to exploit the very platforms it operates on.
6. Historical Precedent: The Evolving Toolkit of the Information War
While the TACO video feels like a novel hallucination of the 2026 internet, the strategy has documented precedents. Iran has previously utilized Lego-style aesthetics in its war propaganda. Similar videos were logged being shared by the Islamic Revolutionary Guard Corps in 2024, and again by Iranian state media to proclaim victory over Israel during the Twelve-Day War in 2025 (WIRED). This demonstrates that the tactical use of toy imagery is an established playbook.
What we are witnessing is the redefinition of state media. The old model relied on authority; the new model relies on engagement. Historically, state propaganda attempted to project strength, moral clarity, and institutional weight. It wanted you to take it seriously. Modern slopaganda, by contrast, wants you to laugh, share, and forget where it came from. It sacrifices authority for virality.
As Moustafa Ayad, a researcher with the Institute of Strategic Dialogue, explained, audiences are "disengaging from some of the real conflict content and looking for something that can distill what's happening quickly and in a language and tone that they understand and that's what those Lego videos are doing" (WIRED). This fatigue with traditional news formats creates a vacuum that highly engaging, easily digestible synthetic media is perfectly positioned to fill.
The evolution here is not just technological; it is psychological. Previous iterations of digital propaganda, such as the Russian interference campaigns of 2016, relied heavily on textual disinformation and manipulated photographs. Those campaigns sought to inflame existing tensions by pretending to be domestic political actors. The current wave of AI-generated slopaganda skips the pretense entirely. It doesn't need you to believe it is a real American posting the video; it just needs the content to be funny enough to share.
This shift mirrors broader trends in digital consumption. As attention spans shorten and the volume of online information increases, the most effective messages are those stripped of nuance and presented in the most visually arresting formats possible. State actors have realized that a 15-second animated clip is vastly more effective at shifting public sentiment than a lengthy white paper detailing regional grievances.
7. Analytical Verdict: Assessing the Architecture of AI Slopaganda
The evidence logged over the past few months points to a distinct shift in how geopolitical narratives are laundered online. The thesis holds true: the integration of accessible AI generation tools and universally recognized aesthetics like Lego has allowed state-backed actors to bypass traditional media literacy defenses and achieve broader reach than conventional geopolitical propaganda. This is not a theoretical vulnerability; it is an active exploit currently being leveraged against Western information ecosystems.
While the aesthetic is undeniably playful and the underlying AI generation is cheap slopaganda, the resulting campaign is highly sophisticated. By combining the universally understood visual language of toy bricks with targeted, culturally crowdsourced AI generation, state actors have successfully weaponized American internet culture against itself. They have turned the mechanisms of virality into delivery systems for state-sponsored narratives.
The architecture of this specific campaign relies on a trifecta of modern digital vulnerabilities: the democratization of high-quality AI generation, the engagement-at-all-costs structure of social media algorithms, and a public that is increasingly fatigued by traditional news formats. Explosive Media recognized these vulnerabilities and built a content pipeline specifically designed to exploit them. The result is a highly efficient, highly scalable propaganda apparatus.
Modern information warfare no longer requires massive budgets or hidden broadcast towers. It just requires a fundamental understanding of how we consume content online, and a handful of prompt engineers willing to build the narrative brick by brick. The traditional defenses against foreign propaganda—identifying the source, analyzing the bias, and countering the claims with factual reporting—are largely ineffective against a strategy that relies on humor and aesthetic absurdity.
Ultimately, the success of the TACO video proves that the barrier to entry for effective geopolitical messaging has collapsed to nearly zero. As long as social media platforms prioritize engagement over origin, and audiences remain susceptible to visually familiar formats, state-backed slopaganda will remain a potent tool. The problem is not just that the AI is getting better at generating convincing images; the problem is that state actors are getting better at understanding exactly what kind of slop we want to consume.