Google and Apple algorithms recommended AI 'nudify' apps. They called it 'suggested for you'.
Apple and Google’s search algorithms were found actively recommending AI 'nudify' apps, proving that in the app store business model, engagement beats safety.

In the sleek, curated ecosystem of the modern smartphone, the "walled garden" is sold as a fortress of safety. Multi-trillion-dollar corporations promise they vet every line of code to protect users from the digital wild west. Yet, recent investigations have documented a reality that is as clinical as it is grim. The systems designed to help you find a better calculator or a new puzzle game have been serving as high-speed delivery vehicles for tools designed to violate privacy at scale.
The persistent availability and algorithmic promotion of "nudify" apps on Google Play and the Apple App Store is a direct result of discovery systems that prioritize keyword relevance and transaction volume over established safety policies. This creates a structural conflict of interest that makes true moderation a secondary concern to revenue. While these platforms present themselves as passive, neutral hosts when things go wrong, their internal search algorithms function as highly effective marketing agents. This is not a failure in the review process; it is a feature of an economy where discovery is decoupled from compliance.
The TTP Receipts: Buying Harassment in the Fortress
The depth of this systemic failure was first brought to light by the Tech Transparency Project (TTP). Their researchers documented how easily users could stumble into the world of nonconsensual intimate imagery (NCII) using nothing more than standard search terms. TTP’s findings showed that a simple search for "nudify" or "undress" on both major platforms yielded dozens of functional deepfake tools. These weren't obscure utilities buried on page ten; they were top-ranked results optimized for conversion.
Many of these apps were bolstered by official Apple Search Ads and Google Ads. These systems allowed developers to pay for prominence in specific, harmful keyword auctions, effectively letting them bid on the right to facilitate harassment. By accepting payment for these keywords, the platforms were not just hosting content; they were profiting from the intent to create NCII. This monetization of predatory search terms suggests that the safety filters are significantly less rigorous than the billing departments.
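To make the monetization mechanics concrete, the sketch below models a stripped-down keyword auction of the kind the report describes. It is an illustration under stated assumptions, not Apple's or Google's actual ad stack: the denylist, bid values, and app names are hypothetical. The structural point is that paid placement is keyed to the search term, so any term not explicitly blocked is sellable.

```python
# Illustrative sketch of a keyword ad auction (not Apple's or Google's actual system).
# The point: paid placement is keyed to the search term, so any term absent from
# the denylist is sellable, regardless of the intent it signals.

BLOCKED_KEYWORDS = {"csam"}  # hypothetical, deliberately narrow denylist


def run_keyword_auction(keyword: str, bids: dict[str, float]) -> str | None:
    """Return the app awarded the sponsored slot for a search term."""
    if keyword.lower() in BLOCKED_KEYWORDS:
        return None  # term is unsellable
    # Highest bid wins the slot; nothing here inspects what the keyword implies.
    return max(bids, key=bids.get) if bids else None


# "nudify" is not on the denylist, so the slot is sold like any other term.
winner = run_keyword_auction("nudify", {"Body Editor Pro": 2.40, "AI Cloth Remover": 1.95})
print(winner)  # -> "Body Editor Pro"
```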

The investigation identified over 100 individual apps across both platforms. The mechanics of the failure are almost entirely automated: when a user downloaded one deepfake tool, the "Discovery Algorithm" would immediately populate the "You Might Also Like" section with similar software. This created a self-reinforcing feedback loop in which the platform's own recommendation intelligence was used to democratize privacy violations. According to 404 Media, many of these apps explicitly marketed their ability to "see through clothes" in their metadata, a signal the automated review bots ignored in favor of potential transaction volume.
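That feedback loop can be illustrated with a minimal co-install recommender, a common technique behind "You Might Also Like" features. This is a sketch under assumptions, not the stores' actual systems, and the app names are invented for illustration.

```python
# Minimal sketch of a co-download recommender of the kind described above
# (an assumed technique, not the stores' real recommendation systems).
from collections import Counter, defaultdict

co_downloads: dict[str, Counter] = defaultdict(Counter)


def record_install_session(installed_apps: list[str]) -> None:
    """Count how often apps are installed together in the same session."""
    for app in installed_apps:
        for other in installed_apps:
            if other != app:
                co_downloads[app][other] += 1


def you_might_also_like(app: str, k: int = 3) -> list[str]:
    """Recommend the apps most often co-installed with `app`."""
    return [name for name, _ in co_downloads[app].most_common(k)]


# Once one deepfake tool is installed alongside others, the loop reinforces itself.
record_install_session(["NudifyCam", "Body Editor Pro", "AI Cloth Remover"])
record_install_session(["NudifyCam", "Body Editor Pro"])
print(you_might_also_like("NudifyCam"))  # -> ['Body Editor Pro', 'AI Cloth Remover']
```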
The TTP report noted that Apple and Google were not merely "hosting" these files as a static server would. They were actively categorizing, ranking, and promoting them based on user engagement metrics. This proactive role in the distribution chain undermines the platforms' claims of being mere intermediaries. When an algorithm "suggests" a deepfake tool, it is providing an implicit endorsement through the user interface of a "trusted" marketplace.
A Forensic Timeline of Looking the Other Way
To understand how we arrived at a point where the world's most valuable companies are allegedly promoting deepfake software, one must look at the timeline. This is not a new problem that caught Big Tech by surprise. It is a category that has been allowed to scale alongside the generative AI boom, often with the silent consent of the platforms' revenue engines.
- June 2019: The DeepNude Precursor. The "nudify" category entered the public consciousness with the launch of DeepNude. While its creators took it down within 24 hours due to the high probability of misuse, the code leaked and became the blueprint for the mobile app wave.
- Late 2022 - Early 2023: The Diffusion Boom. As Stable Diffusion and other open-source models became lightweight, both app stores saw a massive influx of "AI Photo Editors." Many of these used "stealth" functionality, presenting as standard filter apps during review and enabling deepfake features via server-side updates.
- November 2023: The First TTP Flag. TTP released its initial findings, showing widespread promotion of these tools. Apple and Google responded by removing specific apps named in the report, but TechCrunch noted that they failed to implement a categorical ban on the underlying keywords in their search engines.
- January 2024: The Taylor Swift Incident. A surge of NCII involving Taylor Swift went viral, highlighting that the tools used were often sourced from official stores. This led to renewed scrutiny from The Verge and Bloomberg regarding platform accountability.
- April 2026: The Second TTP Report. Documentation showed that despite multiple "crackdowns," the Discovery Algorithms were still suggesting deepfake tools to users. This proved that the platforms' "Whack-a-Mole" strategy is structurally inadequate for a problem driven by automated recommendations.
The persistence of these apps suggests that the platforms view them as a volume problem rather than a policy problem. By treating each app as an isolated violation, they avoid addressing the systemic reality that their search engines are optimized for this content. The timeline indicates that while the public-facing response is reactive, the internal economic incentive remains proactive.
The 30 Percent Blind Spot
The technical failure of Apple and Google to keep these apps off their shelves is often attributed to developer obfuscation. However, a more clinical analysis suggests the Profit-Discovery Paradox is the primary driver. Both platforms take a 15-30% cut of every subscription sold through their billing systems. Many nudify apps operate on high-cost subscription models, charging $20–$50 a month for "credits."
Developers use linguistic tricks to bypass the gatekeepers. An app might be named "AI Cloth Remover" or "Body Editor Pro" in the App Store Connect dashboard to avoid "blacklisted" terms. Yet, the Discovery Algorithm is trained on what users actually do. If users search for "nudify" and then consistently click on "Body Editor Pro," the algorithm learns that this app satisfies that intent. It does not care about the policy; it only cares about the successful transaction.
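That behavioral learning can be reduced to a few lines. The sketch below ranks apps by click-through rate on a given query, which is one simple way a discovery system can "learn" that a policy-evading name satisfies a banned intent. It is an assumed technique, not the stores' actual ranker, and the query logs and app names are invented.

```python
# Minimal sketch of query-to-app relevance learned from click logs, as described
# above (an assumed technique, not the stores' actual ranking pipeline).
from collections import defaultdict

impressions: dict[tuple[str, str], int] = defaultdict(int)
clicks: dict[tuple[str, str], int] = defaultdict(int)


def log_search_result(query: str, app: str, clicked: bool) -> None:
    impressions[(query, app)] += 1
    if clicked:
        clicks[(query, app)] += 1


def rank_for_query(query: str, candidates: list[str]) -> list[str]:
    """Rank candidates by click-through rate for this query.

    Nothing here consults policy: a "clean" app name with a high CTR on a
    banned-intent query still ranks first.
    """
    def ctr(app: str) -> float:
        shown = impressions[(query, app)]
        return clicks[(query, app)] / shown if shown else 0.0
    return sorted(candidates, key=ctr, reverse=True)


for _ in range(50):
    log_search_result("nudify", "Body Editor Pro", clicked=True)
    log_search_result("nudify", "Calculator Plus", clicked=False)
print(rank_for_query("nudify", ["Calculator Plus", "Body Editor Pro"]))
# -> ['Body Editor Pro', 'Calculator Plus']
```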
The 30% commission creates a structural bias in platform enforcement. High-revenue apps represent valuable partners in a market that has become increasingly saturated. The automated moderation bots are programmed for efficiency, which in practice means permissiveness until a manual report is filed. When an app is generating six figures in monthly revenue, the threshold for manual intervention appears to be higher than for a low-performing utility.
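To be explicit about the alleged incentive, the following is a deliberate caricature: if the threshold for escalating an app to manual review scales with its revenue in any way, high-grossing violators survive longer by construction. The numbers and the function itself are hypothetical; nothing here describes either company's real triage code.

```python
# A deliberately simplified caricature of the enforcement incentive described above.
# Illustration of the alleged bias only, not a claim about real moderation tooling.

def needs_manual_review(reports: int, monthly_revenue_usd: float) -> bool:
    base_threshold = 5                               # hypothetical report count for escalation
    revenue_weight = monthly_revenue_usd / 100_000   # hypothetical revenue scaling
    return reports >= base_threshold * (1 + revenue_weight)


print(needs_manual_review(reports=8, monthly_revenue_usd=1_000))    # True: small app escalates
print(needs_manual_review(reports=8, monthly_revenue_usd=250_000))  # False: six-figure app does not
```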
This creates a scenario where the safety filters are looking at the dictionary definition of the app, while the discovery engine is looking at its behavioral truth. The safety team is playing a game of Scrabble, while the revenue team is playing a game of Big Data. As long as these two systems are decoupled, the most effective deepfake tools will continue to be the most visible ones.
The "Developers Are Too Smart For Us" Defense
It is only fair to represent the platforms' position. In various statements, both Apple and Google have argued that the sheer volume of submissions makes it technically impossible to catch every bad actor. They contend that developers use dynamic code-loading techniques that hide "nudify" logic until the app is already on a user's device.
"We don’t allow apps on Google Play that facilitate the creation or distribution of nonconsensual sexual content," a Google Spokesperson told NBC News. The defense is that they are being lied to by developers and are doing their best to clean up after the fact. They argue that a categorical ban on keywords would inadvertently sweep up legitimate photo editing tools, harming the broader developer ecosystem.
However, this defense is logically inconsistent. If the tools are so well-hidden that a human reviewer cannot find them, how does the Discovery Algorithm know exactly which apps to recommend to a user searching for "deepnude"? If the platform is blind to the app's function during review, it should be equally blind during search. The fact that search results are highly accurate while safety reviews are blind suggests the platforms possess the data necessary to identify these apps but choose to use it for engagement.
Furthermore, the "Suggested for You" feature is a choice, not a technical necessity. Platforms could easily disable recommendations for categories that are frequently flagged for policy violations. By choosing to keep the recommendation engine running on "grey" keywords, the platforms are prioritizing the efficiency of their marketplace over the safety of the public.
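The counter-argument is equally easy to sketch. The same click signals that power the ranker in the earlier example could be repurposed as a safety input, and recommendations could be suppressed for banned-intent queries outright. The data structures and thresholds below are assumptions carried over from the previous sketches, not existing tooling; the point is only that the kill-switch is a product decision, not a technical impossibility.

```python
# Sketch of the counter-argument: reuse the search-relevance signal as a safety
# signal and disable recommendations for banned-intent queries. Data structures
# and thresholds are assumptions; this is not a description of existing tooling.

BANNED_INTENT_QUERIES = {"nudify", "undress", "deepnude"}


def flag_apps_from_search_signals(clicks: dict[tuple[str, str], int],
                                  min_clicks: int = 25) -> set[str]:
    """Queue for human review any app that repeatedly satisfies banned-intent searches."""
    return {app for (query, app), n in clicks.items()
            if query in BANNED_INTENT_QUERIES and n >= min_clicks}


def recommendations_enabled(query: str) -> bool:
    """The 'Suggested for You' kill-switch: a product choice, not a technical constraint."""
    return query not in BANNED_INTENT_QUERIES


clicks = {("nudify", "Body Editor Pro"): 50, ("photo filter", "Calculator Plus"): 12}
print(flag_apps_from_search_signals(clicks))   # -> {'Body Editor Pro'}
print(recommendations_enabled("nudify"))       # -> False
```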
Legitimizing the Deepfake Economy
The accessibility of these tools has transformed NCII from a specialized harassment technique into a democratized crisis. When a deepfake tool is available in the same marketplace as a banking app, it gains a "halo effect" of legitimacy. Users who might have been hesitant to download shady software from a dark-web forum feel empowered to use tools vetted by Apple’s curation process.
The impact is documented in the rising volume of NCII cases targeting students and public figures. Because the barrier to entry has been lowered to a $9.99 in-app purchase, the volume of content is unmanageable for victims. Wired has documented how this economy relies on the infrastructure of major platforms for processing and distribution. The "walled garden" has become the primary infrastructure for the very harms it claims to prevent.
| Platform Metric | Google Play | Apple App Store |
|---|---|---|
| Identified Nudify Apps (TTP 2026) | 60+ | 45+ |
| Response Type | Reactive (Post-Report) | Reactive (Post-Report) |
| Search Ad Monetization | Confirmed | Confirmed |
| Commission Collected | 15-30% | 15-30% |
The failure to decouple discovery from safety has real-world consequences. By allowing the Discovery Algorithm to point users toward these tools, Apple and Google are effectively acting as the top-of-funnel for the deepfake industry. This is not just a policy oversight; it is a contribution to the scale of the NCII crisis.
The Curation Myth Hits the Algorithmic Floor
This incident serves as a post-mortem for the "Curation Myth"—the idea that a centralized app store is inherently safer than an open web. It reveals that as long as discovery systems are optimized for conversion, "walled gardens" will continue to harvest profits from the harms they prohibit. The curation is a marketing term; the transaction volume is the technical reality.
The "Whack-a-Mole" precedent is no longer a valid excuse for multi-trillion-dollar entities. If a platform can use AI to identify a copyright-infringing song in seconds, it can certainly identify a category of apps that use the same keywords to sell privacy violations. The failure is not a technical limitation; it is an architectural choice. The platforms have built systems that are too efficient at selling and too slow at protecting.
Legislative pressure is now mounting to address this gap. Lawmakers are looking at whether Section 230 protections should apply when a platform’s own recommendation engine is the primary driver of the harm. In Europe, the Digital Services Act (DSA) already places stricter obligations on Very Large Online Platforms (VLOPs) to mitigate systemic risks, including the distribution of illegal content.
The Verdict: Feature, Not Bug
The evidence from the TTP and 404 Media investigations supports the claim that this is a structural failure. The Discovery Algorithms on both iOS and Android were not malfunctioning; they were doing exactly what they were programmed to do. They found the most relevant product for a user's intent and maximized the probability of a transaction.
The "nudify" scandal is a predictable outcome of an ecosystem where safety is a manual process while discovery is an automated one. Apple and Google's walled gardens are optimized marketplaces where the gatekeepers take a cut of the contraband. Until the systems that suggest apps are held to the same compliance standards as the systems that approve them, the "suggested for you" carousel will remain a lane for facilitated abuse.
The analytical verdict is clear: the platforms' current moderation model is incompatible with their discovery model. You cannot have a "curated" store that simultaneously uses uncurated algorithms to drive sales. As long as engagement remains the primary metric for the search engine, the walls of the garden will remain transparent to anyone looking for a way to do harm.