How Advertisers Can Adapt to the Age of AI Slop - Basis Technologies
Aug 11 2025
Megan Reschke

Over the last few years, the internet has been flooded with AI-generated content: from bizarre viral imagery like shrimp Jesus, to lifelike dog videos that blur the line between real and synthetic, to fabricated news headlines and disturbingly realistic depictions of violence and tragedy.

Where such content may once have felt fringe or experimental, it has now quickly become mainstream. And while its quality and intent vary widely, the collective impact is clear: It is reshaping what people see, share, and trust online. AI slop has become a defining feature of the digital media landscape, fueled by algorithms that prioritize engagement over authenticity and by bad actors seeking to monetize low-quality content at scale.

As AI slop floods digital spaces, it’s altering audience behavior, skewing performance signals, and complicating efforts around brand safety and media quality. For marketing leaders, navigating this new reality requires more than reactive efforts. It demands a proactive strategy that can adapt to evolving content ecosystems, prioritize brand safety and suitability, and ensure that campaigns continue to deliver meaningful business outcomes.

How AI Slop Is Impacting Advertisers

While AI has introduced powerful efficiencies for advertisers, the growing presence of AI slop has brought new challenges. And recent data confirms that marketing teams are taking notice: 57% of advertisers now view AI-generated content as a key challenge for the digital advertising ecosystem, and 54% believe it has led to a decline in overall media quality.

“AI slop is diminishing the quality and trustworthiness of digital advertising as a whole,” says Mallory Chaney, VP of Integrated Client Solutions at Basis. “However, it also opens up opportunities for brands to cut through the noise by investing in authentic, differentiated messaging.”

One of the most significant concerns is the impact of AI-generated content on made-for-advertising (MFA) sites. With the widespread availability of gen AI tools, bad actors are increasingly using them to mass-produce low-value content with little to no editorial oversight. In one recent example, more than 200 AI-driven websites impersonating sports news outlets were found blending synthetic content with stolen reporting, creating an illusion of legitimacy while undermining the integrity of the media environment.

And, as evidenced by phenomena like shrimp Jesus, AI slop isn’t limited to MFA sites. It is showing up in nearly every corner of the internet. On Quora and Medium, for example, AI-generated material jumped from 1.77% and 2.06% in 2022 to 37.03% and 38.95% in 2024, respectively. Social media platforms have seen similar patterns, with AI-generated content increasingly filling feeds, amplified by recommendation algorithms. The rapid pace of this growth signals that AI-generated content is steadily becoming a widespread and dominant force across the online ecosystem.

Compounding these challenges is the way algorithmic systems reward this content. Social platforms, in particular, often promote low-quality AI slop even if users don’t follow the pages that share it, creating feedback loops that reinforce and scale artificial content. This algorithmic promotion can inflate impression volumes far beyond what that content would naturally earn based on user preference, making it difficult for advertisers to gauge authentic audience interest.

The implications for advertisers are serious. Low-quality AI slop increases brand safety concerns by putting campaigns at risk of running adjacent to misleading, plagiarized, or even harmful content that may not align with a brand’s values. It distorts performance metrics by inflating impressions and engagement on content that may not reflect authentic user demand. It can also contaminate optimization models—if campaigns consistently run against algorithmically amplified but low-quality content, the resulting engagement signals may not accurately represent true audience behavior, leading to flawed optimization decisions. And it wastes media dollars by funneling ad spend into inventory that delivers minimal value—and potentially even harm. These challenges are amplified in today’s climate of economic turbulence, where marketers are under increased pressure to prove the ROI of every dollar spent.

Navigating Brand Safety and AI Slop

Though the proliferation of AI slop poses many challenges for advertisers, its ability to undermine brand safety at scale presents a growing risk. This is, in large part, because such content is often designed to pass as human-made. As such, detecting AI-generated content requires looking beyond traditional content signals. According to Chaney, performance monitoring can reveal telltale signs of low-quality AI content: unusual URLs, high impression delivery to sites outside campaign parameters, and metrics patterns like high impression volume paired with low click-through rates or elevated bounce rates.

This detection challenge is compounded by the fact that many platforms don’t yet offer advertisers the option to avoid AI-generated content altogether. On YouTube, for example, advertisers can specify the types of video content they want their ads to run alongside—such as avoiding sensitive or graphic categories—but they cannot filter based on how that content was created. As a result, campaigns may run adjacent to synthetic videos that lack editorial oversight, factual grounding, or alignment with brand standards.

To adapt, leading advertisers are adopting more intelligent, layered approaches that combine pre-bid verification tools, contextual intelligence tools, and curated inventory strategies to both detect and avoid low-quality AI content. “Contextual intelligence engines, in particular, are resilient against AI-generated content because they can understand semantic context rather than just keywords and terms,” says Chaney.

At the same time, many teams are working with partners that monitor supply paths dynamically and identify domains known to misrepresent quality or exploit optimization systems through mass-produced, MFA-style content. “Advertisers are also turning to more premium inventory like private marketplaces and programmatic guaranteed deals, which are generally less susceptible to fraud,” says Chaney. Custom blocklists and curated PMPs can add another layer of protection against low-quality content.

As content creation and monetization methods continue to evolve, teams that embrace this kind of approach will be better equipped to maintain brand integrity without sacrificing scale or efficiency.

How Marketing Leaders Can Prepare Their Teams

As AI slop continues to reshape the media landscape, marketing leaders are under pressure to ensure their teams are prepared—not just to react, but to plan intentionally. Leaders who embrace a cross-functional strategy grounded in adaptability will be well-positioned to both deal with the current challenges and to adjust as the digital media landscape continues to shift.

To ensure their ads aren’t running alongside AI slop, marketing leaders should first assess exposure. Teams should audit recent campaigns for unusual traffic patterns, off-target site delivery, or inflated impression counts tied to weak engagement—common signals of low-quality AI content, according to Chaney.
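The audit signals described above can be turned into a simple screening heuristic. The sketch below is illustrative only: the field names, thresholds, and placement structure are assumptions, not any real ad platform's API, and real audits would draw on verified reporting data and partner tools.

```python
# Hypothetical sketch: flag campaign placements whose delivery patterns match
# common signals of low-quality AI content -- off-target site delivery, or
# high impression volume paired with weak engagement. All thresholds and
# field names are illustrative assumptions.

def flag_suspect_placements(placements, allowed_domains,
                            min_impressions=10_000,
                            max_ctr=0.001,
                            min_bounce_rate=0.85):
    """Return placements worth manual review, with the signals they tripped."""
    flagged = []
    for p in placements:
        reasons = []
        # Delivery to sites outside campaign parameters
        if p["domain"] not in allowed_domains:
            reasons.append("off-target domain")
        # High impression volume with low click-through rate
        ctr = p["clicks"] / p["impressions"] if p["impressions"] else 0.0
        if p["impressions"] >= min_impressions and ctr <= max_ctr:
            reasons.append("high impressions, low CTR")
        # Elevated bounce rate on landing traffic
        if p.get("bounce_rate", 0.0) >= min_bounce_rate:
            reasons.append("elevated bounce rate")
        if reasons:
            flagged.append({"domain": p["domain"], "reasons": reasons})
    return flagged


placements = [
    {"domain": "trusted-news.example", "impressions": 50_000,
     "clicks": 400, "bounce_rate": 0.40},
    {"domain": "sportz-newz-ai.example", "impressions": 120_000,
     "clicks": 60, "bounce_rate": 0.92},
]
suspects = flag_suspect_placements(placements, {"trusted-news.example"})
```

Flagged placements are candidates for human review, not automatic exclusion; a legitimate site can occasionally trip one of these thresholds, which is why the output carries the reasons rather than a verdict.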

From there, partner evaluation criteria should evolve. Questions that once focused primarily on brand suitability and safety should now include whether partners can identify AI-generated content, how frequently their detection logic is updated, and how easily their tools integrate with existing tech stacks.

Internally, leaders should align media buyers, brand safety stakeholders, and analytics partners around a shared response plan. This could include establishing clear protocols for identifying AI content and flagging questionable placements. Education also plays a critical role, as teams trained to spot the signs of AI slop are better equipped to escalate concerns. “Continuing to recognize AI-generated content as well as understand both the strengths and limitations of tech will help advertisers to balance available tools with human oversight,” says Chaney. With early investments in processes and training, marketing leaders will be far better prepared as AI-generated content continues to spread—and potentially becomes more difficult to detect.

Looking Ahead: How Advertisers Can Strategize Around AI Slop

In a digital media environment shaped increasingly by low-quality AI content, brand safety, performance accuracy, and trust are all on the line. Marketing leaders who adapt now won’t just protect their campaigns—they’ll build trust with audiences and maintain competitiveness in a rapidly transforming digital media landscape.


Looking for more insights on how AI is transforming digital advertising? We surveyed marketing and advertising professionals from top agencies, brands, and publishers to get a pulse on how teams are using the technology today, how they feel about it, and how it could change things going forward.

Get the Report