Digital advertising has long promised relevance through data. As the industry has grown more sophisticated, the wealth of available data has allowed teams to craft increasingly personalized messages, pinpoint the ideal individuals to deliver those messages to, and measure the results of their efforts.
But as privacy regulations tighten, signal loss accelerates, and consumers continue to push back on tactics perceived as invasive, the industry is evolving to prioritize privacy. At the same time, with access to some kinds of data growing more restricted, advertisers are rethinking how they deliver relevance at scale. In this context, community-based marketing approaches are gaining momentum, offering ways to engage meaningfully with consumers while staying ahead of the shifting data landscape.
Taking a community-centric approach to marketing can include identifying and targeting a specific group based on a shared identity (e.g., sports fans, pet lovers, or gardeners)—rather than hyper-personalizing ads based on more sensitive personal information. It can also mean building a sense of community among groups of consumers, whether in person or online. Through both approaches, marketing teams can forge meaningful connections with target audiences and deliver personalized messages in ways that respect individual privacy and meet evolving data privacy standards.
In many ways, community-based marketing represents a return to advertising’s roots, where community played a central role in shaping brand perception and consumer behavior. From iconic campaigns tied to sports fandoms, regional culinary pride, or social movements, brands have long sought relevance through shared identity. Today, advertisers can layer modern data signals into these approaches, enhancing the precision of such strategies while respecting consumer privacy.
Personalization in digital advertising is no longer as simple as it used to be, particularly as the industry contends with the growing challenges of signal loss and consumer privacy demands. A staggering 95% of data and advertising leaders across brands, agencies, and publishers predict continued signal loss and privacy-focused legislation in the coming years, and almost 90% of ad buyers say they are reorganizing their personalization tactics, ad spend, and data mix to adapt to increased regulation and signal loss.
At the same time, consumers are more resistant to hyper-targeted ads than advertisers may expect. Over half have occasionally thought, “Who approved this?” when coming across targeted ads, and many say they are uncomfortable with how much personal data companies have and feel that companies overdo prepurchase personalization. What’s more, 62% of consumers say they object to ads based on sensitive personal data, and nearly half feel they’ve been targeted by an ad that offensively stereotypes them. These sentiments are forcing marketers to reconsider their approaches and rethink how they engage audiences without crossing privacy boundaries.
In today’s privacy-centric digital landscape, brands and advertisers are discovering (or, more aptly, rediscovering) that engaging audiences through shared values, experiences, and interests is an effective way to build connections—without compromising privacy.
“We’re seeing brands embrace this idea of connecting with communities and also fostering community with their own consumer base,” says Susan Mandell, VP of Brand Development at Basis. “When brands align with communities, consumers don’t just buy into the product—they also buy into a shared identity and sense of belonging.” While this idea isn’t new, the vast amount of data available to marketing teams today allows these community-driven strategies to be more precise and effective.
A financial services company, for example, could use first-party data to segment customers with a shared interest, such as retirement planning. Beyond delivering personalized tools and messaging based on that shared financial goal, the brand could foster deeper engagement by offering spaces for these individuals to connect, ask questions, and share tips, such as through an online forum or members-only webinars. By taking such an approach, the brand isn’t just targeting individuals based on a data point, but rather cultivating a sense of belonging around a shared need.
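To make that segmentation step concrete, here is a minimal sketch of how a team might build an interest-based segment from first-party records. The customer fields and interest labels are hypothetical, and a real implementation would run inside a consent-aware CRM or customer data platform rather than on in-memory lists:

```python
# Minimal sketch: building an interest-based segment from first-party
# records. All field names and interest labels are hypothetical.
from dataclasses import dataclass

@dataclass
class Customer:
    customer_id: str
    declared_interests: set[str]  # interests the customer opted to share

def build_interest_segment(customers: list[Customer], interest: str) -> list[str]:
    """Return the IDs of customers who self-reported the given interest."""
    return [c.customer_id for c in customers if interest in c.declared_interests]

customers = [
    Customer("c1", {"retirement planning", "travel"}),
    Customer("c2", {"home buying"}),
    Customer("c3", {"retirement planning"}),
]

# Everyone in this segment shares the same declared goal, so they can be
# invited to the forum, webinars, and tailored planning tools.
segment = build_interest_segment(customers, "retirement planning")
print(segment)  # ['c1', 'c3']
```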
Alternatively, a local retail brand could use customer data or media placements like digital out-of-home (DOOH) to connect with a geographically defined community: people who live nearby, work in the area, or regularly visit the store. By offering this local community special benefits and experiences—such as a local loyalty program or neighborhood events—the brand can help foster a sense of community, build brand loyalty, and ensure their messaging resonates with local audiences.
For community-based media strategies to work, brands must start by thinking about who they are, what they stand for, and which communities it makes the most sense to show up in and connect with.
“We’re in a moment where smart brands are really thinking about their personality, persona, and core values, and then translating that to the communities they want to connect with, either tangibly or in an aspirational way,” says Mandell.
From there, marketing teams can layer on the right tools, tactics, and partnerships to bring those strategies to life:
Research where communities are already spending time and find opportunities to reach them in key moments of impact across multiple channels. For instance, advertisers might use CTV to connect with avid sports fans cheering on their home team, then reinforce that message through social content tied to influencer-led fan groups or DOOH placements around the stadium. Podcasts can deepen connections with listeners who share a specific passion, while social media can extend that engagement with interactive content or community conversations. And location-based media—like DOOH highlighting iconic regional drinks or local college loyalty—can work alongside geofenced digital campaigns to reflect the identity and culture of the community being reached.
Move away from third-party data dependency and toward first-party data unification and enrichment. Seek out partners that make it easy to collect, store, and integrate disparate data sources—such as website interactions, loyalty programs, and commerce platforms—into a single, privacy-safe environment (see the unification sketch after this list). A unified view helps brands and marketers better understand the communities they connect with, allowing them to activate more relevant, privacy-friendly media strategies.
Focus on values-driven storytelling and community moments over hyper-targeted, one-to-one messaging. When creative speaks to a shared identity—whether it’s a commitment to sustainability, pride in a hometown, or a shared love of gaming—it builds emotional connection and trust. Meaningful messaging should reflect the real-life aspirations, interests, and values of a community, not just assumptions based on demographics.
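To illustrate the unification step recommended above, here is a minimal sketch, with hypothetical source and field names, that merges website, loyalty, and commerce records under a salted hash of the email address so raw identifiers never persist in the unified view. (Hashing pseudonymizes rather than fully anonymizes, so the unified store still needs to be governed as personal data.)

```python
# Minimal sketch of first-party data unification: each source is keyed on
# a salted hash of the email address so raw PII never leaves ingestion.
# Source and field names are illustrative, not a real vendor API.
import hashlib
from collections import defaultdict

SALT = b"rotate-me-periodically"  # hypothetical per-environment secret

def pseudonymize(email: str) -> str:
    return hashlib.sha256(SALT + email.strip().lower().encode()).hexdigest()

def unify(web_events, loyalty_rows, commerce_rows):
    """Merge per-source records into one profile per pseudonymous ID."""
    profiles = defaultdict(lambda: {"web": [], "loyalty": [], "commerce": []})
    for email, page in web_events:
        profiles[pseudonymize(email)]["web"].append(page)
    for email, tier in loyalty_rows:
        profiles[pseudonymize(email)]["loyalty"].append(tier)
    for email, sku in commerce_rows:
        profiles[pseudonymize(email)]["commerce"].append(sku)
    return profiles

profiles = unify(
    web_events=[("ana@example.com", "/retirement-guide")],
    loyalty_rows=[("ana@example.com", "gold")],
    commerce_rows=[("ana@example.com", "IRA-101")],
)
# One unified, pseudonymous profile now spans all three sources.
```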
As personalization evolves amidst increasing signal loss and heightened consumer demand for data privacy, community is emerging as a powerful lever for building long-term brand connection—particularly because marketing teams can leverage data in privacy-friendly ways to understand where and how people come together around shared passions, and then reach them, as communities, more effectively.
By leaning into relevance, resonance, and real connections through community, marketers can elevate personalization in a way that feels more human and more durable in a privacy-first digital environment. This approach helps build trust, cultivate long-term loyalty, and create deeper emotional connections that ultimately lead to stronger brand performance in a rapidly evolving, increasingly competitive market.
In 2025, consumer trust is anything but a given. In fact, for many audience segments, distrust is the norm.
The 2025 Edelman Trust Barometer characterizes today’s climate as a global “Crisis of Grievance,” the result of events over the past 25 years—from the Iraq War to the 2008 financial crisis to the COVID-19 pandemic—that have chipped away at trust in leaders and institutions. Evidence of this crisis includes an unprecedented decline in employees’ trust in their employers to do what’s right, record-high levels of concern that leaders lie to the public, and four in 10 global respondents reporting that they view hostile activism (including threats or even violence) as a viable means for driving change. That ratio rises to one in two among people between the ages of 18 and 34.
Among major institutions, brands still hold a relatively advantageous position—ranking as more trusted than government, NGO, and media entities—but this climate of growing distrust is impacting them as well. In 2024, 71% of global consumers agreed with the statement, “I trust companies less than I did a year ago.” Brand trust from Gen Z, a demographic that makes up about 20% of the US population and is projected to account for $12 trillion in spending power by 2030, is particularly difficult to come by.
Marketing and advertising professionals also face consumer skepticism around an assortment of industry practices such as data privacy (64% of global consumers think companies are reckless with customer data) and the use of artificial intelligence (one study found that mentioning AI’s use in products often reduces emotional trust and lowers purchase intentions).
What’s more, consumers have grown increasingly willing to voice their displeasure by “voting with their wallets”: 2025 has seen a wave of viral consumer boycotts, from “a total economic blackout” on February 28, to a weeklong Amazon boycott in March, to planned boycotts of General Mills, Nestle, Target, and Walmart. In just the first few months of 2025, 43% of American consumers have shifted their spending to align with their morals, and 36% report “opting out” of various aspects of the economy. Perhaps more alarmingly, consumers are pulling back on spending as a result of uncertainty around the Trump Administration’s shifting plans for tariffs and trade policies, which means brands will have to work even harder to earn their dollars.
Brands cannot take consumer trust for granted amidst today’s crisis of grievance. As leaders strategize around how to earn and maintain that trust, authenticity, consistency, data privacy, and brand safety must remain top of mind.
Over the past decade or so, the rise of conscious consumerism has led to brands increasingly taking social and environmental stands. But consumers and stakeholders demand authenticity and expect a coherent alignment between brands’ words and their actions: Those who try to talk the talk without walking the walk will quickly garner backlash and lose consumer trust—and dollars—as a result (see: “greenwashing,” “rainbow washing,” “woke-washing,” etc.).
But a key aspect of effective authenticity that’s less discussed (though equally important) is consistency. “Brands need to be loud and proud about what they stand for,” says Molly Marshall, Client Strategy and Insights Partner at Basis. “But they also need to do that consistently. When brands try to please everyone or shift their values according to the cultural or political climate, that’s when they receive backlash.”
A cautionary tale around consistency (or lack thereof) has been playing out in the industry in recent years in relation to Target. In 2023, the brand received blowback for the Pride month-themed merchandise it featured, with conservative consumers and social media creators encouraging a boycott. Target, which at that point had celebrated Pride month with Pride-themed merchandise for over a decade, then released a statement that it would remove some of the items from that year’s Pride collection in response to the backlash. This, in turn, garnered even more negative reactions: The company received bomb threats accusing it of betraying LGBTQIA+ people, and a coalition of 15 state attorneys general came together to encourage the brand to stand by the LGBTQIA+ community. Panelists discussing LGBTQ brand advocacy at SXSW the following year agreed that Target’s decision to walk back its stance in the face of backlash ultimately made things worse for the brand. Still, the controversy continues in 2025, with the State Board of Administration of Florida recently filing a class-action lawsuit against Target alleging that the brand misled shareholders about the risks associated with its 2023 Pride Month campaign, resulting in billions of dollars in investor losses.
A similar controversy began in January, when Target—known for its strong support of diversity, equity, and inclusion (DEI) initiatives and Black-owned businesses after the 2020 George Floyd protests—announced plans to scale back several DEI programs. This move aligns with a broader trend among major brands and comes as the Trump Administration takes steps to end government DEI programs. The retail giant received swift blowback from consumers, and a 40-day boycott organized by Reverend Jamal Bryant began on March 5. “When consumers see a brand like Target—which had previously committed to DEI—pull those commitments back, they’re going to wonder if they can trust them to authentically act out their brand values or if they’re just going to react based on what’s happening politically,” says Kate Diehl, Group VP of Integrated Client Solutions at Basis.
Target’s year-over-year foot traffic has also fallen for five consecutive weeks, though it’s unclear whether this is a direct result of the boycott. Continued consumer pullbacks could hit Target especially hard, as the ongoing threat of tariffs could force steep price increases and dim the company’s economic outlook.
The takeaway for brands? The combination of today’s crisis of grievance with the rise of conscious consumerism and a polarized political climate means that taking a stand on social and political issues comes with real risk. Brands should only take a stand when they can back it up with authentic action and are prepared to weather criticism. And, when pushback comes, the best response is often to stay the course and maintain their initial stance to demonstrate consistency.
That said, for many (if not most) brands, the best move may be to simply not take a position on such polarizing issues. Just 38% of adults in the US feel that businesses should take a public stance on current events, and many brands whose products or services aren’t related to social or political issues or politicized communities may reasonably choose not to engage in those issues.
Ensuring the ethical use of consumer data is another key strategy for brands looking to build trust with consumers in 2025, with a recent PwC report finding that 88% of consumers say protecting customer data is one of the most important factors in brands’ ability to earn their trust. “Data privacy is considered table stakes by consumers at this point,” says Marshall. “Still, brands and advertisers are struggling to implement it consistently.”
Going into 2024, nearly half of US marketers felt their organizations were unprepared to succeed in a cookieless world. And even though Google no longer plans to fully deprecate cookies in its Chrome browser, its plan to give users a choice over how they’re tracked with cookies is expected to have essentially the same impact. It’s estimated that nearly 90% of US browsers could be cookieless once user choice comes to Chrome, with additional factors such as Apple’s App Tracking Transparency, Safari and Firefox’s default blocking of third-party cookies, and privacy-minded digital advertising regulations all contributing to widespread signal loss.
As a result, first-party data has emerged as a top privacy-friendly identity solution among advertisers. “Even beyond building trust with consumers by respecting their data privacy, advertisers need to be able to rely on privacy-friendly solutions like first-party data to successfully target and measure their campaigns as signals drop off,” says Diehl. At the same time, first-party data comes with an ethical responsibility for advertisers—to gather, organize, store, and leverage that data in ways that preserve consumers’ privacy.
With over 90% of digital advertisers reporting that they use generative AI in their work at least once a month, marketing teams must take even more care to protect customer data. Some AI-driven tools, particularly those used in collecting and analyzing data, present serious data privacy risks. And with 57% of consumers believing that AI poses a significant threat to their privacy, this is an area where brands stand to garner significant distrust if they don’t take the proper precautions.
Finally, advertisers looking to build more trust with consumers should work to prioritize brand safety across their campaigns, with recent industry developments making this focus all the more critical.
Advertisers have felt increasing concern around brand safety for years now—indeed, it’s programmatic advertisers’ top concern by a significant margin. The rise of generative AI has only amplified these concerns, with 100% of marketers agreeing in a recent survey that generative AI presents a brand safety and misinformation risk, and 88.7% describing that risk as moderate to significant.
Social media carries the highest brand risk of all digital media channels, according to just over half of advertisers. Just recently, Meta apologized after Instagram users reported seeing extreme violence in their Reels feeds, including videos of people being murdered. “This is an example of a brand safety concern that’s really hard for brands and agencies to get ahead of,” says Marshall. “I do think it brings up larger questions around what platforms are safe and effective for advertisers, and what consumers expect from brands who run on those platforms.” What’s more, content moderation rollbacks at social media platforms like Facebook, Instagram, and X have heightened the risks of those environments.
Beyond social media, brand safety made headlines recently as a result of an Adalytics report that found that multiple adtech companies had placed ads for major brands on websites hosting child sexual abuse material (CSAM). (Note: The report cites Basis as an adtech vendor that did not serve ads on any of the sites in question.) This extraordinary brand safety crisis underscores how critical it is for brands to have robust, multi-layered systems to ensure their advertising content is only shown in safe and suitable environments—not only to safeguard consumer trust, but to avoid ethical catastrophes such as these.
Of course, there’s a case to be made that consumers are now savvy enough to know that brands aren’t choosing to serve ads next to disturbing content, hate speech, or misinformation—particularly on social media. Still, 82% say it’s important to them that the content around online ads is appropriate, and three-quarters say they would feel less favorable towards brands that serve ads on sites that contain misinformation. Considering this majority opinion as well as the broader culture of consumer distrust, brands who prioritize brand safety likely stand to gain a competitive advantage over their peers who take a laxer approach.
In the face of deepening consumer distrust, heightened social tensions, and growing scrutiny around corporate behavior, earning and maintaining trust from target audiences will be a defining priority for today’s most successful brands—and authenticity, consistency, data privacy, and brand safety will be foundational elements of their strategies.
For marketing leaders, now is the time to double down on trust as a core metric of success. Failing to do so may carry financial consequences in a world where consumers are spending more intentionally and brand loyalty is increasingly difficult to earn and maintain.
Connected TV has become a driving force in digital advertising, with many media buyers now bringing over a decade of experience to the table. Yet, the landscape continues to evolve rapidly, as new technologies, shifting consumer habits, and advancements in measurement reshape how advertisers approach the space.
In this webinar, Comscore Vice President of Emerging Solutions Becca Marco joins host Noor Naseer to break down essential strategies, best practices, and trends to help advertisers make the most of the CTV opportunity in 2025.
Reaching the right person with the right message at the right time has long been the cornerstone of advertising strategy. But in an era where misinformation spreads quickly and consumer trust is difficult to earn, advertisers must also consider the integrity of the content their ads appear alongside.
According to the World Economic Forum, experts across the globe have identified misinformation and disinformation as the most severe global risk over the next two years. While this threat affects everyone, it poses unique challenges for the advertising industry. More and more AI-generated content is flooding the internet, and AI is increasingly being used to create made-for-advertising sites (MFAs), which are filled with low-quality content at best and blatant mis- and disinformation at worst. In this context, marketing teams must double down on brand safety efforts. At the same time, advertisers have a responsibility to ensure their budgets don’t inadvertently fund the spread of misleading or harmful information.
That responsibility has grown harder to fulfill amidst recent content moderation rollbacks. Several platforms have eliminated or reduced their programs for regulating the spread of mis- and disinformation, instead opting for community-driven approaches that outsource content moderation to users rather than dedicated teams of professionals. Meta, for instance, recently removed its fact-checking program, instead adopting a user-sourced “Community Notes” approach (similar to the one used by X). The company also updated its Hateful Content guidelines to allow previously banned content to remain on the platform. These changes signal a broader industry trend—one that puts more onus on advertisers to ensure their media dollars align with brand safe, responsible content.
Amidst these challenges, it’s no surprise that brand safety and suitability are at the forefront of advertisers’ minds. In fact, 60% of programmatic advertisers say that it’s their biggest concern. The question is: How can advertisers take a proactive approach to misinformation to not only protect their brands but also build a safer, more trustworthy digital ecosystem?
Noor Naseer, Basis Technologies’ VP of Media Innovations & Technology, explored this challenge in detail in her presentation at SXSW 2025 in Austin. She unpacked how association with misinformation impacts brand perception and consumer trust, shared strategies for maintaining authenticity and credibility, offered tips for ensuring brand safety, and more. Check out the video below for all her insights and recommendations on how marketers can tackle advertising in the misinformation age:
[Video: Noor Naseer’s SXSW 2025 session on advertising in the misinformation age]
Want to dive further into Noor’s presentation? Click here to download the slides from her talk.
AI and misinformation are shaping up to be two of the most disruptive forces in advertising today.
As generative AI rapidly evolves, it is transforming nearly every part of the advertising workflow—spanning creative development, audience targeting, personalization, media planning, and beyond. But alongside its various benefits, gen AI also brings risks. Brand safety is among the most pressing, with a whopping 100% of marketing professionals believing the technology poses a threat in terms of brand safety and misinformation.
Generative AI is a significant driver of the spread of low-quality content and mis- and disinformation online. As gen AI tools become more widely available, bad actors are using them to rapidly produce low-quality content, such as made-for-advertising sites and misinformation-filled webpages designed around popular search terms. The challenge of misinformation is compounded by the recent trend of social platforms eliminating or rolling back their content moderation efforts.
For brands and advertisers, this raises significant brand safety concerns, including damage to reputation, wasted media spend, and erosion of consumer trust. And with the current administration signaling a pro-AI, deregulatory stance, the landscape is becoming more complex, leaving brands and advertisers to navigate both new opportunities and rising risks without much federal regulatory guidance.
In this context, it’s critical for industry professionals to stay informed. Keeping up with these evolving threats isn’t just about gaining a competitive edge—it’s essential to leveraging innovation responsibly and building trust with consumers. To that end, here are several essential resources to help digital advertisers navigate misinformation and AI in 2025:
After years of increased regulatory scrutiny, advertisers face a starkly different environment under the Trump Administration. With its pronounced pro-AI stance and lighter regulatory approach, this new landscape presents new opportunities and challenges that stand to significantly reshape the marketing industry.
How do marketing and advertising professionals really feel about AI? From insights on how teams are leveraging the technology, to its impact on efficiency, to how industry professionals believe the tech will impact the future of marketing, this comprehensive report offers an in-depth look at the current sentiments, challenges, and opportunities surrounding AI in marketing.
AI introduces a range of new risks and considerations for advertisers, including potential legal challenges related to data privacy, deceptive advertising, and copyright, as well as consumer distrust of the technology. Learn how advertisers can balance these challenges while still leveraging AI’s benefits.
How does association with misinformation impact brand perception, audience trust, and consumer behavior? And how can advertisers embrace media strategies that build and maintain brand authenticity and credibility? In her 2025 session at SXSW, Basis’ Noor Naseer offered a deep dive into these topics.
Amidst a broader trend of content moderation rollbacks, Meta announced it will no longer use fact-checkers for content posted to Facebook, Instagram, or Threads. The company has also updated its Hateful Content guidelines to allow users to post controversial and/or offensive content that was previously banned. Together, these actions introduce new brand safety risks for advertisers.
Social media is a critical part of any brand’s marketing mix. But with the rising proliferation of hate speech and mis- and disinformation on these platforms, compounded by recent content moderation rollbacks, advertising leaders must stay informed and proactive in monitoring and addressing the issue to protect their brand/clients.
Connected TV is one of the fastest-growing advertising channels. Yet with its rapid growth has come increased risks, particularly around brand safety. To harness the full potential of this booming channel, teams must take a proactive approach to prioritize consumer trust and brand safety from the start.
Jaime Vasil is Group VP of Candidates and Causes at Basis
Political advertising’s connected TV (CTV) evolution was in full effect during the 2024 US elections. The trend that started to show life two cycles ago is now a regular part of the media plan for candidates, campaigns, PACs, and other organizations looking to reach and influence voters.
CTV’s emergence as a political powerhouse has been developing steadily over the last four election cycles. In 2020, we saw a dramatic rise in programmatic advertising and the awakening of CTV for elections. In 2022, candidates and causes began taking CTV more seriously and lifting direct spending, as that was the primary tactic for buying inventory at the time. This development continued to evolve in 2024.
Based on an evaluation of more than 1,400 campaigns for state, local, and national races that managed their digital ad buying through Basis’ advertising automation platform—accounting for more than $130 million in political ad spend across video, display, native, audio, and text ads—we identified several clear trends, outlined in the key takeaways below.
We may not need to wait until 2026 to see how these trends develop further. With key state and local races in 2025, and seemingly ever-increasing political spending, political marketing practitioners can apply and optimize what they’ve learned recently. There is no rest in political campaigning, and this may well be the new normal.
Key Takeaways:
Video ads in their multiple forms are a staple of political advertising, as the format allows for rich storytelling about candidates, opponents, and issues. Video has accounted for the majority of ad spend ever since we started tracking it in 2018. With CTV ad opportunities growing faster than ever, more availability through programmatic buying, and multiple targeting and measurement options, there may still be more room for video’s share of ad spend to expand.
Key Takeaways:
The availability of CTV ad opportunities is compelling political marketers to spend on these devices, and there are more ways to buy that inventory today than there were two years ago. In programmatic channels, marketers have more choices of publishers, vendors, and tactics. Private marketplaces and programmatic guaranteed deals, which are far more prevalent today, may be a good fit for campaigns that were buying CTV ads through direct publisher engagement in 2022.
Key Takeaways:
Basis’ programmatic CPM index showed steady, albeit below-average, pricing in the first five months of 2024. Pricing began climbing gradually in July and peaked in the last 35 days of the election. The index compares each month’s average CPM to the average CPM for the whole election cycle for political marketers. Video drove the CPM increases, as display pricing dipped to average levels in the summer and early fall.
Considering the competitive spending in the last months of the election, political advertisers could alleviate the inflation by locking in deals in programmatic channels through private marketplaces and programmatic guaranteed deals.
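In formula form, the index described above normalizes each month’s average CPM against the cycle-wide average (our paraphrase of that description, not Basis’ published methodology):

$$\text{CPM index}_m = \frac{\overline{\text{CPM}}_m}{\overline{\text{CPM}}_{\text{cycle}}}$$

Values below 1 indicate below-average pricing for month $m$, matching the steady early-2024 readings, while the late-cycle peak reflects months where video demand pushed average CPMs well above the cycle-wide mean.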
Key Takeaways:
Based on political advertising sales, Basis data shows the top suppliers in open-bidding programmatic, the top exchanges and SSPs that sell PMPs, and the top curated deal groups that Basis pre-arranged for all elections advertisers. CTV inventory was present across the leading suppliers.
Basis curated 50+ deal groups that were approved for political advertising. The deal groups were assembled from one publisher, multiple publishers, or multiple PMPs.
Our previous reports focused on direct sellers, which were dominated cycle-over-cycle by YouTube, Hulu, and Facebook, and also showed the increasing popularity of CTV vendors.
Key Takeaways:
Political campaigns held half of their money until the end to reach undecided voters and get out the vote, matching the pattern of previous election years. However, as voters gain more opportunities to submit ballots early, that share eroded slightly this cycle, even with the country energized for a presidential election. A pattern to watch is whether campaigns will continue to encourage early voting.
Key Takeaways:
California and Michigan garnered the most ad impressions. Of note, California had large proposition campaigns in 2024, and Michigan has consistently been a swing state for the past few cycles. They were followed closely by battleground states, as well as Washington.
Beyond the top 14 states, the remaining states received minuscule shares of ad impressions, even populous ones such as Florida, New York, and Illinois.
Basis served more than 560 million programmatic ad impressions in the final 10 days of the election cycle, and more than 800 billion programmatic CTV ad impressions across the entire 2024 cycle.
The collision of programmatic and CTV has transformed elections advertising. The next evolution of these trends will revolve around the use of private marketplaces, programmatic guaranteed buying and curated inventory. The targeting within these methods is poised to improve, setting the stage for significant use of audience data in the next election.
-----
Basis has been trusted by agencies and consultants in politics, public affairs, and advocacy for over 17 years. Basis is a comprehensive advertising automation platform with an integrated suite of modular applications, each specializing in unique areas such as planning, operations, reporting, and financial reconciliation across programmatic, publisher-direct, search, and social channels. Since 2007, Basis has helped power digital media for thousands of political campaigns, independent expenditure committees, and issue advocacy advertisers. Basis is headquartered in Chicago with clients throughout North America, South America and Europe, including in Washington, D.C., with its Candidates + Causes team.
Learn more about Candidates + Causes advertising with Basis
In a time of obsession over AI and automation, are too many media buyers choosing to set it and forget it? Albert Thompson, Managing Director of Digital Innovation at Walton Isaacson, thinks so.
In this episode, Thompson shares how bringing more human insight to programmatic campaigns is vital for optimal media performance. Together with host Noor Naseer, he explores how buyers can blend automation with human expertise to drive more successful campaigns.
As generative AI and content moderation rollbacks transform the social media landscape, advertisers must navigate a new era of brand safety challenges.
Consumers, advertisers, and regulators alike have long voiced concerns around the spread of misinformation and hate speech on social media platforms. In 2024, global information experts ranked social media owners among the top threats to a trustworthy online news environment—just the latest in a long history of criticism over their inability to mitigate (or disinterest in mitigating) the proliferation of harmful content.
Researchers have found that social media algorithms can amplify hate, divisive content, and misinformation, in part because these algorithms are designed to showcase posts that will receive high levels of engagement, and inflammatory content often garners lots of comments and clicks. These concerns have hit new levels of urgency in recent years with the rise of generative AI, which can be used to create deepfakes and other forms of disinformation at greater scale and lower cost, making it easier than ever for bad actors to craft disinformation campaigns.
At the same time, the biggest players in the social media space have recently revamped and rolled back their systems for moderating content, with critics worrying the changes will make it even easier for hate speech and misinformation to proliferate on those platforms.
The spread of hate speech and mis- and disinformation on social media is everyone’s problem—from the social platforms themselves, to the consumers who spend nearly two and a half hours a day with them, to the advertisers who will spend over $100 billion on them this year. Because social media is such a critical part of any brand’s marketing mix, and with these problems likely to intensify as AI evolves and content moderation is reduced, advertising leaders must strategize to protect their brands/clients and consumers in this new era of brand safety.
In tandem with concerns around the spread of hate speech and misinformation on social media, advertisers have grown increasingly worried about brand safety and brand suitability, naming it their top programmatic advertising concern while ranking paid social as the channel with the highest brand safety risk.
The emergence of AI has only heightened those fears, with one recent survey finding an astonishing 100% of marketers agreeing that generative AI poses a brand safety and misinformation risk to their industry, and 88.7% calling the threat moderate to significant.
Advertising professionals are right to feel concerned, with over 80% of consumers saying it’s important to them that the content surrounding ads is appropriate, and three-quarters saying they feel less favorable towards brands who advertise on sites that spread misinformation. What’s more, 89% of Americans feel that social media companies should implement stricter policies to curb the spread of misinformation on their platforms. Those social media companies, however, have a long history of failing to do so.
Social platforms have been in the spotlight because of their penchant for amplifying hateful and inaccurate content for a while now. Back in 2016, a BuzzFeed editor discovered a cluster of fake news sites registered in Veles, Macedonia, which spread false stories that circulated widely on Facebook. These articles, which were run for profit via Facebook ads, gained massive traction on social media during the US presidential election due to their sensationalism, with headlines like “Pope Francis Shocks World, Endorses Donald Trump for President.”
This marked the beginning of the public’s understanding of “fake news” and its circulation on social media. Fast-forward to 2022, and Meta, Twitter (now X), TikTok, and YouTube were under investigation by the US Senate Homeland Security Committee, which found that the social media companies’ business models amplified “dangerous and radicalizing extremist content, including white supremacist and anti-government content.”
Around the same time, a NewsGuard investigation explored the dissemination of misinformation on TikTok. Researchers found that when they searched keywords related to important news topics such as COVID-19 and Russia’s invasion of Ukraine, almost 20% of the search results contained misinformation. This is especially worrisome today, given that about four in 10 young adults in the US say they regularly get their news from TikTok.
While the amount of misinformation on social media was alarming back in 2022, it’s only grown more so in the years since as generative AI has risen in prominence. Today, generative AI tools equip users with the ability to quickly create convincing fake photos, videos, and audio clips—tasks that, just a few years ago, would have taken entire teams of people as well as time, technical skill, and money. Now, over half of consumers are worried that AI will escalate political mis- and disinformation, and 64% of US consumers feel that those types of content are most widespread on social media.
Beyond the many political and ethical concerns these problems raise, advertisers must understand the spread of hate speech and mis- and disinformation on social media because of the significant brand safety threats it poses. And because social platforms are entrusted with advertisers’ dollars—indeed, those dollars make up their biggest source of revenue—advertisers are likely interested in how these companies are working to protect them from emerging threats.
If advertisers, researchers, and social media users alike are concerned about the spread of hate speech and mis- and disinformation on social media, social platforms must be invested in mitigating those problems, right?
Well…kind of.
On the heels of a rough couple of years for tech companies, during which several popular social platforms missed revenue expectations and saw their stocks plummet, many of the teams and projects those companies set up to enhance trust, safety, and ethics on their platforms were shuttered or dramatically reduced between late 2022 and early 2023. Meta shut down a fact-checking tool designed to combat misinformation and laid off hundreds of content moderators and other positions related to trust, integrity, and responsibility. X laid off its entire ethical AI team, save one person, at the end of 2022, as well as 15% of its trust and safety department. In December 2023, the media advocacy group Free Press found that Meta, X, and YouTube had collectively removed 17 policies that safeguarded against hate and disinformation on their platforms.
In 2024, even after a strong Q2, Meta shut down CrowdTangle, a research tool that researchers, journalists, and civil society groups used to track and understand how information is disseminated on Facebook and Instagram. While Meta replaced CrowdTangle with what it calls the Meta Content Library, this new set of tools is more limited than CrowdTangle was, and Meta has restricted access to only a few hundred pre-selected researchers. The fact that social platforms downsized so many of their trust and safety teams and programs just before a presidential election year—during which researchers, technologists, and political scientists forecasted disinformation acting as an unprecedented threat—prompted some advertisers to question whether these platforms are doing enough to address their brand safety concerns.
The trend of social platforms reducing content moderation has continued in 2025, with Meta announcing an end to its third-party fact-checking program in early January. In its place, Meta is implementing an X-inspired feature called Community Notes, which will rely on Facebook, Instagram, and Threads users to report posts they feel are inaccurate or offensive. Meta also updated its Hateful Content guidelines, implementing a more lenient approach that allows content that was previously banned—such as discussion of “women as household objects or property” or “transgender or non-binary people as ‘it.’” These changes were swiftly condemned by human rights organizations, but given Meta’s entrenchment in advertisers’ marketing strategies, it seems unlikely that brands will pull back from spending on its platforms in the way many have with X.
In fact, these changes come with potential upsides for Meta and, in turn, advertisers as well. Because controversial content often garners more engagement, Meta’s move to loosen content moderation—and reinstate allowance of political content—could boost user engagement and time spent on its platforms. However, advertisers should closely monitor developments in the coming months to see whether these positive outcomes materialize, and if they do, whether they outweigh potential downsides, such as alienating certain communities on Facebook, Instagram, and Threads.
Considering these persistent brand safety threats, as well as social networks’ recent disinvestment in their trust and safety teams and programs, how can advertisers protect their brands or clients from brand safety threats on social platforms? While there’s no perfect way to avoid serving ads near misinformation and hate speech on social media, there are measures advertising teams can take to minimize risk.
First, despite recent cutbacks, most major players in the social space do still have policies and programs designed to reduce the amount of inaccurate and hateful content on their platforms. For example, in addition to content moderation by users, Meta and X employ AI-led content moderation (a tactic also used by TikTok and Snap).
Major social platforms also offer an array of brand safety tools and controls that advertisers can tap into. Before its January announcements around updating its content moderation systems and hateful content guidelines, Meta released a new set of brand safety controls, including a feature that allows advertisers to mute comments on specific ads before they’re published.
To further safeguard brand safety, advertisers can work with partners like DoubleVerify, which offers pre-screen protection capabilities that help to ensure ads are served in safe and suitable environments. They can also leverage allow lists and block lists to better control the environments in which their ads are served.
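As a minimal illustration of how list-based controls work, the sketch below screens a page’s domain against hypothetical allow and block lists; verification partners and buying platforms apply equivalent checks pre-bid and at far greater scale:

```python
# Minimal sketch of allow/block-list screening at decision time.
# The domains here are illustrative placeholders only.
from urllib.parse import urlparse

ALLOW_LIST = {"trustednews.example", "sportsfans.example"}
BLOCK_LIST = {"mfa-clickbait.example", "misinfo.example"}

def is_brand_safe(page_url: str, require_allow_list: bool = False) -> bool:
    """Block-listed domains always fail; optionally require the allow list."""
    domain = urlparse(page_url).netloc.lower().removeprefix("www.")
    if domain in BLOCK_LIST:
        return False
    if require_allow_list:
        return domain in ALLOW_LIST
    return True

print(is_brand_safe("https://www.misinfo.example/article"))      # False
print(is_brand_safe("https://trustednews.example/story", True))  # True
```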
Continuous social media monitoring—done by teams who are trained to detect mis- and disinformation—is another important way to safeguard brand content on social media. Advertisers can even harness the power of AI for good in this area, with AI-driven social listening tools that make it easy to monitor and keep track of online conversations involving specific brands.
And, because the threat is so prevalent, marketing leaders should ensure their teams have a plan of action in case their brand’s or client’s ads appear next to harmful content on social. This is a key step, given that brands can regain some favorability with consumers when they denounce misinformation.
In the face of pressing concerns over misinformation, disinformation, and hate speech on social media, many advertising leaders will want to stay vigilant about brand safety when advertising on social platforms. As generative AI continues to evolve and content moderation on social platforms is reduced, the spread of harmful content will only grow, amplifying risks for brands and consumers alike. For marketers, it’s key not only to monitor these challenges, but also to take proactive steps to safeguard brand integrity.
—
Curious to learn more about how leading marketers and advertisers across the US feel about AI? Check out our report, AI and the Future of Marketing, to see how agencies and brands are thinking about and using the technology, as well as how they feel about the ways it is shaping the industry.
AI remains the pivotal topic of conversation across the world of business—from Wall Street, to boardrooms, to sales pitches, to paid media.
In the advertising world, artificial intelligence has already been at work for over a decade, powering programmatic advertising and optimizing media buying across the open internet. Now, recent developments in the realm of generative AI are revolutionizing the landscape even further. Given the Trump Administration’s pro-AI position and recent private sector investments of up to $500 billion in AI-related infrastructure, the next few years are poised to deliver continued innovation and widespread adoption of the technology.
As agencies and brands navigate these new opportunities, their leaders must balance two directives: First, embracing AI tools to increase efficiencies, grow revenue, and stay at the cutting edge of innovation. And second, protecting their businesses from the risks that come along with these tools. It’s a fine line to tread, but leading organizations are finding ways to approach these new technologies so that they benefit their businesses and bottom lines while minimizing liabilities.
To do this, advertisers must thoroughly understand the risks posed by AI. The most significant ones fall into three main categories: brand safety concerns tied to gen AI-created misinformation, considerations around how AI-generated advertising will land with a consumer base that’s largely wary of AI, and potential legal risks to agencies and brands related to data privacy and deceptive advertising practices.
Industry leaders must grow increasingly knowledgeable on these topics and develop best practices, processes, and skillsets across their teams to ensure any forays into new AI-driven advertising tools are safeguarded against risk.
AI offers many promising benefits for advertisers, from cost efficiency to speed to ease of launch. However, these advantages come with some significant brand safety concerns. It’s important for advertisers to understand these threats, implement safeguards around their use of AI, and stay up to date on this quickly developing landscape in order to make the most of these tools and solutions without opening themselves up to consumer backlash and wasted spend.
Generative AI is one of the biggest drivers of brand safety concerns today, with 100% of industry professionals believing the technology poses a brand safety and misinformation risk to marketers and advertisers, and 88.7% calling the risk moderate to significant. Gen AI technology is not perfect, and these tools have regularly demonstrated a tendency to produce content that ranges from low-quality and likely ineffective for advertising at one end of the spectrum to inaccurate or offensive at the other.
Two particular areas of concern include generative AI’s tendency to make up false information (a flaw known as AI hallucinations) and indications of biases in AI-generated content (due to large language models relying on human inputs and human-generated content, which often contain biases).
These concerns have been on full display in recent years. In 2024, for example, Google had to suspend the image-generating capabilities of its Gemini chatbot, which is integrated into Google’s advertising tools, after it produced historically inaccurate images—specifically, images of “multi-ethnic Nazis and non-white U.S. Founding Fathers.” The controversy demonstrates how developers are still learning how to program these technologies to effectively avoid bias: Gemini was programmed to avoid racial and ethnic bias, which, ironically, backfired when the images in question ended up being inaccurate.
Of course, this doesn’t mean that advertisers should forgo the efficiencies offered by generative AI. However, it’s critical that teams understand the risks and put proper safeguards in place to minimize their likelihood.
“If teams are thoughtful in reviewing the outputs, then using AI to repurpose existing creative or develop elements of media assets should be fine,” says Molly Marshall, Client Strategy and Insights Partner at Basis. “But AI can’t currently replicate the creative process in terms of identifying a strong insight and developing creative that meaningfully relates to a target consumer, so AI-generated creative should complement and iterate upon an existing strategy, not wholly develop it.”
Generative AI has also prompted some headaches for brands that have started using AI-powered chatbots to streamline and personalize customer service on their websites. The technology promises to transform the customer service industry. However, reports found that, when tested, chatbots offered by TurboTax and H&R Block provided inaccurate information at least half of the time.
“Chatbots offer brands a big opportunity to streamline communication with customers, especially as brick-and-mortar stores close and more customer service is going virtual,” says Marshall. “But the potential damage from chatbots that share inaccurate information may outweigh those benefits for some brands.”
Advertisers must also prepare for the growing presence of generative AI in online content. AI-generated material is becoming increasingly common—for example, the amount of AI content in the top 20 Google search results jumped from just 5.6% when ChatGPT was first released in 2022 to more than 19% in early 2025.
Generative AI has also made it easier for bad actors to create made-for-advertising sites (MFAs) filled with low-quality content, misinformation-filled pages strategically developed around key search terms, and other content that could pose significant risks to brands that run ads alongside it. This risk is amplified by the new administration’s lighter regulatory approach—particularly its executive order that “revokes certain existing AI policies and directives that act as barriers to American AI innovation.” Though this deregulatory stance may create space for more innovation, it may also make it easier for those with malicious intent to flood the internet with low-quality, AI-generated, mis- and disinformation-filled content. As a result, advertisers will need to be more deliberate around their ad spend and put new guardrails in place to avoid waste as well as risky (if not downright harmful) ad placements.
Programmatic advertisers, in particular, will need to seek out solutions that help steer their dollars away from MFA sites and other brand-unsafe environments, as research has found that 15% of programmatic budgets are spent on MFAs. “Advertisers must be able to react in real-time to block misleading sites and keywords,” says Marshall. Teams should also embrace technological solutions like MFA block lists to help minimize the risk.
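As a simple illustration of reacting to risky sites and keywords, the sketch below flags pages whose text contains known MFA-style bait phrases. The keyword list and threshold are purely illustrative; production tools rely on vendor-maintained block lists and classifiers rather than a naive list like this:

```python
# Minimal sketch of keyword-based screening for suspected MFA or
# misinformation pages. Keywords and threshold are illustrative only.
RISK_KEYWORDS = {"miracle cure", "you won't believe", "shocking truth"}

def risk_score(page_text: str) -> int:
    """Count how many risky phrases appear in the page text."""
    text = page_text.lower()
    return sum(1 for kw in RISK_KEYWORDS if kw in text)

def should_block(page_text: str, threshold: int = 1) -> bool:
    return risk_score(page_text) >= threshold

print(should_block("The shocking truth about this miracle cure"))    # True
print(should_block("Quarterly earnings beat analyst expectations"))  # False
```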
These concerns have been compounded by the recent trend of platforms rolling back their content moderation efforts. For instance, Meta recently replaced its fact-checking program with a “Community Notes” approach that outsources content moderation to users, and updated its Hateful Content guidelines to allow users to share controversial and/or harmful content that was previously banned. This pullback of content moderation, coupled with the proliferation of AI-generated content that can be low-quality if not blatantly incorrect or harmful, makes it critical for brands and agencies to develop strong brand safety frameworks and to prioritize partnerships with premium, trusted publishers. Agencies and brands may also eventually need to develop teams focused on dealing with misinformation and disinformation to protect their spend.
Advertisers must also balance their own enthusiasm around AI with a consumer base that isn’t quite so excited. While nearly 77% of industry professionals believe that generative AI will have a positive impact on marketing and advertising, the majority of consumers don’t trust the technology: A 2024 report from the Edelman Trust Institute found that US consumer trust in artificial intelligence has fallen by 15% in the last five years, from 50% to 35%. And when it comes to the use of AI in advertising, nearly two-thirds of US adults say they are either somewhat or very uncomfortable with AI-generated ads.
These opinions don’t necessarily mean that advertisers should stop embracing the AI-led tools that work for them—especially considering that AI has effectively driven behind-the-scenes advertising features such as machine learning, algorithmic optimization, bid multipliers, and group budget optimization for some time now.
What it does mean is that leaders need to be cognizant of consumer sentiment toward AI, and to act accordingly. This could include informing consumers about how AI is used in a client or stakeholder’s marketing efforts, via a social media post or a dedicated page on their website. Brands may also opt to disclose when an ad or content is generated by AI, as adding disclosures can lead to a 47% increase in the appeal of those ads, a 73% increase in the trustworthiness of those ads, and a 96% jump in trust for the brands behind them.
Data privacy is also top of mind for consumers, with 71% of US consumers worrying that their digital activities put them at risk for security incidents. And 81% of consumers who have heard of AI feel that companies will use the technology to collect and analyze their personal information in ways people aren’t comfortable with. Organizations can gain consumers’ trust by offering transparency around how they safeguard their customers’ data, and by prioritizing partnerships with privacy-focused organizations or gaining voluntary certifications like SOC 2 compliance that indicate a commitment to data security and ethical data practices.
Leaders who prioritize this type of transparency can develop stronger, more trust-based relationships with their consumer base—which may provide a key edge in an increasingly competitive environment.
Finally, there are a variety of legal concerns advertising leaders must account for as they adopt new AI tools. Artificial intelligence has advanced more quickly than legislators can keep up with it, but there are a variety of regulations that have been introduced in the US and beyond that aim to mitigate the threats posed by AI. At the same time, advertisers must ensure compliance with existing legislation to avoid hefty fines and other legal consequences.
As advertisers grapple with widespread signal loss, AI has emerged as a powerful tool for enabling privacy-friendly personalized marketing.
AI can enable lookalike and predictive audiences based on first-party data, and generate a variety of data-based insights to help advertisers better understand their audience and their consumers’ path to purchase. Many advertisers are embracing these tools as a way to make up for the loss of cookies and other factors impacting signal loss.
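As a minimal sketch of the lookalike idea, the example below expands a seed audience of known customers by admitting prospects whose feature vectors sit close to the seed centroid. The features, values, and distance threshold are hypothetical; real systems use learned models over far richer first-party signals:

```python
# Minimal sketch of lookalike expansion from a first-party seed audience.
# Features, values, and the distance threshold are all hypothetical.
import math

# Feature vectors: [site visits/week, avg order value, email opens/month]
seed = {"s1": [5.0, 80.0, 3.0], "s2": [4.0, 75.0, 2.0]}       # known customers
prospects = {"p1": [4.5, 78.0, 2.5], "p2": [0.5, 10.0, 0.0]}  # candidates

# Centroid of the seed audience: one mean per feature.
dims = len(next(iter(seed.values())))
centroid = [sum(vec[i] for vec in seed.values()) / len(seed) for i in range(dims)]

def distance(a: list[float], b: list[float]) -> float:
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Prospects close enough to the seed centroid join the lookalike audience.
lookalikes = [pid for pid, vec in prospects.items() if distance(vec, centroid) < 5.0]
print(lookalikes)  # ['p1']
```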
At the same time, AI technologies can pose some data privacy-related risks. Many AI-powered advertising solutions use personal data to fuel their machine learning algorithms, and depending on the tool itself, there’s some ambiguity around where exactly all that data comes from, where it’s stored, and who can access it. What’s more, some artificial intelligence tools leverage the data they collect to deduce sensitive personal information such as location, health information, and political or religious views.
To ensure the ethical use of consumer data and to protect their businesses from legal consequences, advertising organizations must thoroughly vet any data-focused vendors or tools to ensure their data gathering, processing, analyzing, and storage systems comply with digital advertising regulations—and, of course, ensure their own data systems comply as well. Leaders must also stay on top of new AI- and data privacy-related regulations as they take hold, even if this is an area that might see less regulatory activity under the Trump Administration.
Another area of legal concern for advertisers has to do with the Federal Trade Commission (FTC), which is responsible for safeguarding US consumers from unfair or deceptive advertising practices.
One such practice relates to the use of dark patterns, or design techniques that can manipulate consumers into purchasing an item or service or providing personal data—and which can be created and enhanced with AI. Identifying and “crack[ing] down on businesses that deploy deceptive and unlawful dark patterns” has been a focus of the FTC for many years. On the state level, the Colorado Privacy Act and the California Privacy Rights Act (CPRA) have also outlined regulations around dark patterns in advertising.
Though the new chairman of the FTC appointed by President Trump, Andrew Ferguson, could very well take a lighter regulatory approach to AI than the prior chairwoman, Lina Khan, advertisers should remain cautious. Even with the potential for a more lenient stance on AI oversight, the FTC’s core mission to protect consumers from misleading claims and/or harmful practices remains unchanged.
Lastly, advertisers must pay close attention to any ownership- and copyright-related legal concerns around AI-generated content.
In January 2025, the US Copyright Office released a report on the legal and policy issues related to AI and copyright. This report concludes that “the outputs of generative AI can be protected by copyright only where a human author has determined sufficient expressive elements.” The key phrase here is "sufficient expressive elements," which suggests that merely pressing a button to create AI-generated content isn’t enough—there must be human involvement in curating, editing, or refining the work in a way that demonstrates original authorship. Without that kind of human involvement in the creation process, AI-generated content might not qualify for copyright protection.
At the same time, some ambiguity remains around what exactly constitutes “sufficient expressive elements,” and this will likely be determined on a case-by-case basis. As such, advertising teams must establish and adhere to strong creative processes with clear documentation of how AI is being used to develop assets—particularly those they might want to copyright. Advertising leaders should also stay on top of any further developments in this area to ensure compliance as more legislators and regulators refine rules around the ownership of AI-generated works. Enlisting a solid legal counsel or team will be key to navigating the complexity of this arena.
By investing the time in advancing their teams’ AI knowledge and skillsets now, leaders will set their organizations up for success as the technology becomes increasingly prevalent throughout digital advertising. The sooner advertisers learn how to implement and take advantage of these tools in a discerning and ethical way, the greater their competitive edge will be over those who procrastinate.
—
Want to learn more about how advertisers are approaching AI? We surveyed marketing and advertising professionals from top agencies, brands, non-profits, and publishers to better understand advertiser sentiments around the technology, as well as how they’re leveraging AI-driven tools in their work. Check out the top takeaways in our report, AI and the Future of Marketing.