Mar 11 2025
Clare McKinley

What Can Advertisers Do About Misinformation and Hate Speech on Social Media?

As generative AI and content moderation rollbacks transform the social media landscape, advertisers must navigate a new era of brand safety challenges.

Consumers, advertisers, and regulators alike have long voiced concerns around the spread of misinformation and hate speech on social media platforms. In 2024, global information experts ranked social media owners among the top threats to a trustworthy online news environment—just the latest in a long history of criticism over their inability to mitigate (or disinterest in mitigating) the proliferation of harmful content.

Researchers have found that social media algorithms can amplify hate speech, divisive content, and misinformation, in part because these algorithms are designed to surface posts that will receive high levels of engagement, and inflammatory content often garners lots of comments and clicks. These concerns have hit new levels of urgency in recent years with the rise of generative AI, which can be used to create deepfakes and other forms of disinformation at greater scale and lower cost, making it easier than ever for bad actors to craft disinformation campaigns.
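
To make that dynamic concrete, here is a minimal Python sketch of engagement-based ranking. The `Post` fields and scoring weights are invented for illustration; real platform ranking systems are proprietary and far more complex.

```python
# Minimal sketch of engagement-based feed ranking (illustrative only;
# the weights and fields here are hypothetical, not any platform's).
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    comments: int
    shares: int
    clicks: int

def engagement_score(post: Post) -> float:
    # Hypothetical weights: "active" signals (comments, shares)
    # count more than passive clicks.
    return 3.0 * post.comments + 2.0 * post.shares + 1.0 * post.clicks

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest predicted engagement surfaces first, which is how
    # inflammatory posts that provoke comments can rise to the top.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Calm, factual update", comments=4, shares=2, clicks=120),
    Post("Outrage-bait claim", comments=90, shares=40, clicks=300),
])
print([p.text for p in feed])  # the outrage-bait post ranks first
```

Because the sort key rewards whatever draws comments and shares, a provocative post outranks a calmer one even when far fewer people genuinely value it.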

At the same time, the biggest players in the social media space have recently revamped and rolled back their systems for moderating content, with critics worrying the changes will make it even easier for hate speech and misinformation to proliferate on those platforms.

The spread of hate speech and mis- and disinformation on social media is everyone’s problem, from the social platforms themselves, to the consumers who spend nearly two and a half hours a day with them, to the advertisers who will spend over $100 billion on them this year. Because social media is such a critical part of any brand’s marketing mix, and because these problems are likely to intensify as AI evolves and content moderation is scaled back, advertising leaders must strategize to protect their brands/clients and consumers in this new era of brand safety.

The Importance of Consumer Trust and Brand Safety

In tandem with concerns around the spread of hate speech and misinformation on social media, advertisers have grown increasingly worried about brand safety and brand suitability, naming brand safety their top programmatic advertising concern while ranking paid social as the channel with the highest brand safety risk.

The emergence of AI has only heightened those fears, with one recent survey finding an astonishing 100% of marketers agreeing that generative AI poses a brand safety and misinformation risk to their industry, and 88.7% calling the threat moderate to significant.

Advertising professionals are right to feel concerned: over 80% of consumers say it’s important to them that the content surrounding ads is appropriate, and three-quarters say they feel less favorable toward brands that advertise on sites that spread misinformation. What’s more, 89% of Americans say social media companies should implement stricter policies to curb the spread of misinformation on their platforms. Those social media companies, however, have a long history of failing to do so.

The Rise of AI-Driven Hate Speech and Mis- and Disinformation on Social Media

Social platforms have been in the spotlight for years now because of their penchant for amplifying hateful and inaccurate content. Back in 2016, a BuzzFeed editor discovered a cluster of fake news sites registered in Veles, Macedonia, that were spreading false stories across Facebook. The sites, which were run for profit via Facebook ads, gained massive traction on social media during the US presidential election thanks to sensationalist headlines like “Pope Francis Shocks World, Endorses Donald Trump for President.”

This marked the beginning of the public’s understanding of “fake news” and its circulation on social media. Fast-forward to 2022, and Meta, Twitter (now X), TikTok, and YouTube were under investigation by the US Senate Homeland Security Committee, which found that the social media companies’ business models amplified “dangerous and radicalizing extremist content, including white supremacist and anti-government content.”

Around the same time, a NewsGuard investigation explored the dissemination of misinformation on TikTok. Researchers found that when they searched keywords related to important news topics such as COVID-19 and Russia’s invasion of Ukraine, almost 20% of the search results contained misinformation. This is especially worrisome today, given that about four in 10 young adults in the US say they regularly get their news from TikTok.

While the amount of misinformation on social media was alarming back in 2022, it’s only grown more so in the years since as generative AI has risen in prominence. Today, generative AI tools equip users with the ability to quickly create convincing fake photos, videos, and audio clips—tasks that, just a few years ago, would have taken entire teams of people as well as time, technical skill, and money. Now, over half of consumers are worried that AI will escalate political mis- and disinformation, and 64% of US consumers feel that those types of content are most widespread on social media.

Beyond the many political and ethical concerns these problems raise, advertisers must understand the spread of hate speech and mis- and disinformation on social media because of the significant brand safety threats it poses. And because social platforms are entrusted with advertisers’ dollars—indeed, those dollars make up their biggest source of revenue—advertisers are likely interested in how these companies are working to protect them from emerging threats.

Social Platforms Downsize Their Trust and Safety Teams

If advertisers, researchers, and social media users alike are concerned about the spread of hate speech and mis- and disinformation on social media, social platforms must be invested in mitigating those problems, right?

Well…kind of.

On the heels of a rough couple of years for tech companies, during which several popular social platforms missed revenue expectations and saw their stocks plummet, many of the teams and projects those companies had set up to enhance trust, safety, and ethics on their platforms were shuttered or dramatically downsized between late 2022 and early 2023. Meta shut down a fact-checking tool designed to combat misinformation and eliminated hundreds of content moderation and other positions related to trust, integrity, and responsibility. X laid off its entire ethical AI team, save one person, at the end of 2022, as well as 15% of its trust and safety department. In December 2023, the media advocacy group Free Press found that Meta, X, and YouTube had collectively removed 17 policies that safeguarded against hate and disinformation on their platforms.

In 2024, even after a strong Q2, Meta shut down CrowdTangle, a tool that researchers, journalists, and civil society groups used to track and understand how information is disseminated on Facebook and Instagram. While Meta replaced CrowdTangle with what it calls the Meta Content Library, this new set of tools is more limited than CrowdTangle was, and Meta has restricted access to only a few hundred pre-selected researchers. The fact that social platforms downsized so many of their trust and safety teams and programs just before a presidential election year, during which researchers, technologists, and political scientists forecasted disinformation acting as an unprecedented threat, prompted some advertisers to question whether these platforms are doing enough to address their brand safety concerns.

The trend of social platforms reducing content moderation has continued in 2025, with Meta announcing an end to its third-party fact-checking program in early January. In its place, Meta is implementing an X-inspired feature called Community Notes, which relies on Facebook, Instagram, and Threads users to flag posts they believe are inaccurate or misleading. Meta also updated its Hateful Conduct policy, adopting a more lenient approach that allows content that was previously banned, such as discussion of “women as household objects or property” or of “transgender or non-binary people as ‘it.’” These changes were swiftly condemned by human rights organizations, but given Meta’s entrenchment in advertisers’ marketing strategies, it seems unlikely that brands will pull back from spending on its platforms the way many have with X.

In fact, these changes come with potential upsides for Meta and, in turn, advertisers as well. Because controversial content often garners more engagement, Meta’s move to loosen content moderation, along with its decision to once again recommend political content, could boost user engagement and time spent on its platforms. However, advertisers should closely monitor developments in the coming months to see whether those gains materialize and, if they do, whether they outweigh potential downsides, such as alienating certain communities on Facebook, Instagram, and Threads.

What Advertisers Can Do

Protecting Brands/Clients

Considering these persistent threats, as well as social networks’ recent disinvestment in their trust and safety teams and programs, how can advertisers protect their brands or clients on social platforms? While there’s no foolproof way to avoid serving ads near misinformation and hate speech on social media, there are measures advertising teams can take to minimize risk.

First, despite recent cutbacks, most major players in the social space do still have policies and programs designed to reduce the amount of inaccurate and hateful content on their platforms. For example, in addition to content moderation by users, Meta and X employ AI-led content moderation (a tactic also used by TikTok and Snap).
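
As a rough illustration of what AI-led moderation involves, here is a short Python sketch of score-based triage. The thresholds and routing labels are hypothetical; none of these platforms publish the internals of their systems.

```python
# Illustrative triage logic for AI-assisted moderation. A classifier
# assigns each post a toxicity score; the thresholds below are invented.
def triage(toxicity_score: float) -> str:
    """Route a post given a model's toxicity score in [0.0, 1.0]."""
    if toxicity_score >= 0.90:
        return "auto_remove"    # high-confidence violation
    if toxicity_score >= 0.50:
        return "human_review"   # uncertain: escalate to a moderator
    return "allow"              # low risk: leave the post up

for score in (0.95, 0.62, 0.10):
    print(score, "->", triage(score))
```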

Major social platforms also offer an array of brand safety tools and controls that advertisers can tap into. Before its January announcements about changes to its content moderation systems and Hateful Conduct policy, Meta released a new set of brand safety controls, including a feature that allows advertisers to mute comments on specific ads before those ads are published.

To further safeguard brand safety, advertisers can work with partners like DoubleVerify, which offers pre-screen protection capabilities that help to ensure ads are served in safe and suitable environments. They can also leverage allow lists and block lists to better control the environments in which their ads are served.
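
For intuition, here is a minimal Python sketch of allow-list and block-list filtering. The account names and the `strict` flag are invented for illustration; commercial verification partners like DoubleVerify evaluate far richer signals than account handles alone.

```python
# Minimal allow-list / block-list check for ad placements
# (hypothetical account names and logic, for illustration only).
BLOCK_LIST = {"known_misinfo_page", "hate_group_account"}
ALLOW_LIST = {"verified_news_outlet", "vetted_partner_creator"}

def can_serve_ad(placement: str, strict: bool = False) -> bool:
    """Block-listed placements never qualify; in strict mode,
    only explicitly allow-listed placements do."""
    if placement in BLOCK_LIST:
        return False
    return placement in ALLOW_LIST if strict else True

print(can_serve_ad("known_misinfo_page"))                 # False
print(can_serve_ad("random_account", strict=True))        # False
print(can_serve_ad("verified_news_outlet", strict=True))  # True
```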

Continuous social media monitoring, done by teams trained to detect mis- and disinformation, is another important way to safeguard brand content on social media. Advertisers can even harness the power of AI for good in this area, using AI-driven social listening tools that make it easy to monitor online conversations involving specific brands.
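
As a toy example of what social listening automates, the Python sketch below flags posts that mention a brand alongside risky language. The brand name and keyword list are hypothetical, and simple keyword matching stands in for the ML models that production tools actually use.

```python
import re

# Hypothetical brand name and risk vocabulary; real social listening
# tools rely on trained models rather than static keyword lists.
BRAND = "AcmeCo"
RISK_TERMS = {"fake", "scam", "boycott", "hoax", "misinformation"}

def flag_mentions(posts: list[str]) -> list[str]:
    """Return posts that mention the brand alongside risky language."""
    flagged = []
    for post in posts:
        text = post.lower()
        if BRAND.lower() in text:
            words = set(re.findall(r"[a-z]+", text))
            if words & RISK_TERMS:
                flagged.append(post)
    return flagged

print(flag_mentions([
    "Loving my new AcmeCo sneakers!",
    "Heard the AcmeCo giveaway is a scam, boycott them",
]))  # only the second post is flagged for review
```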

And, because the threat is so prevalent, marketing leaders should ensure their teams have a plan of action in case their brand’s or client’s ads appear next to harmful content on social. This is a key step, given that brands can regain some favorability with consumers when they denounce misinformation.

Looking Ahead: Social Media, AI, and the Future of Marketing

In the face of pressing concerns over misinformation, disinformation, and hate speech on social media, advertising leaders will want to stay vigilant about brand safety when advertising on social platforms. As generative AI continues to evolve and content moderation on social platforms is scaled back, the spread of harmful content is likely to grow, amplifying risks for brands and consumers alike. For marketers, it’s key not only to monitor these challenges, but also to take proactive steps to safeguard brand integrity.

Curious to learn more about how leading marketers and advertisers across the US feel about AI? Check out our report, AI and the Future of Marketing, to see how agencies and brands are thinking about and using the technology, as well as how they feel about the ways it is shaping the industry.

Get the Report