October 4, 2024
Clare McKinley

What Can Advertisers Do About Misinformation and Hate Speech on Social Media?


Social media is making headlines again, and for all the wrong reasons.

According to a new report, experts from around the world rank “owners of social media platforms” as the number one threat to a healthy global information environment—the most important feature of which, according to those same experts, is the availability of accurate information.

This is just the latest in a long, long line of headlines condemning social media companies’ inability to mitigate (or lack of interest in mitigating) the proliferation of misinformation, disinformation, and hate speech on their platforms. Researchers have found that social media algorithms actually amplify hate, divisive content, and misinformation, in part because these algorithms are designed to surface posts that will receive high levels of engagement, and inflammatory content often garners lots of comments and clicks.
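To make that dynamic concrete, here is a deliberately simplified sketch of engagement-based ranking. It is not any platform’s actual algorithm; every post, count, and weight in it is invented.

```python
# Toy illustration (not any platform's real algorithm) of engagement-
# optimized ranking: when a feed sorts purely by predicted engagement,
# and inflammatory posts attract more clicks and comments, those posts
# rise to the top. All posts, numbers, and weights here are invented.

posts = [
    {"text": "Local library extends weekend hours", "clicks": 40, "comments": 3},
    {"text": "OUTRAGE: they're hiding the truth!", "clicks": 900, "comments": 210},
]

def engagement_score(post: dict) -> int:
    # Weight comments more heavily than clicks, a common heuristic,
    # since commenting signals stronger engagement than clicking.
    return post["clicks"] + 5 * post["comments"]

feed = sorted(posts, key=engagement_score, reverse=True)
print(feed[0]["text"])  # the inflammatory post ranks first
```

Under this kind of scoring, nothing about a post’s accuracy enters the calculation; only its ability to provoke a reaction does.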

These concerns have hit new levels of urgency in the past year with the rise of generative AI, which can be used to create deepfakes and other forms of disinformation at a greater scale and lower cost, making it easier than ever for bad actors to craft disinformation campaigns. This issue is of particular concern in the political sphere: One report found that from 2022 to 2023, generative AI was used in at least 16 countries across the globe to “sow doubt, smear opponents, or influence public debate.”

The spread of hate speech and mis- and disinformation on social media is everyone’s problem—from the social platforms themselves, to the consumers who spend over two hours a day with them, to the advertisers who will spend an estimated $96.62 billion on them next year. Because social media is such a critical part of any brand’s marketing mix, and because these problems are likely to intensify as AI continues to evolve, advertising leaders must monitor the issue and take action to protect their brands/clients and consumers.

The Rise of AI-Driven Hate Speech and Mis- and Disinformation on Social Media

Social platforms’ penchant for amplifying hateful and inaccurate content has kept them in the spotlight for a while now. Back in 2016, a BuzzFeed editor discovered a cluster of fake news sites registered in Veles, Macedonia, which spread false stories that circulated widely on Facebook. These articles, monetized via Facebook ads, gained massive traction on social media during the US presidential election thanks to their sensationalism, with headlines like “Pope Francis Shocks World, Endorses Donald Trump for President.”

This marked the beginning of the public’s understanding of “fake news” and its circulation on social media. Fast-forward to 2022, and Meta, Twitter (now X), TikTok, and YouTube were under investigation by the US Senate Homeland Security Committee, which found that the social media companies’ business models amplified “dangerous and radicalizing extremist content, including white supremacist and anti-government content.”

Around the same time, a NewsGuard investigation explored the dissemination of misinformation on TikTok. Researchers found that when they searched keywords related to important news topics such as COVID-19 and Russia’s invasion of Ukraine, almost 20% of the search results contained misinformation. This is especially worrisome in 2024, given that about four in 10 young adults in the US say they regularly get their news from TikTok.

While the amount of misinformation on social media was alarming back in 2022, it’s only grown more so in the years since as generative AI has risen in prominence. Today, genAI tools equip users with the ability to quickly create convincing fake photos, videos, and audio clips—tasks that, just a few years ago, would have taken entire teams of people as well as time, technical skill, and money. Now, over half of consumers are worried that AI will escalate political mis- and disinformation, and 64% of US consumers feel that those types of content are most widespread on social media.

Beyond the many political and ethical concerns these problems raise, advertisers must understand the spread of hate speech and mis- and disinformation on social media because of the significant brand safety threats it poses. And because social platforms are entrusted with advertisers’ dollars—indeed, those dollars make up their biggest source of revenue—advertisers have a strong stake in how these companies are working to protect them from emerging threats.

Social Platforms Downsize Their Trust and Safety Teams

If government regulators, researchers, and social media users alike are concerned about the spread of hate speech and mis- and disinformation on social media, social platforms must be invested in mitigating those problems, right?

Well…kind of.

Tech companies had a rough couple of years, during which several popular social platforms missed revenue expectations and saw their stocks plummet. On the heels of that downturn, between late 2022 and early 2023, many of the teams and projects those companies had set up to enhance trust, safety, and ethics on their platforms were shuttered or dramatically reduced. Meta shut down a fact-checking tool designed to combat misinformation and laid off hundreds of content moderators and other staff working on trust, integrity, and responsibility. At the end of 2022, X laid off its entire ethical AI team, save one person, along with 15% of its trust and safety department. In December 2023, the media advocacy group Free Press found that Meta, X, and YouTube had collectively removed 17 policies that safeguarded against hate and disinformation on their platforms.

And in 2024, even after a strong Q2, Meta shut down CrowdTangle, a tool that researchers, journalists, and civil society groups used to track and understand how information spreads on Facebook and Instagram. Meta has replaced CrowdTangle with what it calls the Meta Content Library, but the new toolset is more limited than CrowdTangle was, and Meta has restricted access to only a few hundred pre-selected researchers. The fact that social platforms downsized so many of their trust and safety teams and programs just before a presidential election year—one in which researchers, technologists, and political scientists forecast that disinformation would pose an unprecedented threat—has prompted some advertisers to question whether these platforms are doing enough to address their brand safety concerns.

Despite the significant cutbacks, most major players in the social space—Meta, TikTok, YouTube, and the like—do still have some policies and programs designed to reduce the amount of inaccurate and hateful content on their platforms. For example, both Meta and TikTok partner with fact-checkers to review posts for inaccuracies (X, meanwhile, has no such program, and owner Elon Musk himself posted at least 50 inaccurate or misleading claims about US elections in just the first half of 2024). The question is whether these programs are effective enough at stemming the spread of harmful content. The public doesn’t appear to think so: 89% of Americans feel that social media companies should implement stricter policies to curb the spread of misinformation on their platforms. That leaves advertisers with some serious brand safety conundrums.

The Importance of Consumer Trust and Brand Safety

In recent years, advertisers’ concerns around brand safety and brand suitability on social media have intensified. In fact, brand safety is advertisers’ top concern when it comes to programmatic advertising, and marketers rank paid social as the channel with the highest brand safety risk. Given the rise of harmful content on these platforms alongside cuts to social media trust and safety teams, these concerns are entirely reasonable.

Marketers are all but unanimous in their concern about the impact of AI on brand safety: 100% agree that generative AI poses a brand safety and misinformation risk for digital marketers, and 88.7% call the threat moderate to significant.

Advertising professionals are right to feel concerned, given that over 80% of consumers say it’s important to them that the content surrounding ads is appropriate, and three-quarters say they feel less favorably toward brands that advertise on sites that spread misinformation.

Considering these persistent brand safety threats, as well as social networks’ recent disinvestment in their trust and safety teams and programs, how comfortable should brands feel placing ads on social? And what, exactly, can advertisers do about it?

What Advertisers Can Do

Protecting Brands/Clients

While there’s no perfect way to avoid serving ads near misinformation and hate speech on social media, there are measures advertising teams can take to protect their brands and clients.

Advertisers can work with verification partners like DoubleVerify, for example, which offers pre-screen protection capabilities that help ensure ads are served in safe and suitable environments. They can also leverage allow lists and block lists to better control the environments in which their ads appear; a simple sketch of that logic follows.
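As a rough illustration of how those lists operate, the sketch below checks candidate placements against an advertiser-maintained allow list and block list. The account names, list contents, and helper function are hypothetical; this is not the API of DoubleVerify or any platform.

```python
# Hypothetical sketch of allow/block list filtering for ad placements.
# Account names and the helper function are invented for illustration;
# real verification vendors expose their own interfaces.

ALLOW_LIST = {"trusted_news_daily", "verified_sports_hub"}  # pre-vetted accounts
BLOCK_LIST = {"fringe_rumor_mill", "outrage_aggregator"}    # known-unsafe accounts

def is_placement_allowed(account: str, strict: bool = False) -> bool:
    """Return True if ads may run against content from `account`.

    In strict mode only allow-listed accounts qualify; otherwise any
    account that is not block-listed passes.
    """
    if account in BLOCK_LIST:
        return False
    return account in ALLOW_LIST if strict else True

candidates = ["trusted_news_daily", "fringe_rumor_mill", "niche_hobby_blog"]
print([a for a in candidates if is_placement_allowed(a, strict=True)])
# ['trusted_news_daily']: only pre-vetted placements survive strict mode
```

In practice, teams maintain these lists through their verification partners rather than by hand, but the pass/fail logic is the same basic idea: strict allow lists trade reach for safety, while block lists preserve reach but only screen out known offenders.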

Continuous social media monitoring—done by teams trained to detect mis- and disinformation—is another important way to safeguard brand content on social media. Advertisers can even harness the power of AI for good in this area, with AI-driven social listening tools that make it easy to monitor online conversations involving specific brands.
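To give a flavor of what such monitoring does under the hood, here is a minimal keyword-based sketch. Real AI-driven social listening tools rely on trained classifiers rather than this simple heuristic, and every brand name, term, and post below is hypothetical.

```python
# Minimal keyword heuristic for brand-adjacency monitoring: flag posts
# that mention a (hypothetical) brand alongside terms associated with
# misinformation or hate. Real tools use trained models, not keywords.

import string

BRAND_TERMS = {"examplebrand"}                               # invented brand handle
RISK_TERMS = {"hoax", "deepfake", "conspiracy", "debunked"}  # illustrative keywords

def flag_post(text: str) -> bool:
    """Return True if the post mentions the brand near risky vocabulary."""
    tokens = {word.strip(string.punctuation).lower() for word in text.split()}
    return bool(tokens & BRAND_TERMS) and bool(tokens & RISK_TERMS)

posts = [
    "Loving my new gear from ExampleBrand!",
    "That viral ExampleBrand video is a deepfake hoax",
]
for post in posts:
    if flag_post(post):
        print("Escalate for review:", post)
```

A flagged post would then go to a trained reviewer, which is exactly where the human monitoring teams described above come in.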

And, because the threat is so prevalent, marketing leaders should ensure their teams have a plan of action in case their brand’s or client’s ads appear next to harmful content on social. This is a key step, given that brands can regain some favorability with consumers when they denounce misinformation.

Taking a Stand

Unfortunately, safeguards like pre-screen protection and allow and block lists essentially serve as band-aids that temporarily mitigate the brand safety threats posed by the spread of harmful content on social media. As such, some advertisers may be interested in taking action to work towards healthier and safer social media environments for their brands and consumers alike—particularly if they can do so in a way that authentically reflects their brand values.

Advertisers have successfully organized against the spread of harmful content on social media before by boycotting certain platforms. The 2020 “#StopHateForProfit” boycott of Facebook and Instagram—led by civil rights groups and a collection of major brands including Pfizer, Best Buy, Ford, Adidas, and Starbucks—brought about notable changes at Meta, including the hiring of civil rights leaders to evaluate discrimination and bias, as well as a crackdown on extremism in both public and private groups on the platforms.

In 2023, advertisers boycotted X after many found their ads being served near pro-Nazi content and other posts characterized by hate speech, resulting in $2 billion less in ad revenue that year than previously expected. The boycott continues, with over a quarter of marketers planning to cut back their spending on X next year, as the company works to show advertisers it is taking their brand safety concerns seriously.

By participating in boycotts like these, organizations can signal to consumers that hate speech and misinformation clash with their brand values, while simultaneously shielding themselves from the brand safety threats lurking on those platforms until their owners take satisfactory action to address those concerns.

Looking Ahead: Social Media, AI, and the Future of Marketing

In the face of rising concerns over misinformation, disinformation, and hate speech on social media, advertising leaders must stay vigilant about the social platforms on which they advertise. Particularly as generative AI continues to evolve, the spread of harmful content will only grow, amplifying risks for brands and consumers alike. For marketers, it’s not just about monitoring these challenges, but also taking proactive steps to safeguard brand integrity—and for some, perhaps, even taking a stand.

Curious to learn more about how leading marketers and advertisers across the US feel about AI? Check out our report, AI and the Future of Marketing, to see how agencies and brands are thinking about and using the technology, as well as how they feel about the ways it is shaping the industry.

Get the Report