Social media is facing yet another moment of crisis.
A recent report by the non-profit watchdog Tech Transparency Project found that Facebook is still running ads against searches for dozens of white supremacist groups, including the Ku Klux Klan, and hosting more than 100 pages and groups associated with white supremacist organizations—this despite purportedly banning such groups from the platform.
Of course, social media platforms like Facebook are no strangers to controversy, particularly when it comes to the two-headed monster that is misinformation and hate speech. Take last year, when staggering new details on Facebook’s unwillingness (or, perhaps, its inability) to effectively patrol hate speech and misinformation arose in a series of articles in the Wall Street Journal and a “60 Minutes” interview with the whistleblower whose leaks inspired those reports. These revelations, and the subsequent public outcry—not to mention ongoing congressional hearings and antitrust investigations—placed Facebook and all social media platforms under unprecedented scrutiny that has not abated since.
Whether it’s algorithmic-fueled misinformation campaigns on Twitter (and Instagram and Facebook and elsewhere), YouTube leading people down misinformation rabbit holes and radicalizing viewers, or hate speech and misinformation proliferating on TikTok, it seems like hardly a week goes by without some new social media scandal rearing its ugly head.
Historically, many (if not most) of these controversies have swirled around Meta and, more specifically, Facebook. But there’s a new political misinformation threat in town: TikTok, which has recently fueled the spread of false info during elections around the world—from Germany to the Philippines and, researchers now warn, the United States. The mythical algorithm that's powered the app to the top of the download charts and captured a massive global audience (particularly among younger users) has done little to curb misinformation on the platform, and the company struggles with the added difficulty that comes with moderating short-form video and audio.
While TikTok boasts of its efforts to combat misinformation—pointing to its removal of 350,000 videos in 2020 that included election misinformation, disinformation, and manipulated media, as well as a separate service that stops political deepfakes from spreading on the platform—the company’s recent struggles with policing disinformation around Covid-19 and the Kenyan elections are increasing fears that the app will be a hotbed of misinformation during the 2022 US midterms. All of this while ad revenues on the platform continue to soar and user growth shows no signs of slowing.
Clearly, misinformation and hate speech have long been a problem on these platforms, but the issue appears to have reached a tipping point. In the past, when confronted with damning evidence of hate speech and misinformation running rampant on their platforms, companies like Facebook have responded with carefully crafted apologies, PR savvy, and other half-measures. Indeed, Facebook has now had so many controversies that it has reportedly abandoned apologies altogether and, in their stead, now offers defiance in the face of calamity. But for whatever reason—be it cost, politics, indifference, or otherwise—all of these companies have failed to adequately rein in these dual threats.
The question, then, for marketers and advertisers is, “What are we to do about this?” Because at this stage, with social media such a critical part of any brand’s marketing mix, the only thing we know we cannot do is nothing.
Just how critical is social media to advertisers? US social network ad spending is projected to grow by 26.9% this year to $58.66 billion, accounting for more than one in every five US ad dollars. By 2023, that spend will grow to nearly $80 billion. Social media platforms are some of the highest-netting companies in the world when it comes to digital ad revenue, with Facebook taking in $107.72 billion globally in 2021, YouTube taking in $13.19 billion, and LinkedIn and Twitter’s ad revenue each topping $4 billion. Social networks offer unparalleled targeting capabilities thanks to their vast troves of user data. What’s more, brands can forge a different kind of relationship with their customers on social media than they do on other digital platforms, connecting in a more...well, “social” setting.
Fueling this spend, of course, is social media’s remarkable audience and reach. On average, US adults spend over an hour and a half per day on social networks, per eMarketer, and nearly three-quarters of internet users visit social networks at least once per month. “From a marketing reach perspective,” the report notes, “there are few digital activities in which to find that many people,” so of course advertisers continue to flock toward these platforms to reach their target audiences.
Social networks aren’t going anywhere, and neither are social media advertisers. But in light of these seemingly constant revelations about the links between social media, misinformation, and hate speech, what do those ads ultimately wind up saying about your company and its values—especially from a brand safety perspective?
Fears around brand safety and brand suitability are still very much top-of-mind for advertisers, particularly when it comes to social media. According to a 2021 Advertiser Perceptions Trust Report, 82% of advertisers say they apply corporate responsibility and brand values to media spending decisions—up 20% from just last year—and more than half of advertisers (54%) say they will change how and where they spend media budgets to defund disinformation. Meanwhile, more than three-quarters (79%) of advertisers say platforms should be held responsible for harmful content on their sites—even when it’s been posted by users.
The data shows that brands are right to be concerned when it comes to consumer sentiment about social media and misinformation, trust, and safety. In a 2020 survey by the Brand Safety Institute, 87% of respondents said it is very or somewhat important for advertisers to make sure their ads don't appear near dangerous, offensive, or inappropriate content, and 74% of consumers said they are strongly opposed to brands running ads near hate speech. That same survey found that “an overwhelming majority” of consumers said they would reduce their spending on a product they regularly buy that appeared near offensive, illegal, or dangerous content. Misinformation is just as toxic: according to IAB, 55% of consumers are less likely to purchase from a brand that advertises alongside fake news, while 82% say it’s important to them that a brand’s ads appear next to content that is safe, accurate and trustworthy.
If social networks are flooded with these types of dangerous content, it’s possibly no coincidence that social media growth, as an industry, has hit a wall. User growth is stagnant, and average daily usage has actually dipped since last year’s pandemic-fueled heights. In time, consumers may come to strongly associate social networks with misinformation and hate speech—that is, if they don’t already.
In light of these persistent brand safety/brand suitability threats, without significant changes from social networks, how comfortable can any brand feel placing ads on such a platform? And what, exactly, can advertisers do about it?
The threat of misinformation and hate speech is more than just a blip, or a PR crisis to be “dealt with” and then swept away. It is not some cold we can sleep off in a few days. It is a pandemic. Both have permeated every single facet of social media, and if we do not address them as an industry—perhaps the only non-regulatory industry that has the power to effect real change in this sector—then we run the risk of losing the entire social media infrastructure that has been such a boon to digital advertisers over the last decade.
Today, social is a genuine hub of community and commerce, and any blow to its credibility is also a blow to the credibility of any brand that has aligned itself with those platforms. Losing an audience’s trust on social media means losing that audience’s trust everywhere. Rehabilitating social media is not just in the best interest of those platforms, then, but for all of us—brands, advertisers and users alike.
Right now, companies have a once-in-a-generation opportunity to take a stand and show their customers what real values and leadership look like. Authenticity in today’s corporate environment requires more than just slogans and graphics—it demands action, and this issue is primed for true leadership. The bare minimum, of course, is monitoring. Unsavory though it might be, brands should regularly be sifting through all of the comments and posts on their company’s social pages to ensure they are free from hate speech and misinformation, deleting and reporting any violations they might find.
Others, however, may choose to go even further: a full pause on social ad spending until they see platforms commit to making real change on these issues. The industry has already seen a test case on such matters that led to real results. The 2020 “#StopHateForProfit” boycott of Facebook and Instagram led by civil rights groups and a collection of major brands—including Pfizer, Best Buy, Ford, Adidas and Starbucks—led to notable changes at the social media giant, including the hiring of civil rights leaders to evaluate discrimination and bias, as well as a crackdown on extremism in both public and private groups.
With this type of concerted effort from an even larger group of companies, aimed at instigating wholesale modifications to the way social networks monitor and respond to misinformation and hate speech, advertisers could potentially lead the charge and save social media from turning into a permanently dangerous medium. A short-term shifting of ad budgets could prevent the long-term loss of audience, reputation and revenue.
In the end, it will be up to individual organizations to decide whether something like a temporary halt to social media ads makes sense as a part of their brand and advertising strategy. It may not be right for every brand, and some may understandably want to tackle this issue and express their concerns by other means. But for those brands that care about fighting misinformation and hate speech, and want to demonstrate those values with more than just words, now may be the perfect opportunity to show their followers—both on social media and in the real world—just what it means to take the lead.