Apr 3 2024
Clare McKinley

How Advertisers Can Harness AI While Navigating its Risks

AI remains the pivotal topic of conversation across the world of business—from Wall Street, to boardrooms, to sales pitches, to paid media.

In the advertising world, artificial intelligence has already been at work for over a decade, powering programmatic advertising and optimizing media buying across the open internet. Today, new developments from the realm of generative AI (GenAI) are set to revolutionize the landscape even further.

As agencies and CMOs navigate these new opportunities, their leaders must balance two directives: First, embracing AI tools to increase efficiencies, grow revenue, and stay at the cutting edge of innovation. And second, protecting their businesses from the threats that come along with these tools. It’s a fine line to tread, but leading organizations are finding ways to approach these new technologies so that they benefit their businesses and bottom lines while minimizing liabilities.

To do this, advertisers must thoroughly understand the risks posed by AI. The most significant ones fall into three main categories: brand safety concerns tied to GenAI-created misinformation, considerations around how AI-generated advertising will land with a consumer base that’s largely wary of AI, and potential legal risks to agencies and brands related to data privacy and deceptive advertising practices.

Industry leaders must grow increasingly knowledgeable on these topics and develop best practices, processes, and skillsets across their teams to ensure any forays into new AI-driven advertising tools are safeguarded against risk.

AI-Driven Advertising and Brand Safety

AI offers many promising benefits for advertisers, from cost efficiency to speed to ease of launch. However, those advantages also come with some significant brand safety concerns. It’s important for advertisers to understand these threats, implement safeguards around their use of AI, and stay up to date on this quickly developing landscape in order to make the most of these tools and solutions without opening themselves up to consumer backlash and wasted spend.

The Promise—and Risks—of Generative AI

Generative AI is one of the biggest drivers of brand safety concerns, with 99.5% of industry professionals believing that GenAI poses a brand safety and misinformation risk to marketers and advertisers. GenAI technology is not perfect, and these tools have regularly demonstrated a tendency to produce content that’s, at one end of the spectrum, low-quality and likely ineffective for advertising, and, on the other end of the spectrum, inaccurate or offensive.

Two particular areas of concern include generative AI’s tendency to make up false information (also known as “hallucination” in the AI world) and indications of biases in AI-generated content (due to large language models relying on human inputs and human-generated content, which often contain biases).

These concerns were on full display earlier this year when Google had to suspend the image-generating capabilities of its Gemini chatbot, which is integrated into Google’s advertising tools, after it produced historically inaccurate images—specifically, Gemini created images of “multi-ethnic Nazis and non-white U.S. Founding Fathers”. The controversy demonstrates how developers are still learning how to program these technologies to effectively avoid bias: Gemini was programmed to avoid racial and ethnic bias, which, ironically, backfired when the images in question ended up being inaccurate.

Of course, this doesn’t mean that advertisers should forgo the efficiencies offered by GenAI. However, it’s critical that teams understand the risks and put proper safeguards in place to minimize their likelihood.

“If teams are thoughtful in reviewing the outputs, then using AI to repurpose existing creative or develop elements of media assets should be fine,” says Molly Marshall, Client Strategy and Insights Partner at Basis Technologies. “But AI can’t currently replicate the creative process in terms of identifying a strong insight and developing creative that meaningfully relates to a target consumer, so AI-generated creative should complement and iterate upon an existing strategy, not wholly develop it.”

Chatbots and Customer Service

Generative AI has also prompted some additional headaches for brands that have started using AI-powered chatbots to streamline and personalize customer service on their websites. The technology promises to transform the customer service industry; however, upon testing chatbots offered by TurboTax and H&R Block, reports found that the chatbots offered inaccurate information “at least half of the time.”

“Chatbots offer brands a big opportunity to streamline communication with customers, especially as brick-and-mortar stores close and more customer service goes virtual,” says Marshall. “But the potential damage from chatbots that share inaccurate information may outweigh those benefits for some brands.”

The New AI-Generated Web

Advertisers will also need to prepare for the ways generative AI is infiltrating content across the internet. One report estimates that by 2026, a whopping 90% of online content will be generated by AI.

Generative AI is already making it easier for bad actors to create made-for-advertising (MFA) sites filled with low-quality content, misinformation-filled pages strategically developed around key search terms, and other content that could pose significant risks to brands that run ads alongside it. As a result, advertisers will need to be more deliberate around their ad spend and put new guardrails in place to avoid waste.

Programmatic advertisers, in particular, will need to seek out solutions that help steer their dollars away from MFA sites and other brand-unsafe environments. Brands are already spending a reported 15% of their programmatic budgets on MFA sites. Advertisers will need to “react in real-time to block misleading sites and keywords,” says Marshall, and should embrace technological solutions like MFA block lists to help minimize the risk. Agencies and brands may also eventually need to develop teams who work specifically to deal with fake content like misinformation and disinformation, in order to protect their spend: Gartner predicts that by 2027, 80% of marketers will have developed “content authenticity teams” to serve this purpose.
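To make the block-list approach concrete, here is a minimal sketch of a pre-bid domain filter. The domain names, list contents, and function names are all hypothetical, for illustration only; production systems would pull a vendor-maintained MFA feed and run this check inside the bidding pipeline.

```python
# Hypothetical sketch: screening bid-request domains against an MFA block list.
# The list contents below are illustrative placeholders, not a real vendor feed.

MFA_BLOCK_LIST = {
    "example-mfa-site.com",
    "clickbait-arbitrage.net",
}

def normalize_domain(domain: str) -> str:
    """Lowercase and strip a leading 'www.' so comparisons are consistent."""
    domain = domain.strip().lower()
    return domain[4:] if domain.startswith("www.") else domain

def is_brand_safe(bid_request_domain: str, block_list: set[str]) -> bool:
    """Return False if the domain, or any subdomain of a listed site, is blocked."""
    domain = normalize_domain(bid_request_domain)
    if domain in block_list:
        return False
    # Also catch subdomains, e.g. news.example-mfa-site.com
    return not any(domain.endswith("." + blocked) for blocked in block_list)

print(is_brand_safe("www.example-mfa-site.com", MFA_BLOCK_LIST))  # False
print(is_brand_safe("reputable-news.org", MFA_BLOCK_LIST))        # True
```

In practice this check would sit alongside keyword blocking and real-time verification, since block lists alone lag behind newly created MFA sites.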

Consumer Resistance to AI in Advertising

Advertisers will also need to balance their own enthusiasm around AI with a consumer base that isn’t quite so excited. While 77% of advertisers have a positive view of AI, the majority of consumers don’t trust the technology: A 2024 report from the Edelman Trust Institute found that US consumer trust in artificial intelligence has fallen by 15 percentage points in the last five years, from 50% to 35%. On a global scale, respondents were nearly two times more likely to say that innovations like AI are “poorly managed” by businesses, NGOs, and governments than they were to say that those innovations were well managed—a sentiment shared across income levels and age groups.

These opinions don’t necessarily mean that advertisers should stop embracing those AI-led tools that work for them—especially considering that AI has effectively driven behind-the-scenes advertising features such as machine learning, algorithmic optimization, bid multipliers, and group budget optimization for some time now.

What it does mean is that leaders need to be cognizant of consumer sentiment toward AI, and to act accordingly. This could include informing consumers about how AI is used in a client or stakeholder’s marketing efforts, via a social media post or a dedicated page on their website. Brands may also opt to disclose when an ad or content is generated by AI. While only about half of ads generated by AI are currently identified as such, adding disclosures can lead to a 47% increase in the appeal of those ads, a 73% increase in the trustworthiness of those ads, and a 96% jump in trust for the brands behind them.

Data privacy is also top of mind for consumers, with 68% of global consumers feeling either somewhat or very concerned about their digital privacy, and 57% agreeing that artificial intelligence is a significant threat to their privacy. Organizations can gain consumers’ trust by offering transparency around how they safeguard their customers’ data, and by prioritizing partnerships with privacy-focused organizations or gaining voluntary certifications like SOC 2 compliance that indicate a commitment to data security and ethical data practices.

Leaders who prioritize this type of transparency can develop stronger, more trust-based relationships with their consumer base—which may provide a key edge in a competitive environment.

Legal Concerns Around AI in Advertising

Finally, there are a variety of legal concerns advertising leaders must account for as they adopt new AI tools. Artificial intelligence has advanced faster than legislators can keep pace, but a variety of regulations have been introduced in the US and beyond that aim to mitigate the threats posed by AI. At the same time, advertisers must ensure compliance with existing legislation to avoid hefty fines and other legal consequences.

Data Privacy

As advertisers grapple with the deprecation of third-party cookies in Chrome and wider issues surrounding signal loss, AI has emerged as a powerful tool for enabling privacy-friendly personalized marketing.

AI can enable lookalike and predictive audiences based on first-party data, and generate a variety of data-based insights to help advertisers better understand their audience and their consumers’ path to purchase. Advertisers are already starting to embrace these tools as a way of making up for the loss of cookies and other factors impacting signal loss.
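One common way lookalike audiences are built is by scoring prospects on their similarity to a “seed” audience of known customers, using feature vectors derived from first-party data. The sketch below is a simplified, hypothetical illustration of that idea using cosine similarity against the seed audience’s centroid; the feature values, user IDs, and 0.9 threshold are invented for the example, and real platforms use far richer models.

```python
import math

# Hypothetical lookalike-audience sketch: each user is a feature vector built
# from first-party data (e.g. normalized engagement, recency, category affinity).

def centroid(vectors):
    """Average the seed audience's feature vectors into one representative point."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

seed_audience = [[0.9, 0.8, 0.1], [0.8, 0.9, 0.2]]  # known converters (illustrative)
prospects = {"user_a": [0.85, 0.8, 0.15], "user_b": [0.1, 0.2, 0.9]}

seed_center = centroid(seed_audience)
scores = {uid: cosine_similarity(vec, seed_center) for uid, vec in prospects.items()}

# Keep prospects above an (arbitrary) similarity threshold, best matches first
lookalikes = [uid for uid, s in sorted(scores.items(), key=lambda kv: -kv[1]) if s > 0.9]
```

Here `user_a`, whose behavior resembles the seed audience, would qualify as a lookalike while `user_b` would not. The appeal for privacy-friendly marketing is that targeting expands from consented first-party data rather than third-party tracking.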

At the same time, AI technologies can pose some data privacy-related risks. Many AI-powered advertising solutions use personal data to fuel their machine learning algorithms, and depending on the tool itself, there’s some ambiguity around where exactly all that data comes from, where it’s stored, and who can access it. What’s more, some artificial intelligence tools leverage the data they collect to deduce sensitive personal data such as location, health information, and political or religious views.

To ensure the ethical use of consumer data and to protect their businesses from legal consequences, advertising organizations must thoroughly vet any data-focused vendors or tools to ensure their data gathering, processing, analyzing, and storage systems comply with digital advertising regulations—and, of course, ensure their own data systems comply as well. Leaders must also stay on top of new AI- and data privacy-related regulations as they take hold, as this is an area that will likely see a lot of regulatory activity in coming years.

Deceptive Advertising

Another area of legal concern for advertisers relates to the Federal Trade Commission (FTC), which is responsible for safeguarding US consumers from unfair or deceptive advertising practices. Last year, Chair Lina Khan wrote that the commission is paying special attention to AI’s potential to advance unfair and deceptive advertising practices, stating that “Although these tools are novel, they are not exempt from existing rules, and the F.T.C. will vigorously enforce the laws we are charged with administering, even in this new market.” This sentiment goes along with the FTC’s prior commitment to protecting US consumers from dark patterns, or design techniques that can manipulate consumers into purchasing an item or service or providing personal data (and which can be created and enhanced via AI). On the state level, the Colorado Privacy Act and the California Privacy Rights Act (CPRA) have also outlined regulations around dark patterns in advertising.

Ownership and Copyright

Lastly, advertisers need to pay close attention to any ownership- and copyright-related legal concerns around AI-generated content.

While AI-created content currently cannot be copyrighted, the US Copyright Office has initiated an agency-wide investigation to “delve into a wide range of [copyright-related] issues” created by the popularization of GenAI tools. Leaders will need to stay on top of any developments in this area to ensure compliance as more legislators and regulators refine rules around the ownership of AI-generated works.

Overall, advertising leaders must make it a priority to understand how current regulations apply to AI, and to stay on top of new regulations as they take hold. Enlisting a solid legal counsel or team will be key to navigating the complexity of this arena.

Wrapping Up: How Advertisers Can Harness AI

By investing the time in advancing their teams’ AI-adjacent knowledge and skillsets now, leaders will set their organizations up for success as the technology becomes increasingly prevalent throughout digital advertising. The sooner advertisers learn how to implement and take advantage of these tools in a discerning and ethical way, the greater their competitive edge will be over those who procrastinate.

Want to learn more about how advertisers are approaching GenAI? We surveyed over 200 marketing professionals from top agencies, brands, non-profits, and publishers to better understand advertiser sentiments around GenAI, as well as how they’re leveraging GenAI tools in their work. Check out the top takeaways in our report, Generative AI and the Future of Marketing.

Get the Report