AI remains the pivotal topic of conversation across the world of business, from Wall Street to boardrooms to sales pitches to paid media.
In the advertising world, artificial intelligence has already been at work for over a decade, powering programmatic advertising and optimizing media buying across the open internet. Now, recent developments in the realm of generative AI are revolutionizing the landscape even further. Given the Trump Administration’s pro-AI position and recent private sector investments of up to $500 billion in AI-related infrastructure, the next few years are poised to deliver continued innovation and widespread adoption of the technology.
As agencies and brands navigate these new opportunities, their leaders must balance two directives: First, embracing AI tools to increase efficiencies, grow revenue, and stay at the cutting edge of innovation. And second, protecting their businesses from the risks that come along with these tools. It’s a fine line to tread, but leading organizations are finding ways to approach these new technologies so that they benefit their businesses and bottom lines while minimizing liabilities.
To do this, advertisers must thoroughly understand the risks posed by AI. The most significant ones fall into three main categories: brand safety concerns tied to gen AI-created misinformation, considerations around how AI-generated advertising will land with a consumer base that’s largely wary of AI, and potential legal risks to agencies and brands related to data privacy and deceptive advertising practices.
Industry leaders must grow increasingly knowledgeable on these topics and develop best practices, processes, and skillsets across their teams to ensure any forays into new AI-driven advertising tools are safeguarded against risk.
AI offers many promising benefits for advertisers, from cost efficiency to speed to ease of launch. However, these advantages come with some significant brand safety concerns. It’s important for advertisers to understand these threats, implement safeguards around their use of AI, and stay up to date on this quickly developing landscape in order to make the most of these tools and solutions without opening themselves up to consumer backlash and wasted spend.
Generative AI is one of the biggest drivers of brand safety concerns today, with 100% of industry professionals believing the technology poses a brand safety and misinformation risk to marketers and advertisers, and 88.7% calling the risk moderate to significant. Gen AI technology is not perfect, and these tools have regularly demonstrated a tendency to produce content that ranges from low-quality and likely ineffective for advertising at one end of the spectrum to inaccurate or offensive at the other.
Two particular areas of concern include generative AI’s tendency to make up false information (a flaw known as AI hallucinations) and indications of biases in AI-generated content (due to large language models relying on human inputs and human-generated content, which often contain biases).
These concerns have been on full display in recent years. In 2024, for example, Google had to suspend the image-generating capabilities of its Gemini chatbot, which is integrated into Google’s advertising tools, after it produced historically inaccurate images—specifically, images of “multi-ethnic Nazis and non-white U.S. Founding Fathers.” The controversy demonstrates how developers are still learning how to program these technologies to effectively avoid bias: Gemini was programmed to avoid racial and ethnic bias, which, ironically, backfired when the images in question ended up being inaccurate.
Of course, this doesn’t mean that advertisers should forgo the efficiencies offered by generative AI. However, it’s critical that teams understand the risks and put proper safeguards in place to minimize their likelihood.
“If teams are thoughtful in reviewing the outputs, then using AI to repurpose existing creative or develop elements of media assets should be fine,” says Molly Marshall, Client Strategy and Insights Partner at Basis. “But AI can’t currently replicate the creative process in terms of identifying a strong insight and developing creative that meaningfully relates to a target consumer, so AI-generated creative should complement and iterate upon an existing strategy, not wholly develop it.”
Generative AI has also prompted some headaches for brands that have started using AI-powered chatbots to streamline and personalize customer service on their websites. The technology promises to transform the customer service industry. However, when reporters tested chatbots offered by TurboTax and H&R Block, they found that the bots gave inaccurate information at least half of the time.
“Chatbots offer brands a big opportunity to streamline communication with customers, especially as brick-and-mortar stores close and more customer service is going virtual,” says Marshall. “But the potential damage from chatbots that share inaccurate information may outweigh those benefits for some brands.”
Advertisers must also prepare for the growing presence of generative AI in online content. AI-generated material is becoming increasingly common: for example, the share of AI-generated content in the top 20 Google search results jumped from just 5.6% when ChatGPT was first released in 2022 to more than 19% in early 2025.
Generative AI has also made it easier for bad actors to create made-for-advertising sites (MFAs) filled with low-quality content, misinformation-filled pages strategically developed around key search terms, and other content that could pose significant risks to brands that run ads alongside it. This risk is amplified by the new administration’s lighter regulatory approach—particularly its executive order that “revokes certain existing AI policies and directives that act as barriers to American AI innovation.” Though this deregulatory stance may create space for more innovation, it may also make it easier for those with malicious intent to flood the internet with low-quality, AI-generated, mis- and disinformation-filled content. As a result, advertisers will need to be more deliberate around their ad spend and put new guardrails in place to avoid waste as well as risky (if not downright harmful) ad placements.
Programmatic advertisers, in particular, will need to seek out solutions that help steer their dollars away from MFA sites and other brand-unsafe environments, as research has found that 15% of programmatic budgets are spent on MFAs. “Advertisers must be able to react in real-time to block misleading sites and keywords,” says Marshall, and should embrace technological solutions like MFA block lists to help minimize the risk.
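To make the mechanics concrete, here is a minimal sketch of how pre-bid block list filtering might work, written in Python. The domain names, keywords, and the BidRequest shape are illustrative assumptions, not any specific DSP’s API; in practice, block lists typically come from verification vendors and are refreshed continuously rather than hardcoded.

```python
# Minimal sketch: pre-bid filtering of bid requests against an MFA block list.
# All domains, keywords, and structures here are hypothetical examples.

from dataclasses import dataclass, field

# In production, these lists would come from a verification vendor or
# internal research and be updated continuously, not hardcoded.
MFA_BLOCK_LIST = {
    "example-mfa-site.com",
    "clickbait-arbitrage.net",
}
BLOCKED_KEYWORDS = {"miracle cure", "you won't believe"}

@dataclass
class BidRequest:
    domain: str
    page_keywords: list[str] = field(default_factory=list)

def should_bid(request: BidRequest) -> bool:
    """Return False for inventory on known MFA domains or misleading keywords."""
    if request.domain.lower() in MFA_BLOCK_LIST:
        return False
    page_text = " ".join(request.page_keywords).lower()
    return not any(kw in page_text for kw in BLOCKED_KEYWORDS)

# Example: the first request is filtered out before any bid is placed.
print(should_bid(BidRequest("example-mfa-site.com", ["deals"])))    # False
print(should_bid(BidRequest("trusted-news-site.com", ["economy"]))) # True
```

The design point is simply that filtering happens before the bid, so blocked inventory never receives spend at all, rather than being flagged after the fact.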
These concerns have been compounded by the recent trend of platforms rolling back their content moderation efforts. For instance, Meta recently replaced its fact-checking program with a “Community Notes” approach that crowdsources content moderation to users, and updated its Hateful Conduct guidelines to allow users to share controversial and/or harmful content that was previously banned. This pullback of content moderation, coupled with the proliferation of AI-generated content that can be low-quality if not blatantly incorrect or harmful, makes it critical for brands and agencies to develop strong brand safety frameworks and to prioritize partnerships with premium, trusted publishers. Agencies and brands may also eventually need to develop teams focused on dealing with misinformation and disinformation to protect their spend.
Advertisers must also balance their own enthusiasm around AI with a consumer base that isn’t quite so excited. While nearly 77% of industry professionals believe that generative AI will have a positive impact on marketing and advertising, the majority of consumers don’t trust the technology: A 2024 report from the Edelman Trust Institute found that US consumer trust in artificial intelligence has fallen by 15 percentage points over the last five years, from 50% to 35%. And when it comes to the use of AI in advertising, nearly two-thirds of US adults say they are either somewhat or very uncomfortable with AI-generated ads.
These opinions don’t necessarily mean that advertisers should stop embracing the AI-led tools that work for them, especially considering that machine learning has effectively driven behind-the-scenes advertising features such as algorithmic optimization, bid multipliers, and group budget optimization for some time now.
What it does mean is that leaders need to be cognizant of consumer sentiment toward AI, and to act accordingly. This could include informing consumers about how AI is used in a client or stakeholder’s marketing efforts, via a social media post or a dedicated page on their website. Brands may also opt to disclose when an ad or content is generated by AI, as adding disclosures can lead to a 47% increase in the appeal of those ads, a 73% increase in the trustworthiness of those ads, and a 96% jump in trust for the brands behind them.
Data privacy is also top of mind for consumers, with 71% of US consumers worrying that their digital activities put them at risk for security incidents. And 81% of consumers who have heard of AI feel that companies will use the technology to collect and analyze their personal information in ways people aren’t comfortable with. Organizations can gain consumers’ trust by offering transparency around how they safeguard their customers’ data, by prioritizing partnerships with privacy-focused organizations, and by pursuing voluntary certifications like SOC 2 compliance that indicate a commitment to data security and ethical data practices.
Leaders who prioritize this type of transparency can develop stronger, more trust-based relationships with their consumer base, which may provide a key edge in an increasingly competitive environment.
Finally, there are a variety of legal concerns advertising leaders must account for as they adopt new AI tools. Artificial intelligence has advanced faster than legislators can keep up with, but a variety of regulations introduced in the US and beyond aim to mitigate the threats posed by AI. At the same time, advertisers must ensure compliance with existing legislation to avoid hefty fines and other legal consequences.
As advertisers grapple with widespread signal loss, AI has emerged as a powerful tool for enabling privacy-friendly personalized marketing.
AI can enable lookalike and predictive audiences based on first-party data, and generate a variety of data-based insights to help advertisers better understand their audience and their consumers’ path to purchase. Many advertisers are embracing these tools as a way to make up for the loss of cookies and other factors impacting signal loss.
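As one illustration of how this works under the hood, here is a minimal sketch of lookalike modeling on first-party data, assuming Python with NumPy and scikit-learn. The feature names and synthetic data are hypothetical stand-ins for real first-party signals; production systems are considerably more sophisticated, but the core idea is the same: train a model on a known seed audience, then score a wider prospect pool.

```python
# Minimal sketch of lookalike audience modeling: train a classifier on known
# converters (the seed audience) versus non-converters, then score a wider
# prospect pool and keep the closest matches. All data here is synthetic.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical first-party features per user:
# [site_visits, avg_order_value, days_since_last_visit]
converters = rng.normal(loc=[8.0, 60.0, 5.0], scale=[2.0, 15.0, 3.0], size=(500, 3))
non_converters = rng.normal(loc=[2.0, 20.0, 30.0], scale=[2.0, 10.0, 10.0], size=(500, 3))

X = np.vstack([converters, non_converters])
y = np.array([1] * 500 + [0] * 500)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a large prospect pool and keep the top decile as the lookalike audience.
prospects = rng.normal(loc=[5.0, 40.0, 15.0], scale=[3.0, 20.0, 10.0], size=(10_000, 3))
scores = model.predict_proba(prospects)[:, 1]
lookalikes = prospects[scores >= np.quantile(scores, 0.9)]
print(f"Selected {len(lookalikes)} lookalike prospects from {len(prospects)}")
```

Because this kind of modeling runs entirely on first-party data an advertiser already holds, it can deliver personalization without relying on third-party cookies, which is precisely why it has become a popular answer to signal loss.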
At the same time, AI technologies can pose some data privacy-related risks. Many AI-powered advertising solutions use personal data to fuel their machine learning algorithms, and depending on the tool itself, there’s some ambiguity around where exactly all that data comes from, where it’s stored, and who can access it. What’s more, some artificial intelligence tools leverage the data they collect to deduce sensitive personal information such as location, health information, and political or religious views.
To ensure the ethical use of consumer data and to protect their businesses from legal consequences, advertising organizations must thoroughly vet any data-focused vendors or tools to ensure their data gathering, processing, analysis, and storage systems comply with digital advertising regulations, and, of course, ensure their own data systems comply as well. Leaders must also stay on top of new AI- and data privacy-related regulations as they take hold, even if this is an area that might see less regulatory activity under the Trump Administration.
Another area of legal concern for advertisers has to do with the Federal Trade Commission (FTC), which is responsible for safeguarding US consumers from unfair or deceptive advertising practices.
One such practice relates to the use of dark patterns, or design techniques that can manipulate consumers into purchasing an item or service or providing personal data, and which can be created and enhanced with AI. “Identify[ing] and crack[ing] down on businesses that deploy deceptive and unlawful dark patterns” has been a focus of the FTC for many years. On the state level, the Colorado Privacy Act and the California Privacy Rights Act (CPRA) have also outlined regulations around dark patterns in advertising.
Though Andrew Ferguson, the new FTC chairman appointed by President Trump, could very well take a lighter regulatory approach to AI than the prior chairwoman, Lina Khan, advertisers should remain cautious. Even with the potential for a more lenient stance on AI oversight, the FTC’s core mission to protect consumers from misleading claims and/or harmful practices remains unchanged.
Lastly, advertisers must pay close attention to any ownership- and copyright-related legal concerns around AI-generated content.
In January 2025, the US Copyright Office released a report on the legal and policy issues related to AI and copyright. This report concludes that “the outputs of generative AI can be protected by copyright only where a human author has determined sufficient expressive elements.” The key phrase here is “sufficient expressive elements,” which suggests that merely pressing a button to create AI-generated content isn’t enough: there must be human involvement in curating, editing, or refining the work in a way that demonstrates original authorship. Without that kind of human involvement in the creation process, AI-generated content might not qualify for copyright protection.
At the same time, some ambiguity remains around what exactly constitutes “sufficient expressive elements,” and this will likely be determined on a case-by-case basis. As such, advertising teams must establish and adhere to strong creative processes with clear documentation of how AI is being used to develop assets, particularly those they might want to copyright. Advertising leaders should also stay on top of further developments in this area to ensure compliance as legislators and regulators refine rules around the ownership of AI-generated works. Enlisting solid legal counsel will be key to navigating the complexity of this arena.
By investing the time in advancing their teams’ AI knowledge and skillsets now, leaders will set their organizations up for success as the technology becomes increasingly prevalent throughout digital advertising. The sooner advertisers learn how to implement and take advantage of these tools in a discerning and ethical way, the greater their competitive edge will be over those who procrastinate.
—
Want to learn more about how advertisers are approaching AI? We surveyed marketing and advertising professionals from top agencies, brands, non-profits, and publishers to better understand advertiser sentiments around the technology, as well as how they’re leveraging AI-driven tools in their work. Check out the top takeaways in our report, AI and the Future of Marketing.