It can produce music, create digital art, and even compose text in the style of specific writers. But since its public debut back in late 2022, generative AI has been met with skepticism—entirely reasonable skepticism, we might add—both from within and outside of the digital advertising world.
Given the mixed feelings around this new technology, we wanted to better understand how marketers and advertising teams are thinking about its role in the future of digital advertising. In surveying more than 200 marketing and advertising professionals—spanning agencies, non-profits, and publishers—we found that 86.6% of them believe AI will radically transform the industry in the next 3 to 5 years. At the same time, 28% of teams aren’t using this new technology at all.
This is a notable, though not surprising, disconnect. With such ambivalence towards this new tech, it makes sense that its use is varied. But, given the widespread belief that it will radically transform the advertising landscape, it’s a tool that can’t be ignored.
Today, we’re digging into everything an AI skeptic should know: What the risks are, how to address them, and how your team can embrace (or, at minimum, dip their toes into) its possibilities. Ready? Let’s dive in.
From questions around its regulation, to concerns over how it might spread mis- or disinformation, to the threats it poses to content authenticity, there are plenty of reasons to be wary of generative AI. And in order to experiment with (and eventually leverage) this new tech effectively, it’s important to understand the specifics of its liabilities:
First, the US does not yet have any specific legislation articulating AI regulation, though the topic has been hotly debated in the past several months. With both the White House and Congress showing their support for regulation (not to mention OpenAI CEO Sam Altman’s direct plea to lawmakers that AI be regulated), it’s likely we’ll see developments on this front in the near future. For advertisers, this lack of certainty makes using AI more complicated. Without knowing how—or how much—it will be regulated, it’s important that teams not put too many eggs in the proverbial AI basket.
Another risk of AI concerns the authenticity, quality, and validity of the content it produces. Its role in spreading mis- and disinformation is of particular concern. This is recognized by misinformation experts and marketing professionals alike, with a Basis survey revealing that 99.5% of advertising professionals agree that generative AI poses a brand safety and misinformation risk for digital marketers. Couple this with the fact that more than half of advertisers believe consumers will find a brand less authentic if it uses AI-generated content in its marketing or advertising efforts, and it’s clear there is substantial concern over the quality of the content AI can produce.
Despite these risks, there are ways to leverage this new tech in a way that combats these concerns head-on.
As we mentioned earlier, the best way to address the lack of tangible AI regulation is to take it slowly. Find ways to experiment with generative AI tools and use them effectively (more on that shortly!), while maintaining and leveraging your current set of tools and systems. Then, as details around AI regulation become more concrete, you can adapt your practices to ensure you’re compliant with these standards.
When it comes to combating mis- and disinformation, it’s important to recognize where the content in a generative AI chatbot or image creator is coming from, and to do your due diligence to ensure that information is accurate. For instance, if you ask generative AI for statistics or research, be sure to look up the original sources to confirm the validity of the information. And, if you encounter anything that appears to be mis- or disinformation, be sure to flag it appropriately within whatever system you’re using.
Finally, to ensure the content you’re producing is authentic to your brand and brand voice, recognize the limits of AI. Though generative AI’s capabilities are impressive, the content produced lacks the nuance, expertise, and authenticity that’s inherent to human-created content. As such, we recommend against simply copying and pasting AI-generated content and slapping your brand’s name on it. Anything that’s been written or produced by AI should be carefully examined and edited by a human, to ensure it aligns with your brand’s distinct voice. Even better? Leverage AI for preliminary or supplementary materials, rather than using it to generate your most important content.
To make the most of the AI opportunity, it’s important to recognize the technology’s strengths as well as its weaknesses. Now that we’ve examined the risks of AI and how to address them, let’s explore how generative AI can be used effectively for marketing and advertising efforts. You might consider experimenting with generative AI in service of:
If you’re in that 28% of teams not currently using AI, consider experimenting and playing with one of these uses—and don’t be afraid to pivot if it isn’t the right fit for you!
It’s clear that generative AI offers many benefits to digital advertisers. And, with tech leaders and advertising professionals alike agreeing on the impact it will have on the industry, it’s a force that can’t be ignored. By approaching this technology with a healthy balance of wariness and wonder, advertisers can begin to leverage it in a way that ensures a positive net benefit for their teams.
Hungry for even more insights on how digital advertising professionals feel about generative AI? Check out our new report, Generative AI and the Future of Marketing. In it, we share the results of our survey of over 200 marketing and advertising professionals, dig into the research on how AI fits in the digital advertising landscape, and analyze the potential impacts on the future of marketing.