Does AI in Thought Leadership Represent a Race to the Bottom?

It seems there’s a new AI solution lurking around every corner.

Even though we’ve all seen the wave of AI solutions flooding the market, most of us are still unclear about the implications AI will have for our organizations, never mind our day-to-day jobs.

I’m the first to admit that I don’t have the answers to all the mysteries of the universe. But when creating public-facing white papers, research, commentaries, and other forms of content, I definitely have some concerns about what we’re going to see in the future:

  1. Greater brand risk

For years, editors and fact-checkers have done the unglamorous work of ensuring that the integrity of your published content meets or exceeds the expectations of your audience. Nowadays, far too many organizations take shortcuts and rely on pundits because that shifts the responsibility for truth-telling and accuracy to someone else. It’s also cheaper to produce, which translates to better margins. Now AI offers additional shortcuts, but it shifts the responsibility for accuracy back to your organization. So what happens when you find yourself in the hot seat for passing off AI-generated content as original thought leadership? Your brand takes a hit.

Conversely, brands that make it known that they don’t use AI in their thought leadership or client-facing content will enjoy a boost to their reputation. Meanwhile, those that over-rely on AI for thought leadership will lose audiences in droves. After all, anyone can ask ChatGPT to produce an article on a topic and read it themselves. Why read yours? In a marketplace where intellectual capital is what separates one organization from another, taking shortcuts is never advisable.

  2. Full disclosure for AI-generated content

Companies will either prohibit the internal use of tools such as ChatGPT for creating content or require disclosures on anything developed using AI. Your organization may decide that it makes good business sense to be completely transparent, or Google’s algorithms may force the issue by penalizing anything they deem to be AI generated. Bloomberg has already reported how YouTube is rolling out new rules for AI-generated content on its platform to stem the tide of misinformation.
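
If your organization goes the disclosure route, the mechanics can stay simple. Here is a minimal sketch in Python of stamping a disclosure notice onto AI-assisted drafts; the disclosure wording, the file path, and the ai_assisted flag are all hypothetical placeholders, not a real workflow.

```python
# Minimal sketch: prepend a disclosure notice to AI-assisted drafts.
# The wording, path, and ai_assisted flag are placeholder assumptions.
from pathlib import Path

DISCLOSURE = "Disclosure: portions of this document were drafted with AI assistance.\n\n"

def stamp_disclosure(path: Path, ai_assisted: bool) -> None:
    """Prepend the disclosure to a document flagged as AI-assisted."""
    if not ai_assisted:
        return
    text = path.read_text(encoding="utf-8")
    if not text.startswith(DISCLOSURE):
        path.write_text(DISCLOSURE + text, encoding="utf-8")

# Hypothetical usage; the path is a made-up example.
stamp_disclosure(Path("drafts/q3-outlook.md"), ai_assisted=True)
```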

  3. Scanning internal documents for AI-generated content

As an editor, one of my jobs is to worry. “Did we use the correct style guide?” “Is this article intended for a UK audience, meaning we have to change every ‘z’ to an ‘s’?” “Did any instances of the word ‘public’ without the letter ‘L’ slip by?” Likewise, what happens if an internal document is developed using AI and then the entire organization begins to rely on it as a source document? Also bear in mind that the editorial bar tends to be lower for internal documents than for those that are public facing. Scanning existing internal documents may not sound like the most glamorous job. But it won’t be long before legal and compliance sound the alarm.
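
Some of these worries lend themselves to automation. Below is a minimal sketch of the two checks above as a script, assuming a folder of plain-text internal docs; the folder name and the UK_AUDIENCE flag are illustrative assumptions, and the spelling regex is deliberately naive.

```python
# A toy editorial linter for the worries above. The folder name and the
# UK_AUDIENCE flag are assumptions; the "-ize" regex is deliberately naive.
import re
from pathlib import Path

UK_AUDIENCE = True  # assumption: set per publication target

def editorial_checks(text: str) -> list[str]:
    issues = []
    if UK_AUDIENCE:
        # Flag American "-ize/-ization" spellings (crude heuristic).
        for match in re.finditer(r"\b\w{3,}iz(?:e|es|ed|ing|ation)s?\b", text):
            issues.append(f"US spelling for a UK audience: {match.group()!r}")
    # Catch the dreaded dropped-L typo in "public".
    for _ in re.finditer(r"\bpubic\b", text, re.IGNORECASE):
        issues.append("an instance of 'public' missing its 'L'")
    return issues

for doc in Path("internal_docs").glob("*.txt"):  # hypothetical folder
    for issue in editorial_checks(doc.read_text(encoding="utf-8")):
        print(f"{doc.name}: {issue}")
```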

  4. Higher costs for content partners

Like most technology solutions, AI holds the allure of helping us work smarter and faster. But it also exposes all of us to a greater degree of risk. For instance, what happens if you and your competitor generate the same (or eerily similar) content using ChatGPT? Now imagine sharing that content with your video team, PR partners, or copywriting team. They have no idea where it came from, but they still have an obligation to provide you with original content. To live up to their end of the bargain, they will have to spend even more time and energy ensuring that the inputs are original and don’t violate any copyrights. That potentially translates to more work and higher costs.
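
What might that originality check look like in practice? One minimal sketch, assuming plain-text drafts, is to compare word-shingle overlap between two documents; the sample strings and the 0.5 threshold below are arbitrary illustrations, not a vetted standard.

```python
# A toy originality check: Jaccard similarity over five-word shingles.
# The sample strings and the 0.5 threshold are arbitrary assumptions.

def shingles(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Break text into overlapping n-word shingles."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity of the two texts' shingle sets."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa and sb else 0.0

ours = "Generative AI will reshape how brands approach thought leadership in 2025."
theirs = "Generative AI will reshape how brands approach thought leadership in 2026."
if similarity(ours, theirs) > 0.5:
    print("Eerily similar: flag for review before publishing.")
```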

  5. The rise of AI-detecting algorithms

We’ve heard about the rash of deepfake videos using the likenesses of politicians, celebrities, and even ordinary folks. Some of these videos show obvious signs of being fake, but that doesn’t mean everyone will take the time to discern the difference, especially not before the damage has been done. In response, you can expect a rise in algorithms that detect AI-generated content. This is great news for those of us concerned about fake news and misinformation. But it also means that legitimate content will get ensnared in the mix.
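
To see why legitimate work can get caught, consider a toy version of one signal detectors are often said to lean on: AI text is commonly claimed to be less “bursty” (more uniform in sentence length) than human prose. The heuristic and the threshold below are illustrative assumptions, not how any real detector works.

```python
# A toy "detector": flag prose whose sentence lengths are suspiciously
# uniform. The heuristic and threshold are illustrative assumptions;
# real detectors are far more sophisticated (and still err).
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths, in words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

def looks_generated(text: str, threshold: float = 3.0) -> bool:
    """Arbitrary rule: low variance in sentence length raises a flag."""
    return burstiness(text) < threshold

sample = ("The report is clear. The data is strong. The outlook is good. "
          "The plan is set.")
print(looks_generated(sample))  # True: four uniform four-word sentences
```

Note the false-positive risk baked into any rule like this: a disciplined human writer with unusually even sentences would be flagged too.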

The race to the bottom

Have you ever wondered how YouTube or TikTok content creators make money? The short answer is volume. They produce as many videos as possible to get as many “likes” and “subscribes” as possible. You’ll notice that nowhere did I mention “quality.” Creating “more” doesn’t mean creating “better,” and “faster” isn’t the same as “finer.” In fact, one thing we can all be sure of is that more AI-generated content will mean poorer-quality content. I hate to sound like a grumpy editor, but we should all be wary of the potential race to the bottom.