AI is feeding off its own slop. Here's what that means for thought leadership.

Normally, I'm hammering on about the main reason you shouldn't use AI for thought leadership content: your "thoughts," by definition, cannot "lead" if they are generated by an LLM that averages vast amounts of online data, flattening original ideas into generic mush.

But that's not what this post is about. Today, I want to focus specifically on data, and on how AI's increasingly loose grip on the facts can cause reputational damage.

When AI eats its own slop: The decline of reliable data

We keep hearing that AI is getting better every day. Better. Smarter. Faster. But lately, AI seems to be getting worse at the very things it was sold to us as best at: data, facts, and synthesizing information. It was supposed to bring more precision to how we understand information. Instead, it's feeding off its own slop and generating fake stats that sow confusion.

This obviously bodes poorly for its use in many circumstances. But I'll dig deeper with an example from my own experience: thought leadership content, which I ghostwrite and coach founders to strengthen.

When data drives the story, and AI drives it off a cliff

I recently worked with a client who has a proprietary dataset and wanted to create content around the findings. The client's data was easy to understand, both visually (with heatmaps and charts) and in spreadsheet format. The charts practically tell the story on their own if you just take the time to look at them.

But instead of simply looking at them, a writer on the project decided to use AI to speed things up. Without reviewing the data directly, he asked the LLM to analyze it and present the key stats. (I'm being intentionally vague here, but suffice it to say the writer was not a member of my team.)
The AI misread the visuals, didn't understand the data, and even hallucinated a few numbers of its own that weren't in any of the datasets.

The draft I was tasked with editing read well, but every piece of information in it was incorrect.

I spent two painstaking hours discovering this as I checked every fact against the data. Most of that time wasn't even editing. It was undoing: since the AI's numbers didn't reflect the actual data, the conclusions were incorrect as well. The editorial focus was off. I ultimately rewrote the entire piece.

But do you know what part wasn't hard work? After I determined that every stat and insight was inaccurate, I returned to the clear datasets the client had provided, the ones the writer never reviewed. It took me only 25 minutes to grab and understand the stats that mattered. Which raised the question: why get AI involved in the first place?

AI didn't save time that day. It actually cost the project time, and the writer's unchecked use of AI shifted all that work onto me. (And he exposed the client's proprietary data to an LLM, risking IP theft, but that's another topic for another day.)

AI's self-referential data spiral, and why it matters for thought leadership

Source: Graphite

Here's why I think we're seeing less and less accuracy from AI models lately, especially when it comes to data and statistics:

These systems are trained on the internet. But an increasing portion of the internet is now AI-generated (roughly half, according to Graphite). So AI is training on its own slop. Each round makes the results a little murkier, as the AIs copy their own handwriting over and over until they begin to forget what the letters even mean.

And that matters for thought leadership.

As powerful as thought leadership content can be when it comes from an expert with a paradigm-shifting approach that's backed by data, it can have the opposite effect when it spreads misinformation.
In an Edelman and LinkedIn survey, 25% of decision-makers said that reading poor-quality thought leadership directly led them to decide AGAINST working with an organization.

The case for acknowledging AI's limits

I'm not saying we should toss AI out. I'm saying we need to be realistic about its current capabilities, especially when it comes to thought leadership and other business content, where the reputational risks of a lack of caution are sky high.

Writing is thinking. The human brain isn't optional in that process, at least not currently. Today, only a human who is already well-versed in the subject can truly fact-check AI's output. That human is the only thing keeping the work honest.

A version of this blog was originally published on Hannah's LinkedIn.