
In recent months, I have noticed, as I am sure you have, an increasing number of well-written articles on security-related topics: pieces that exude deep insight, confidence, and even a touch of literary polish. At first glance, it is encouraging to see so many voices entering the conversation in our field. After all, the exchange of ideas is what drives any profession forward.
But I would be lying if I said I was not concerned.
The rise of generative AI has made it easier than ever to produce impressive, articulate content. With the right prompt, even those with little real-world experience can publish thoughtful-sounding pieces that appear to reflect deep industry knowledge. Therein lies the problem: appearance.
This pattern is especially evident following high-profile incidents. After the murder of the UnitedHealthcare CEO, and now again with the recent murder of a Minnesota lawmaker and her husband, the industry finds itself flooded with bold commentary, predictive breakdowns, and “expert” analysis, much of it generated or heavily enhanced by AI. Some of it is well-meaning; some of it is opportunistic. But only a portion of it is grounded in actual operational experience.

There is no doubt that AI is an incredibly powerful tool. I say that as someone who uses it collaboratively, strategically, and transparently. But even the best AI cannot replace the quiet wisdom earned through time on the ground. It does not know what it means to sense that something is off before anything happens. It cannot draw on muscle memory built over years of detail work, or anticipate human behavior based on cues too subtle for data sets. That kind of insight is earned, not programmed. And no algorithm will ever persuade the valet staff at a five-star hotel to let your team monopolize the entire arrival area for several days, the way a seasoned agent can.
AI, in many ways, is like makeup. At its best, it enhances: it sharpens focus, highlights what is real, and adds polish to an already solid foundation. But when used to fabricate or mislead, to create a face that does not actually exist, it becomes dangerous. Misguided advice, even when well worded, can lead to real-world consequences.
This is not about gatekeeping or discouraging new voices. On the contrary, we should welcome growth and innovation in our industry. However, we must also be honest with ourselves: in the age of AI, it has become far easier to sound like an expert than it is to be one.
This is simply a note of caution. We need to remain vigilant, not just in protecting our principals, but also in protecting the integrity of our profession. AI can amplify knowledge, but it should never fabricate it.
Eloquence does not equal expertise.






