How to easily detect AI-generated content
FYI: no AI was used to create this article; it was 100% generated by me
Generative AI is incredible; I don’t think many people would argue with that! Having access to your own digital assistant to help you write emails, generate ideas, write code (and much more!) is so powerful. If you use AI heavily, though, you do start to notice the imperfections too.
I’m not just talking about hallucinations, but also the times when Gen AI is way too enthusiastic, or when you’re in the middle of something and suddenly hit a conversation limit, or when the AI simply stops replying for no apparent reason.
Despite those imperfections, it really is changing the way we work.
One significant change I’ve noticed is the rise of AI-generated content, especially on platforms like LinkedIn, medium.com and Substack. People may well be using AI to assist with a small part of the creative process, but there’s too much content out there that’s predominantly written by AI rather than the author.
Here are the most common telltale signs I’ve found of articles written by AI (there’s a rough code sketch after the list if you want to check for some of these automatically):
- Overuse of emojis - sure, an emoji here and there can work, but for some reason Claude, ChatGPT and the other LLMs seem to love sprinkling emojis through their writing.
- The double hyphen - I usually use a single hyphen with a bulleted list (like here), but I’ve noticed that AI-generated content loves to use double hyphens for punctuation. This is honestly one of the easiest signs of AI-generated content to spot. I’ve noticed a few ‘influencers’ on LinkedIn posting regular articles and posts full of double hyphens. I used to wonder how they had so much time to write, work and live their lives, and I strongly believe this is the explanation.
- Repeated use of the single hyphen - some people have figured out that they need to get rid of the double hyphens, yet for some reason LLMs still really want you to use a hyphen. If you spot an article with far more hyphens than usual, it could be because it was written using AI.
- Overuse of transition phrases - while some people occasionally use phrases such as “furthermore,” “moreover,” “additionally” or “in conclusion”, AI tends to use them a lot more. If you see language like this throughout an article, it should raise your suspicion as to who really wrote it.
- Overuse of hedging phrases - again, like the transition phrases, some people do use these legitimately, but AI tends to overuse them. Look out for phrases like “It’s important to note that”, “It’s worth noting that”, “Generally speaking,” or “One might argue that”.
- Textbook content - AI tends to use very rigid and uniform structures when creating articles and content. What I mean is that the paragraphs will generally be of similar length and the sentence structures will follow a very consistent pattern, much like a textbook. Most human-written articles tend to flow more freely and are often less structured.
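If you fancy checking for a few of these signs yourself, here’s a minimal Python sketch. It only covers the countable signals above (emojis, hyphens, transition and hedging phrases); the phrase lists, the emoji ranges and the count_signals function are my own illustrative assumptions, not a reliable detector.

```python
import re

# Rough, hand-picked phrase lists based on the signs described above.
# Everything here (names, lists, ranges) is an illustrative guess, not a definitive detector.
TRANSITION_PHRASES = ["furthermore", "moreover", "additionally", "in conclusion"]
HEDGING_PHRASES = [
    "it's important to note that",
    "it's worth noting that",
    "generally speaking",
    "one might argue that",
]

# A couple of the common emoji code-point ranges; far from exhaustive.
EMOJI_PATTERN = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")


def count_signals(text: str) -> dict:
    """Count rough occurrences of the telltale signs discussed above."""
    lowered = text.lower()
    words = max(len(text.split()), 1)
    return {
        "emojis": len(EMOJI_PATTERN.findall(text)),
        # Double hyphens used as punctuation (often rendered as em dashes).
        "double_hyphens": text.count("--"),
        # Single hyphens used as punctuation, normalised per 100 words.
        "hyphens_per_100_words": round(100 * text.count(" - ") / words, 2),
        "transition_phrases": sum(lowered.count(p) for p in TRANSITION_PHRASES),
        "hedging_phrases": sum(lowered.count(p) for p in HEDGING_PHRASES),
    }


if __name__ == "__main__":
    sample = (
        "Furthermore, it's worth noting that this tool -- generally speaking -- "
        "is a game changer. Additionally, in conclusion, it is amazing! 🚀"
    )
    print(count_signals(sample))
```

No single count proves anything on its own, but if several of them come back high for the same article, that lines up with the pattern of signs above.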
It really does amaze me that, despite how early we are in the Generative AI journey, there already seems to be AI content everywhere. One huge area of concern for me, though, is ethics, because it raises the question of how much AI assistance with an article should be acceptable, and when you should disclose it.
My opinion is that if you’re using AI to write for you, even if you’ve prompted it or set preferences so it writes in your voice, then you should disclose it. For now, I still think it’s easy to spot, but it won’t be long before it becomes a lot harder.
Let me know what you think of these signs, and whether, after reading this, you’ve spotted some AI articles that you previously thought were written by a human.

