Notice It

Train your eye to spot AI-generated content. Once you see the patterns, you can't unsee them.

Spotting AI images

AI image generation has come a long way, but it still leaves fingerprints. The trick is knowing where to look.

Start with your gut. If something feels "off" but you can't immediately say why, that instinct is usually right. AI images trigger this uncanny valley response where everything looks almost correct, but not quite. Trust that feeling.

Eyes are often the first giveaway. They might look lifeless, lacking that natural depth and wetness you see in real photos. The pupils could be asymmetric, or the reflections inside them might show something that makes no physical sense. Smiles can feel pasted on, not reaching the eyes the way genuine expressions do. And sometimes there's just... nothing there. An empty stare where emotion should be.

Then there's the "too perfect" problem. Skin that's unnaturally smooth, with no pores or texture. Lighting that has no source, or seems to come from everywhere at once. Colors that feel dreamlike and oversaturated. No grain or noise even in what should be a low-light shot. Backgrounds that are suspiciously uniform. Real photos have imperfections. AI often forgets to add them.

You might also notice repetitive patterns if you look at enough AI content. The same poses keep showing up. Body proportions feel weirdly standardized. There's this recurring "AI face" that you start recognizing. If you get an "I've seen this exact thing before" feeling, you probably have, just with different hair or clothes.

Fine details are where AI really struggles. Hands are the classic example: extra fingers, missing fingers, fingers that bend the wrong way or grip things impossibly. But it goes beyond that. Jewelry can fuse into skin or hang in ways that defy physics. Clothing has seams that lead nowhere, buttons in random places, fabric that doesn't fold right. Hair tends to melt into backgrounds or turn into texture soup at the edges.

And then there's text. Any text in an AI image is almost always a dead giveaway. Signs with unreadable gibberish. Shirts with letters that almost form words but don't. Fake watermarks and signatures that dissolve into illegible scrawl. Background objects that you can't identify because they don't actually exist. Architecture that makes no structural sense: doors to nowhere, windows at wrong angles, stairs that would be impossible to walk on.

Spotting AI videos

Video generation is newer tech, and honestly, it's often easier to spot than images. The flaws really show themselves once things start moving.

The first thing you'll notice is that dreamlike quality. Everything feels slightly surreal, like you're watching through a filter that doesn't exist. The exposure stays weirdly consistent when it shouldn't. There's this soft, floaty atmosphere even in scenes that should feel grounded. Colors might shift unnaturally between frames.

Motion is where it really falls apart. Human movement has these tiny micro-jitters and imperfections, but AI motion is often too smooth, too clean. Gestures feel robotic, lacking that natural acceleration and deceleration we do without thinking. Faces can subtly morph between frames if you pay attention. Hair and clothing move wrong, sometimes clipping straight through the body. Limbs might stretch or bend in ways that would send you to the hospital in real life.

Watch for scene inconsistencies too. Objects appearing or disappearing between cuts. Backgrounds that warp when the camera moves. Shadows that don't match where the light should be coming from. Reflections showing something different than what's in the scene. Physics violations with water, fire, or smoke behaving in ways they never would in reality.

Audio tells are useful if the video has sound. AI voices have this robotic cadence with unnatural pauses. Lip sync is often slightly off, with mouth movements not quite matching what you hear. Ambient sounds that should exist in the scene are missing. Background noise stays perfectly uniform instead of shifting naturally.

Spotting AI writing

Text has its own fingerprints. Once you learn them, you'll start noticing AI writing everywhere: articles, comments, product descriptions, social media posts.

Large language models have favorite words they massively overuse. "Delve" is the famous one, but there's a whole family: crucial, pivotal, vital, tapestry, landscape, interplay, foster, enhance, showcase. Transitions like "Additionally," "Furthermore," and "Moreover" show up at the start of sentence after sentence. Adjectives like "intricate," "vibrant," and "enduring" sound sophisticated but say almost nothing. When you see several of these in one piece, that's a red flag.
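You can turn this into a crude automated check. Here's a minimal sketch using the words named above as a starter list (the list and any "several hits = red flag" threshold are illustrative assumptions, not a validated detector):

```python
import re
from collections import Counter

# Starter list of LLM tell words from this section (illustrative, not exhaustive)
TELL_WORDS = {
    "delve", "crucial", "pivotal", "vital", "tapestry", "landscape",
    "interplay", "foster", "enhance", "showcase", "intricate",
    "vibrant", "enduring", "additionally", "furthermore", "moreover",
}

def tell_word_hits(text: str) -> Counter:
    """Count occurrences of tell words, case-insensitively."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(w for w in words if w in TELL_WORDS)

sample = ("Additionally, the vibrant tapestry of the landscape plays a "
          "crucial role, and we must delve into its intricate interplay.")
hits = tell_word_hits(sample)
print(sum(hits.values()))  # → 8 tell words packed into one sentence
```

Any one of these words is fine on its own; the signal is density, which is why the sketch counts totals rather than flagging single occurrences.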

AI text loves telling you how important everything is. It can't help itself. "Stands as a testament to..." is a classic phrase. Everything "plays a vital role" or "reflects broader trends." Even mundane topics get described as having an "enduring impact" or leaving an "indelible mark on history." This puffery is constant.

There's also this pattern of tacking on analysis-sounding phrases at the end of sentences, usually with -ing words. "...ensuring continued growth." "...highlighting its significance." "...contributing to the broader ecosystem." These sound thoughtful but add nothing meaningful.

The structure of AI writing is predictable. There's this "rule of three" overuse: "adjective, adjective, and adjective" or "phrase, phrase, and phrase." Negative parallelisms like "It's not just about X, it's about Y" show up constantly. Conclusions follow the "Despite challenges... continues to thrive" formula. Em dashes get scattered everywhere. And there's this thing called "elegant variation" where the AI refuses to repeat words, so "the city" becomes "the urban center," then "the metropolitan area," then "the municipality" all in one paragraph.
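Two of those structural tics are regular enough to pattern-match. This sketch checks for negative parallelism and em-dash density; the regex and the "per 100 words" framing are my own rough assumptions, not established thresholds:

```python
import re

# Rough regex for "It's not just about X..." constructions (illustrative)
NEGATIVE_PARALLELISM = re.compile(
    r"\b(?:it'?s|this is) not (?:just|only|merely) about\b", re.IGNORECASE)

def em_dash_density(text: str) -> float:
    """Em dashes (U+2014) per 100 words. AI text often scatters them freely."""
    words = len(re.findall(r"\S+", text)) or 1  # avoid division by zero
    return 100 * text.count("\u2014") / words

text = "It's not just about speed \u2014 it's about vision \u2014 and scale."
print(bool(NEGATIVE_PARALLELISM.search(text)), round(em_dash_density(text), 1))
```

Human writers use both constructions too, so treat these as weak signals to be stacked with the word-choice and attribution tells, not proof by themselves.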

Watch out for vague attribution. "Experts argue..." but which experts? "Industry reports indicate..." but what reports? "Observers have noted..." but who, when, where? "Has been described as..." by whom? Often by the AI itself. This weasel wording makes claims sound authoritative without actually saying anything verifiable.
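Weasel attributions are formulaic enough to grep for. A minimal sketch, using the phrases quoted above as a hypothetical starter list:

```python
import re

# Vague-attribution phrases from this section (hypothetical starter list)
WEASEL_PATTERNS = [
    r"\bexperts (?:argue|say|believe)\b",
    r"\bindustry reports indicate\b",
    r"\bobservers have noted\b",
    r"\bhas been described as\b",
]
WEASEL_RE = re.compile("|".join(WEASEL_PATTERNS), re.IGNORECASE)

def find_weasels(text: str) -> list[str]:
    """Return every vague-attribution phrase found in the text."""
    return WEASEL_RE.findall(text)

claim = "Experts argue the product has been described as revolutionary."
print(find_weasels(claim))  # → ['Experts argue', 'has been described as']
```

A hit doesn't prove AI authorship; lazy human writers weasel too. What it does is mark exactly the claims you should try to trace to a named source.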

The promotional tone is another giveaway. AI writing often reads like marketing copy even when it shouldn't. "Nestled in the heart of..." is travel brochure language. Places apparently "boast" things constantly. Everything "continues to captivate audiences." Words like "groundbreaking" and "revolutionary" get thrown at completely ordinary subjects.

Structural patterns are telling too. AI loves Title Case For Every Heading. It uses excessive boldface like a textbook. Lists follow the "**Point:** Description" format everywhere. Articles end with "Challenges and Future Outlook" sections. Conclusions start with "In summary" or "In conclusion."

Sometimes AI leaves actual glitches. Markdown syntax bleeding through: asterisks for bold, hash symbols for headings. Curly quotes (“these”) on a page that otherwise uses straight ones ("these"), a common ChatGPT tell. Placeholder text that was never filled in. Strange reference bugs like "citeturn0search0" or "[oai_citation:0]". Collaborative phrases like "I hope this helps!" or "Let me know if you'd like..." that were clearly meant for a chat, not published content. Knowledge cutoff disclaimers: "As of my last update..." or "While specific details are limited..."
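These glitches are the most mechanical tell of all, so they're easy to scan for. A sketch with regexes built from the examples in this section (the pattern list is illustrative and far from complete):

```python
import re

# Publishing glitches named in this section, as rough regexes (illustrative)
ARTIFACT_PATTERNS = {
    "markdown bleed": r"\*\*[^*]+\*\*|^#{1,6} ",          # **bold**, # headings
    "citation bug": r"citeturn\d+\w*|\[oai_citation:\d+",  # leaked reference tokens
    "chat residue": r"I hope this helps!|Let me know if",  # chat-turn leftovers
    "cutoff disclaimer": r"As of my last update",          # knowledge-cutoff phrasing
}

def scan_artifacts(text: str) -> list[str]:
    """Return the names of artifact categories found in the text."""
    return [name for name, pat in ARTIFACT_PATTERNS.items()
            if re.search(pat, text, re.MULTILINE)]

page = "## Overview\n**Key point:** growth. I hope this helps!"
print(scan_artifacts(page))  # → ['markdown bleed', 'chat residue']
```

Unlike the stylistic tells, a hit here is close to conclusive: straight quotes and asterisks don't sneak into published copy by accident very often.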

And just like with images, there's an uncanny valley for text. It feels "off" before you can articulate why. Too smooth, no personality, no voice. Verbose but empty, using many words to say little. Universally positive, rarely taking a real stance or expressing genuine criticism. Grammatically perfect but somehow hollow. AI regresses to the mean, saying what's statistically most probable rather than what's most true or interesting.

The bottom line

No single tell is proof. Real photographers botch hands-in-pockets shots, and human writers overuse "delve" too. But these signals compound: one oddity is a coincidence, three or four in the same piece is a pattern. Keep looking, and the patterns will start jumping out on their own.