How to Detect AI-Generated Images
A complete visual guide with annotated real examples — so you can see exactly where AI image generators go wrong.
In 2026, AI-generated images are everywhere. Models like GPT Image 1.5, Flux Pro, and Midjourney v6 produce photos that fool most people at first glance — and are increasingly shared without disclosure on social media, in news articles, and in advertising. The ability to detect AI-generated images has become a basic media literacy skill.
The good news: even the best AI models leave consistent, learnable traces. This guide shows you what to look for, with numbered annotations on real AI images so you can see the exact pixels that give them away.
Why AI Images Are So Hard to Spot in 2026
The jump from 2022-era Stable Diffusion to today's models is enormous. Early AI images had obvious problems: distorted faces, six-fingered hands, melting backgrounds. Modern models have fixed most of these surface-level flaws. A single portrait generated by GPT Image 1.5 or Midjourney v6 can be indistinguishable from a professional photograph to an untrained eye.
The reason they've improved so fast is scale: these models are trained on billions of images scraped from the internet. They've learned what "a realistic photo" looks like in a statistical sense. But they've learned it as a distribution of pixels — not as an understanding of physics, anatomy, or how the real world works. That gap between statistical plausibility and physical reality is where the tells live.
AI models are also inconsistent under scrutiny. A generated image may look perfectly realistic when you glance at the subject, but fall apart completely when you examine the background, or the hands, or the text on a sign. Real photographs are coherent at every scale — the closer you look, the more real detail you find. AI images reveal their nature when you start asking questions the model never considered.
Annotated AI Image Examples
Below are three AI-generated images with numbered circles marking every detectable flaw. Each number corresponds to an explanation on the right. The real photo is shown as a small thumbnail for direct comparison.
Hamburger
Food photography is a popular AI prompt category, and modern models produce mouth-watering results at first glance. But food has strict assembly logic — ingredients stack in a specific order, for physical and culinary reasons. AI ignores all of that.
What gives it away
Look at what's sticking out from the top of the burger — there appear to be two separate layers of lettuce, one above the other. In a real burger there is one layer of lettuce, placed directly under the bun lid. AI models generate ingredients independently and then stack them without understanding that doubling up a leaf vegetable is structurally senseless.
The tomato slice sits directly on the bread with nothing between them. In any real burger the tomato rests on the patty or at least on a layer of sauce — never on a dry bun, which would immediately become soggy. AI has no model of moisture physics or culinary convention, so it places ingredients wherever they fit visually.
The overall stacking sequence doesn't follow any real burger assembly logic. Real burgers are built in a specific order — bun, sauce, lettuce, tomato, patty, cheese, pickles, top bun — so that structural ingredients support softer ones and flavors combine correctly. AI generates a visually "burger-shaped" pile without understanding the underlying order.
The sesame seeds on the bun are too uniform in size and spacing, and the bun surface lacks the slight gloss and irregular browning of a baked roll. AI-generated baked goods tend to have seeds that look printed on rather than baked in, and a crust texture that is too smooth and even.
Man playing guitar on the street
Musical instruments have precise, well-documented geometry. A guitar neck, headstock, and body shape must conform to real designs. AI learns the gestalt of "guitar" from training images but cannot reconstruct the exact proportions — the instrument always looks slightly wrong.
What gives it away
The body of the guitar has an incorrect silhouette — the waist contour, lower bout, and overall proportions don't match any real guitar design. AI models reconstruct instruments from the training distribution rather than from a precise geometric template, so the shape looks guitar-like without actually being correct.
The end of the guitar neck — the headstock — is malformed. Real headstocks have a precise arrangement of tuning pegs in a consistent pattern (3+3 or 6-in-line). In this image the headstock merges with the background, has the wrong shape, or has its tuning machines misplaced or missing.
The building behind the musician has architectural inconsistencies — window grids that don't align to a structural grid, walls with perspective that doesn't converge to a consistent vanishing point, and surface details that look like a texture rather than real masonry or cladding.
Cars on the highway
Vehicles are one of the hardest subjects for AI to fake convincingly. Real cars have precise make-specific silhouettes, and every registered car carries a readable license plate. AI has no knowledge of either.
What gives it away
None of the vehicles are identifiable as a real make or model. Real highway photos show cars with distinct silhouettes — a Golf looks different from a 3-Series. AI generates generic "car-shaped" forms that blend design language from many manufacturers into something that belongs to none of them.
License plates are illegible — the characters are blurred, invented, or form no recognizable national format. Real plates have consistent fonts, spacing, and country-specific formatting that diffusion models cannot reproduce faithfully. This is one of the most reliable tells in any vehicle image.
Lane markings on real motorways follow strict legal standards — consistent dash lengths, fixed intervals, precise alignment. AI-generated road surfaces often have markings that curve, fade inconsistently, or vanish mid-lane without reason.
The 6 Most Reliable AI Tells
These patterns appear across all major AI image generators — from Stable Diffusion to GPT Image 1.5. Some are more visible in older models, but all remain detectable even in state-of-the-art 2026 outputs.
Faces: too perfect, or subtly cloned
AI models converge on an idealized "default face" when generating people. A single portrait can look flawless — but in group scenes, everyone shares the same bone structure, hair texture, and expression. Individual faces are also subtly wrong at close range: pupils may be asymmetric, ear cartilage simplified, and the specular highlight in the eye placed inconsistently with the light source. Look for: identical-looking people in groups; eyes that look slightly painted; ears that lack inner structure.
Zoom into the eyes. Real eyes have complex corneal reflections that mirror the actual light source.
Hands and fingers: the classic tell
Hand anatomy has been AI's most notorious weakness since the first diffusion models. Modern models have improved dramatically — FLUX and GPT Image 1.5 produce convincing hands in straightforward poses — but they still fail under pressure: overlapping fingers, partially hidden hands, or complex grips all risk producing extra digits, fused joints, or fingers bending the wrong direction. AI has no skeletal model of the hand; it reconstructs it from pixel patterns, which means edge cases always break down.
Count fingers. Then check each knuckle angle. A finger bending toward the camera should show foreshortening; AI often gets this wrong.
Text: always fake, even if it looks real
AI image generators cannot read or write. They model text as a visual texture — shapes that look like letters — without any underlying character model. The result is that license plates, street signs, product labels, book spines, and newspaper headlines almost always contain pseudo-text: characters that look plausible at thumbnail size but are illegible up close. Some modern models (especially GPT Image 1.5) can reproduce simple short words, but complex multi-word text, numbers in specific formats, or country-specific plate formats remain unreliable.
Try to read every piece of text in the image. If any of it is gibberish, the image is likely AI-generated.
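If you want a mechanical aid for this check, OCR makes a blunt but useful instrument. A minimal sketch, assuming the pytesseract package and a local Tesseract install are available; the filename is a placeholder:

```python
# Run OCR over the image and eyeball the output. Garbled, non-word output
# on signs or labels supports (but never proves) an AI verdict; clean,
# readable text supports (but never proves) a real photo.
from PIL import Image
import pytesseract

text = pytesseract.image_to_string(Image.open("suspect.jpg"))  # placeholder path
print(repr(text))
```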
Reflections: generated independently from the scene
Reflections are physically determined — they must mirror the exact geometry, color, and brightness of what is above or beside them. AI generates scene content and reflective surfaces as separate elements of a composition. The result is that reflections in water, car paint, glass windows, and eyes rarely correspond to what should actually be reflected. Water in AI images often looks like a patterned texture with decorative highlights; car bodies show generic bright smears where you'd expect to see the sky and surrounding environment.
Look at water reflections: they should show a mirrored version of the sky and objects directly above. If they show something different — or nothing at all — the image is AI.
Backgrounds: plausible at a glance, wrong in detail
AI models generate backgrounds as a supporting texture for the main subject, not as a coherent three-dimensional space. This means: buildings with windows that don't align to a structural grid; roads with lane markings that curve or disappear; foliage that looks like a surface texture rather than individual leaves. Perspective is often subtly wrong — multiple vanishing points that don't correspond to a real camera position. Real backgrounds reward scrutiny; AI backgrounds punish it.
Check building windows: do they form a regular grid? Check road markings: do they align with the road direction? If not, you're looking at AI.
Textures: too perfect, too uniform
Real materials are imperfect. Fabric has uneven weave and wear. Skin has pores, minor blemishes, and variation in pore size across different parts of the face. Stone has irregular joints and weathering. Fur and feathers have individual variation at the level of a single strand or barb. AI generates textures as smooth, tiling patterns — consistent in a way that no real surface could be. This is especially visible in close-up shots of animals (feathers, fur), food (pastry layers, bread crumb), and clothing (fabric weave).
Zoom into any textured surface. Real texture has fractal complexity — you find more detail the closer you look. AI texture becomes a smooth, repeated pattern.
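One crude numeric proxy for this tell is the variance of the Laplacian, a standard fine-detail measure, computed patch by patch. This is a heuristic sketch, not a detector; it assumes NumPy, Pillow, and SciPy, and the patch size and filename are illustrative choices:

```python
# Heuristic: per-patch fine-detail scores via variance of the Laplacian.
# Real close-up textures tend to score high and varied across patches;
# over-smooth AI textures tend to score uniformly low. No hard thresholds.
import numpy as np
from PIL import Image
from scipy.ndimage import laplace

img = np.asarray(Image.open("closeup.jpg").convert("L"), dtype=np.float64)

patch = 64  # illustrative patch size in pixels
scores = [
    laplace(img[y:y + patch, x:x + patch]).var()
    for y in range(0, img.shape[0] - patch, patch)
    for x in range(0, img.shape[1] - patch, patch)
]

print(f"median detail: {np.median(scores):.1f}, spread: {np.std(scores):.1f}")
```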
Quick Checklist: What to Check in Any Suspicious Image
Run through this list whenever you encounter an image you're unsure about. The more boxes you tick, the more confident you can be — but a single strong tell is often enough.
Read every piece of text in the image: license plates, signs, labels, book spines.
Count fingers on any visible hand and check each knuckle angle.
Compare faces in group shots for clone-like uniformity.
Check reflections in water, glass, and car paint against what the scene should mirror.
Scan the background: window grids, lane markings, vanishing points.
Zoom into textured surfaces: skin, fabric, fur, feathers, pastry layers.
How the Major AI Models Compare
Different models have different characteristic weaknesses. Knowing which model you're dealing with — if you can tell — helps narrow down where to look.
AI Image Detection Tools — Do They Work?
Several tools claim to automatically detect AI-generated images. They work by analyzing statistical artifacts left by the generation process — noise patterns, frequency distributions, and other signals invisible to the human eye. Here's an honest assessment, followed by a toy sketch of the frequency idea:
On older models (SD 1.5, DALL-E 2), automated detectors work reasonably well — the generation artifacts are strong enough to be reliably detected. Accuracy can be 80–90% on these outputs.
On current models (GPT Image 1.5, Flux Pro, Midjourney v6), detection accuracy drops significantly — often to 60–70%, and in some studies barely better than chance. The models have improved fast enough to outpace detector training data.
There's also a cat-and-mouse dynamic: as detectors improve, models are trained to avoid producing the patterns detectors look for. This arms race means no automated tool can be considered reliable without continuous updates.
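To make the frequency-artifact idea concrete, here is a toy sketch of the kind of statistic detectors build on. It is an illustration only, not a working detector; the cutoff radius and filename are arbitrary assumptions, and no single number like this can classify an image:

```python
# Toy frequency-domain statistic: share of spectral energy at high frequencies.
# Real sensor noise spreads energy across the spectrum; some generators leave
# unusually little, or oddly periodic, high-frequency energy. Illustration only.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("photo.jpg").convert("L"), dtype=np.float64)

# 2-D FFT, shifted so low frequencies sit at the center of the spectrum.
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))

h, w = img.shape
yy, xx = np.ogrid[:h, :w]
radius = np.hypot(yy - h / 2, xx - w / 2)

# Everything beyond an arbitrary cutoff radius counts as "high frequency".
high_ratio = spectrum[radius > min(h, w) / 4].sum() / spectrum.sum()
print(f"high-frequency energy ratio: {high_ratio:.4f}")
```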
C2PA content credentials (a new industry standard for labeling AI-generated content) are starting to appear in outputs from OpenAI, Adobe, and others. When present, they reliably indicate AI origin — but they can be stripped by re-saving, screenshotting, or re-uploading the image.
Bottom line: Automated detection tools are a useful first pass, not a definitive answer. Visual inspection using the tells described in this guide is more reliable for high-quality fakes — and the only method that works across all models.
Test What You've Learned
Two images per round — one real, one AI-generated. Pick which is real, then see the exact tells revealed with annotations, just like the examples above. 5 rounds, covering people, food, animals, and more.
Frequently Asked Questions
What are the easiest ways to spot an AI-generated image?
Start with text: try to read any license plates, signs, or labels in the image. If the text is garbled or nonsensical, it's almost certainly AI-generated. Then check hands if visible — extra or fused fingers are a reliable tell. For scenes with multiple people, look for clone-like facial uniformity. These three checks catch a large majority of AI images.
Can AI detectors automatically identify fake images?
Automated AI detectors exist (Winston AI, Hive Moderation, Illuminarty) but their accuracy against state-of-the-art 2026 models is often only 60–70%. They are more reliable against older Stable Diffusion-era images. Use them as a first signal, not a definitive answer. Visual inspection using the tells described in this guide is more reliable for high-quality fakes.
Are newer AI models harder to detect than older ones?
Yes — significantly. GPT Image 1.5 and Flux Pro produce images that are dramatically harder to detect than 2022-era DALL-E or Stable Diffusion. But the core weaknesses remain: even the best 2026 models fail on license plates, complex group anatomy, physically accurate reflections, and consistent background geometry.
What are the most reliable tells in 2026 that still work?
Text rendering (license plates, signs) remains the single most reliable tell. Physically accurate reflections are a close second — water, glass, and car body reflections that match the actual scene are still beyond any current model. Complex crowd scenes and group shots with consistent anatomy are also strong tells. Fine textured surfaces (feathers, fur, pastry layers) remain characteristic under close inspection.
Does image compression hide AI detection artifacts?
Yes — platforms like Twitter/X, Instagram, and WhatsApp aggressively compress images, which can destroy or obscure low-level frequency artifacts that automated detectors look for. High-quality JPEG or original-resolution images are always better for both visual and automated analysis.
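To see the effect yourself, re-compress an image and measure what changes. A minimal sketch with Pillow and NumPy; the filenames are placeholders, and quality=25 is deliberately harsher than most platforms use:

```python
# Aggressive JPEG re-compression discards exactly the fine, high-frequency
# detail that automated detectors analyze. The residual below is that detail.
import numpy as np
from PIL import Image

orig = Image.open("original.png").convert("L")  # placeholder filename
orig.save("recompressed.jpg", quality=25)       # harsher than typical platforms

a = np.asarray(orig, dtype=np.float64)
b = np.asarray(Image.open("recompressed.jpg"), dtype=np.float64)
print(f"mean absolute pixel change: {np.abs(a - b).mean():.2f}")
```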
Can AI-generated images fool forensic image analysis?
Often yes. Tools like ELA (Error Level Analysis) and noise analysis were designed to detect photo manipulation (cloning, splicing) rather than full image synthesis, and they produce unreliable, often misleading results on AI-generated images. Metadata analysis (checking for EXIF data) can be useful — real cameras embed detailed metadata, while AI images typically carry little or none.
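Checking EXIF yourself takes a few lines with Pillow. A sketch under the caveat above: absence of metadata is only a weak hint, since screenshots and platform re-uploads also strip it; the filename is a placeholder:

```python
# Print whatever EXIF metadata the file carries. A real camera photo usually
# shows Make, Model, software, and timestamps; an empty result means
# AI-generated, screenshotted, or stripped in transit; a hint, not proof.
from PIL import Image
from PIL.ExifTags import TAGS

exif = Image.open("suspect.jpg").getexif()  # placeholder filename

if not exif:
    print("No EXIF metadata found.")
for tag_id, value in exif.items():
    print(f"{TAGS.get(tag_id, tag_id)}: {value}")
```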
What is C2PA and does it help identify AI images?
C2PA (Coalition for Content Provenance and Authenticity) is a technical standard for embedding "content credentials" into images — a cryptographically signed record of how an image was created. OpenAI, Adobe, and other companies have begun embedding these credentials in AI-generated images. When present, they reliably indicate AI origin. However, they can be stripped simply by taking a screenshot or re-saving the image, so absence of C2PA credentials doesn't mean an image is real.
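A crude presence check is possible in a few lines, under one assumption: current C2PA embeddings store the manifest in a JUMBF box whose label contains the ASCII bytes "c2pa". This detects presence only; verifying the signature needs a proper C2PA implementation, such as the open-source c2patool:

```python
# Crude heuristic: scan a file's raw bytes for the C2PA manifest-store label.
# Detects PRESENCE only; it does NOT verify the cryptographic signature,
# and a missing marker does not mean the image is real (credentials strip).
def has_c2pa_marker(path: str) -> bool:
    with open(path, "rb") as f:
        return b"c2pa" in f.read()

print(has_c2pa_marker("suspect.jpg"))  # placeholder filename
```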
How can I get better at detecting AI images?
The most effective method is deliberate practice with immediate feedback. The more examples you examine with explanations of what's wrong, the faster your eye learns to spot the patterns automatically. Start with clear categories (food, vehicles) where tells are most obvious, then move to harder categories (portraits, landscapes). Our quiz mode above shows the exact annotation markers on each AI image after your guess.
Practice With More Categories
Each category trains a different set of detection skills — variety is key.