
How to Detect AI-Generated Images

A complete visual guide with annotated real examples — so you can see exactly where AI image generators go wrong.

In 2026, AI-generated images are everywhere. Models like GPT Image 1.5, Flux Pro, and Midjourney v6 produce photos that fool most people at first glance — and are increasingly being shared without disclosure on social media, in news articles, and in advertising. The ability to detect AI-generated images has become a basic media literacy skill.

The good news: even the best AI models leave consistent, learnable traces. This guide shows you what to look for, with numbered annotations on real AI images so you can see the exact pixels that give them away.

Why AI Images Are So Hard to Spot in 2026

The jump from 2022-era Stable Diffusion to today's models is enormous. Early AI images had obvious problems: distorted faces, six-fingered hands, melting backgrounds. Modern models have solved most of the surface-level problems. A single portrait generated by GPT Image 1.5 or Midjourney v6 can be indistinguishable from a professional photograph to an untrained eye.

The reason they've improved so fast is scale: these models are trained on billions of images scraped from the internet. They've learned what "a realistic photo" looks like in a statistical sense. But they've learned it as a distribution of pixels — not as an understanding of physics, anatomy, or how the real world works. That gap between statistical plausibility and physical reality is where the tells live.

AI models are also inconsistent under scrutiny. A generated image may look perfectly realistic when you glance at the subject, but fall apart completely when you examine the background, or the hands, or the text on a sign. Real photographs are coherent at every scale — the closer you look, the more real detail you find. AI images reveal their nature when you start asking questions the model never considered.

Each year: top models get harder to detect, improving faster than detection tools.
~90% of models still fail at text: most cannot reproduce readable license plates.
6 key areas of consistent tells: places where AI fails regardless of the model used.

Annotated AI Image Examples

Below are 3 AI-generated images with numbered circles marking every detectable flaw. Each number corresponds to an explanation on the right. The real photo is shown as a small thumbnail for direct comparison.

Food

Hamburger

Food photography is a popular AI prompt category, and modern models produce mouth-watering results at first glance. But food has strict assembly logic — ingredients stack in a specific order, for physical and culinary reasons. AI ignores all of that.

AI-generated image · Generated by: Nano Banana Pro (Google)
AI-generated fake image: Hamburger
Real photo: Hamburger — for comparison
Notice the natural variation, imperfect details, and physical coherence that AI cannot replicate.

What gives it away

1. Double lettuce

Look at what's sticking out from the top of the burger — there appear to be two separate heads of lettuce, one above the other. In a real burger there is one layer of lettuce, placed directly under the bun lid. AI models generate ingredients independently and then stack them without understanding that doubling up a leaf vegetable is structurally senseless.

2. Tomato on bare bun

The tomato slice sits directly on the bread with nothing between them. In any real burger the tomato rests on the patty or at least on a layer of sauce — never on dry bun. The bread would immediately become soggy. AI has no model of moisture physics or culinary convention, so it places ingredients wherever they fit visually.

3. Layer order

The overall stacking sequence doesn't follow any real burger assembly logic. Real burgers are built in a specific order — bun, sauce, lettuce, tomato, patty, cheese, pickles, top bun — so that structural ingredients support softer ones and flavours combine correctly. AI generates a visually "burger-shaped" pile without understanding the underlying order.

4. Sesame bun texture

The sesame seeds on the bun are too uniform in size and spacing, and the bun surface lacks the slight gloss and irregular browning of a baked roll. AI-generated baked goods tend to have seeds that look printed on rather than baked in, and a crust texture that is too smooth and even.

People

Man playing guitar on the street

Musical instruments have precise, well-documented geometry. A guitar neck, headstock, and body shape must conform to real designs. AI learns the gestalt of "guitar" from training images but cannot reconstruct the exact proportions — the instrument always looks slightly wrong.

AI-generated image · Generated by: Flux 2 LoRA Gallery Realism
AI-generated fake image: Man playing guitar on the street
Real photo: Man playing guitar on the street — for comparison
Notice the natural variation, imperfect details, and physical coherence that AI cannot replicate.

What gives it away

1. Guitar body shape

The body of the guitar has an incorrect silhouette — the waist contour, lower bout, and overall proportions don't match any real guitar design. AI models reconstruct instruments from training distribution rather than from a precise geometric template, so the shape looks guitar-like but not actually correct.

2. Headstock

The end of the guitar neck — the headstock — is malformed. Real headstocks have a precise arrangement of tuning pegs in a consistent pattern (3+3 or 6-in-line). In this image the headstock either merges with the background, has the wrong shape, or the tuning machines are incorrectly placed or missing.

3. Background building

The building behind the musician has architectural inconsistencies — window grids that don't align to a structural grid, walls with perspective that doesn't converge to a consistent vanishing point, and surface details that look like a texture rather than real masonry or cladding.

Cars & Vehicles

Cars on the highway

Vehicles are one of the hardest subjects for AI to fake convincingly. Real cars have precise make-specific silhouettes, and every registered car carries a readable license plate. AI has no knowledge of either.

AI-generated image · Generated by: Juggernaut Flux Pro
AI-generated fake image: Cars on the highway
Real photo: Cars on the highway — for comparison
Notice the natural variation, imperfect details, and physical coherence that AI cannot replicate.

What gives it away

1. Unrecognisable cars

None of the vehicles are identifiable as a real make or model. Real highway photos show cars with distinct silhouettes — a Golf looks different from a 3-Series. AI generates generic "car-shaped" forms that blend design language from many manufacturers into something that belongs to none of them.

2. License plate

License plates are illegible — the characters are blurred, invented, or form no recognisable national format. Real plates have consistent fonts, spacing, and country-specific formatting that diffusion models cannot reproduce faithfully. This is one of the most reliable tells in any vehicle image.

3. Road markings

Lane markings on real motorways follow strict legal standards — consistent dash lengths, fixed intervals, precise alignment. AI-generated road surfaces often have markings that curve, fade inconsistently, or vanish mid-lane without reason.

The 6 Most Reliable AI Tells

These patterns appear across all major AI image generators — from Stable Diffusion to GPT Image 1.5. Some are more visible in older models, but all remain detectable even in state-of-the-art 2026 outputs.

01 👁️

Faces: too perfect, or subtly cloned

AI models converge on an idealised "default face" when generating people. A single portrait can look flawless — but in group scenes, everyone shares the same bone structure, hair texture, and expression. Individual faces are also subtly wrong at close range: pupils may be asymmetric, ear cartilage simplified, and the specular highlight in the eye placed inconsistently with the light source. Look for: identical-looking people in groups; eyes that look slightly painted; ears that lack inner structure.

Tip

Zoom into the eyes. Real eyes have complex corneal reflections that mirror the actual light source.

02

Hands and fingers: the classic tell

Hand anatomy has been AI's most notorious weakness since the first diffusion models. Modern models have improved dramatically — FLUX and GPT Image 1.5 produce convincing hands in straightforward poses — but they still fail under pressure: overlapping fingers, partially hidden hands, or complex grips all risk producing extra digits, fused joints, or fingers bending the wrong direction. AI has no skeletal model of the hand; it reconstructs it from pixel patterns, which means edge cases always break down.

Tip

Count fingers. Then check each knuckle angle. A finger bending toward the camera should show foreshortening; AI often gets this wrong.

03 🔤

Text: always fake, even if it looks real

AI image generators cannot read or write. They model text as a visual texture — shapes that look like letters — without any underlying character model. The result is that license plates, street signs, product labels, book spines, and newspaper headlines almost always contain pseudo-text: characters that look plausible at thumbnail size but are illegible up close. Some modern models (especially GPT Image 1.5) can reproduce simple short words, but complex multi-word text, numbers in specific formats, or country-specific plate formats remain unreliable.

Tip

Try to read every piece of text in the image. If any of it is gibberish, the image is likely AI-generated.

04 🪞

Reflections: generated independently from the scene

Reflections are physically determined — they must mirror the exact geometry, colour, and brightness of what is above or beside them. AI generates scene content and reflective surfaces as separate elements of a composition. The result is that reflections in water, car paint, glass windows, and eyes rarely correspond to what should actually be reflected. Water in AI images often looks like a patterned texture with decorative highlights; car bodies show generic bright smears where you'd expect to see the sky and surrounding environment.

Tip

Look at water reflections: they should show a mirrored version of the sky and objects directly above. If they show something different — or nothing at all — the image is AI.

05 🏗️

Backgrounds: plausible at a glance, wrong in detail

AI models generate backgrounds as a supporting texture for the main subject, not as a coherent three-dimensional space. This means: buildings with windows that don't align to a structural grid; roads with lane markings that curve or disappear; foliage that looks like a surface texture rather than individual leaves. Perspective is often subtly wrong — multiple vanishing points that don't correspond to a real camera position. Real backgrounds reward scrutiny; AI backgrounds punish it.

Tip

Check building windows: do they form a regular grid? Check road markings: do they align with the road direction? If not, you're looking at AI.

06 🎨

Textures: too perfect, too uniform

Real materials are imperfect. Fabric has uneven weave and wear. Skin has pores, minor blemishes, and variation in pore size across different parts of the face. Stone has irregular joints and weathering. Fur and feathers have individual variation at the level of a single strand or barb. AI generates textures as smooth, tiling patterns — consistent in a way that no real surface could be. This is especially visible in close-up shots of animals (feathers, fur), food (pastry layers, bread crumb), and clothing (fabric weave).

Tip

Zoom into any textured surface. Real texture has fractal complexity — you find more detail the closer you look. AI texture becomes a smooth, repeated pattern.

Quick Checklist: What to Check in Any Suspicious Image

Run through this list whenever you encounter an image you're unsure about. The more boxes you tick, the more confident you can be — but a single strong tell is often enough.

Zoom into faces: check eyes, ears, teeth for asymmetry or cloning.
Count the fingers: then check each knuckle for correct anatomy.
Read all visible text: plates, signs, labels — does it make sense?
Check water and glass: do reflections match what's above them?
Examine the background: windows, roads, foliage — are they coherent?
Verify shadow direction: all shadows should come from the same light source.
In crowds, look for clones: identical faces or body proportions across people.
Zoom into textures: fabric, fur, feathers, pastry — does it tile?
Reverse image search: real photos usually have a traceable source.
Check metadata: real cameras embed EXIF data; AI images often don't.

How the Major AI Models Compare

Different models have different characteristic weaknesses. Knowing which model you're dealing with — if you can tell — helps narrow down where to look.

GPT Image 1.5 (OpenAI)
2025 · Very hard to detect
Strength: Exceptional photorealism in all categories; very strong text rendering
Weakness: Complex group anatomy; physical interactions between objects; very fine structural detail (watch faces, insect wings)
Flux Pro / Flux 2 (Black Forest Labs)
2024–2025 · Hard to detect
Strength: Studio-quality portraits; excellent skin tones and lighting
Weakness: Musical instruments; complex manufactured objects; woven textures; background architecture
Midjourney v6
2024 · Hard to detect
Strength: Cinematic composition; artistic lighting; consistent style
Weakness: Hands in complex poses; license plates; fine text at distance; realistic backgrounds behind clear subjects
Juggernaut Flux Pro (Community fine-tune)
2024 · Moderately hard to detect
Strength: Portrait photography; skin detail; studio-quality close-ups
Weakness: Group scenes (people merge); vehicle silhouettes; water reflections; rigging and thin lines
Stable Diffusion (SDXL and earlier)
2022–2023 · Easier to detect
Strength: Variety of styles; large community of fine-tunes
Weakness: Faces in groups; hands; coherent text; consistent lighting across a scene; background realism

AI Image Detection Tools — Do They Work?

Several tools claim to automatically detect AI-generated images. They work by analyzing statistical artifacts left by the generation process — noise patterns, frequency distributions, and other signals invisible to the human eye. Here's an honest assessment:

On older models (SD 1.5, DALL-E 2), automated detectors work reasonably well — the generation artifacts are strong enough to be reliably detected. Accuracy can be 80–90% on these outputs.

On current models (GPT Image 1.5, Flux Pro, Midjourney v6), detection accuracy drops significantly — often to 60–70%, only modestly better than the 50% coin-flip baseline in some studies. The models have improved fast enough to outpace detector training data.

There's also a cat-and-mouse dynamic: as detectors improve, models are trained to avoid producing the patterns detectors look for. This arms race means no automated tool can be considered reliable without continuous updates.
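
To make the idea of "statistical artifacts" concrete, the toy function below measures what fraction of an image's spectral energy sits above a radial frequency cutoff, using NumPy's FFT. This is purely illustrative: real detectors are trained classifiers over many such signals, and no single hand-crafted statistic can separate AI from real images on its own.

```python
import numpy as np

def high_freq_ratio(gray, cutoff=0.25):
    """Fraction of spectral energy above a radial frequency cutoff.

    Toy illustration of the kind of frequency-domain statistic detectors
    build on -- not a usable AI-vs-real classifier by itself.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    power = np.abs(spectrum) ** 2
    h, w = gray.shape
    yy, xx = np.ogrid[:h, :w]
    # Normalised radial distance from the centre of the shifted spectrum
    r = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    return float(power[r > cutoff].sum() / power.sum())
```

A smooth gradient concentrates its energy near zero frequency and scores low; pixel noise spreads energy across the spectrum and scores high. Generation pipelines leave subtler, model-specific fingerprints in this domain.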

C2PA content credentials (a new industry standard for labeling AI-generated content) are starting to appear in outputs from OpenAI, Adobe, and others. When present, they reliably indicate AI origin — but they can be stripped by saving or re-uploading the image.

Bottom line: Automated detection tools are a useful first pass, not a definitive answer. Visual inspection using the tells described in this guide is more reliable for high-quality fakes — and the only method that works across all models.

🔍

Test What You've Learned

Two images per round — one real, one AI-generated. Pick which is real, then see the exact tells revealed with annotations, just like the examples above. 5 rounds, covering people, food, animals, and more.


Frequently Asked Questions

What are the easiest ways to spot an AI-generated image?

Start with text: try to read any license plates, signs, or labels in the image. If the text is garbled or nonsensical, it's almost certainly AI-generated. Then check hands if visible — extra or fused fingers are a reliable tell. For scenes with multiple people, look for clone-like facial uniformity. These three checks catch a large majority of AI images.

Can AI detectors automatically identify fake images?

Automated AI detectors exist (Winston AI, Hive Moderation, Illuminarty) but their accuracy against state-of-the-art 2026 models is often only 60–70%. They are more reliable against older Stable Diffusion-era images. Use them as a first signal, not a definitive answer. Visual inspection using the tells described in this guide is more reliable for high-quality fakes.

Are newer AI models harder to detect than older ones?

Yes — significantly. GPT Image 1.5 and Flux Pro produce images that are dramatically harder to detect than 2022-era DALL-E or Stable Diffusion. But the core weaknesses remain: even the best 2026 models fail on license plates, complex group anatomy, physically accurate reflections, and consistent background geometry.

What are the most reliable tells in 2026 that still work?

Text rendering (license plates, signs) remains the single most reliable tell. Physically accurate reflections are a close second — water, glass, and car body reflections that match the actual scene are still beyond any current model. Complex crowd scenes and group shots with consistent anatomy are also strong tells. Fine textured surfaces (feathers, fur, pastry layers) remain characteristic under close inspection.

Does image compression hide AI detection artifacts?

Yes — platforms like Twitter/X, Instagram, and WhatsApp aggressively compress images, which can destroy or obscure low-level frequency artifacts that automated detectors look for. High-quality JPEG or original-resolution images are always better for both visual and automated analysis.
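
The degradation is straightforward to demonstrate. The sketch below, assuming the Pillow and NumPy libraries, re-saves an image once at an aggressive JPEG quality and measures the mean per-pixel change; platform pipelines apply similar (and often repeated) compression, which is why re-uploaded images are poor inputs for automated detectors.

```python
import io

import numpy as np
from PIL import Image

def recompression_loss(img, quality=30):
    """Mean absolute pixel change after one aggressive JPEG re-save.

    Shows why platform re-compression degrades forensic signals: low-level
    pixel statistics shift even when the image looks visually unchanged.
    """
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf).convert("RGB")
    a = np.asarray(img.convert("RGB"), dtype=np.int16)
    b = np.asarray(resaved, dtype=np.int16)
    return float(np.abs(a - b).mean())
```

Any nonzero result means the subtle frequency-domain fingerprints that detectors rely on have been disturbed, even though the image still looks the same to a human.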

Can AI-generated images fool forensic image analysis?

Often yes. Tools like ELA (Error Level Analysis) and noise analysis were designed for detecting photo manipulation (cloning, splicing) rather than full image synthesis. They produce unreliable and often misleading results on AI-generated images. Metadata analysis (checking for EXIF data) can be useful — real cameras embed detailed metadata, while AI images typically have none or very minimal metadata.

What is C2PA and does it help identify AI images?

C2PA (Coalition for Content Provenance and Authenticity) is a technical standard for embedding "content credentials" into images — a cryptographically signed record of how an image was created. OpenAI, Adobe, and other companies have begun embedding these credentials in AI-generated images. When present, they reliably indicate AI origin. However, they can be stripped simply by taking a screenshot or re-saving the image, so absence of C2PA credentials doesn't mean an image is real.

How can I get better at detecting AI images?

The most effective method is deliberate practice with immediate feedback. The more examples you examine with explanations of what's wrong, the faster your eye learns to spot the patterns automatically. Start with clear categories (food, vehicles) where tells are most obvious, then move to harder categories (portraits, landscapes). Our quiz mode above shows the exact annotation markers on each AI image after your guess.

Practice With More Categories

Each category trains a different set of detection skills — variety is key.
