Social media users have become better at recognizing the telltale signs of AI doctoring in celebrity photos or chaotic city scenes. But in the US and Israel's conflict with Iran, a new type of deception has come into the spotlight: fake satellite imagery.
“For satellite images, we can safely say that most people have very limited familiarity,” Symeon Papadopoulos, an AI researcher specializing in media verification at the Greek research institute CERTH, told DW. “This makes them particularly vulnerable to misuse, because if you change one small detail in a satellite image, it will most likely go unnoticed.”
Manipulating satellite imagery is nothing new – Russia infamously faked satellite imagery of a downed Malaysian airliner in 2014. Similar fake images have surfaced in other regional conflicts, including India-Pakistan tensions last year. But experts say this technology has become much more widespread during the current US-Israel conflict with Iran.
“The problem seems to be getting worse,” said open-source intelligence (OSINT) analyst Brady Africk.
One reason: AI tools now make it easy to pull a real satellite image from Google Earth or Bing Maps and apply effects to it. Manipulated images are often deployed to promote military narratives advantageous to one side, by suggesting destroyed infrastructure or strategic damage.
Several factors compound the problem. Public access to high-resolution imagery from commercial satellite providers is restricted during wartime to prevent the data from being used for military targeting. But that restriction creates an information void – one that is increasingly filled with fabricated images that exploit the public’s unfamiliarity with how satellite imagery is captured and what it actually shows.
“Many people associate the complexity involved in capturing a real satellite image with the resilience of those images against being counterfeited, but there is no such connection,” Africk said. Social media users should remember that satellite images “are photographs just like any other and can be susceptible to the same manipulations.”
DW Fact Check examines several key examples.
Watermark reveals AI-generated satellite images
Claim: In a post on X, a user shared a photo of what appears to be a satellite image of the Persian Gulf and alleged that it shows burning oil fields in Qatar.
DW Fact Check: Fake
While Qatar’s liquefied natural gas (LNG) facilities were indeed targeted by Iranian missiles, this image does not show the outcome. It can be easily identified as an AI fake: Gemini’s watermark is visible in the lower-right corner.
The image mimics the texture and color of actual satellite photos – which show various landscapes, vegetation and water bodies – but the alleged fires and smoke are inconsistent with how such events look at that scale from orbit.
A reverse image search brings up several recent reposts, indicating that the image was likely created from a real satellite base layer, with AI-generated fire plumes added later.
Additionally, the AI detection tool ImageWhisperer flagged the image as potentially AI-generated with 73% confidence. However, such tools should be used with caution due to known false positives.
Iranian state media shares AI-generated “after” image of drone strike
Claim: The state-affiliated English-language newspaper Tehran Times shared a post on X with satellite images of “an American radar in Qatar.” Two photos purportedly show it before and after it was destroyed in an Iranian drone strike. The post has been viewed more than 950,000 times.
DW Fact Check: Fake
The images do not show Qatar. The location is actually a US naval base in Manama, Bahrain.
The “before” image matches an actual Google Earth capture dated February 10, 2025, right down to the positions of the vehicles.
However, the “after” image is clearly AI-generated: building structures change shape, architectural lines appear inconsistent, and some elements have been added artificially.
To complicate matters, Iran actually attacked this US base – and verified satellite images from Planet Labs and Airbus (published by The New York Times) show authentic damage.
And according to an analysis by ImageWhisperer, “The debris patterns are repetitive and lack the physical complexity of the actual explosion site, and the structural damage is not consistent with the engineering of the radar system it claims to depict.”
While the Tehran Times incorrectly identified the location, the visual similarity between the real and fake before/after sets shows how difficult it can be to spot manipulated satellite images at first glance.
Fake account poses as Chinese intelligence company
Claim: An account on X imitating the Chinese company MizarVision posted images purportedly showing burning oil fields in Qatar.
DW Fact Check: Fake
Not only is this specific image fake and has been reposted many times online – the entire account is fake.
MizarVision, a legitimate Shanghai-based geospatial intelligence company, publishes only on Weibo and WeChat. An account created in January, falsely claiming to be located in “Chinatown, Portland,” used the company’s stolen logo to post images carrying the MizarVision watermark before being deleted.
The company publicly stated in February that any X (formerly Twitter) account using its name was fraudulent.
One image the fake account posted repeatedly showed a heavily filtered black-and-white “satellite” view of Qatar’s Ras Laffan refinery with plumes of smoke. All the blasts appear to be at approximately the same stage, indicating that they were artificially cloned.
A search on Google Earth shows that the underlying image matches the actual layout of the oil tanks – with the plumes added artificially on top.
Caution suggested with satellite imagery
As satellite imagery becomes an increasingly powerful tool in both journalism and warfare, the rise of AI-manipulated visuals poses a growing challenge to public understanding. False or altered images can spread rapidly, shaping narratives long before experts have time to debunk them.
In an age where conflicts unfold in real time on social media, developing digital literacy – and a healthy skepticism towards dramatic “satellite” revelations – is essential. Genuine satellite data is important for documenting events, but distinguishing it from fabricated content will require vigilance from platforms, media organizations, and users alike.
Edited by: Rachel Begg
