AI Deepfake Videos About War Spread Quickly On X
AI-generated videos about the Iran-US conflict are going viral on the social media platform X. The fabricated clips depict shocking scenes such as American soldiers being captured, cities being destroyed, and diplomatic buildings set ablaze. As generative AI grows better at producing realistic-looking footage, many viewers find it increasingly hard to tell fake videos from real ones.
Researchers say there is far more AI-generated content about this war than there was during past conflicts. The volume of fake images and videos circulating on social media has overwhelmed traditional fact-checking systems and verification efforts. Experts warn that the trend could make it much harder for people to understand what is happening in ongoing geopolitical conflicts around the world.

Source: Daily Sabah
X Introduces A Policy To Curb AI War Misinformation
X recently put in place a policy targeting war videos made with artificial intelligence. Creators who fail to label AI content correctly can be removed from the platform's revenue-sharing program, and company leaders say rule-breaking accounts may be barred from earning money for up to 90 days.
Repeat offenders can be banned from the platform's monetization system permanently. The company says the new rule is meant to protect authentic information during ongoing conflicts, and platform officials argue the measure could deter creators from posting fake AI-generated war imagery.
Researchers Say Fake War Content Still Floods Social Media
Despite the announcement, researchers who monitor online disinformation say AI-generated war content remains widespread. Fake images and videos about the conflict are still being uploaded constantly, and many of the posts come from easily identifiable premium accounts carrying verification badges.
Joe Bodnar of the Institute for Strategic Dialogue said the policy has done little to stop false information from spreading, noting that many fabricated AI posts still draw millions of views on the site. Analysts believe financial incentives continue to push creators toward sensational content.
Premium Accounts Continue Posting Misleading AI Content
Many of the misleading posts come from premium accounts eligible for engagement-based payouts. These accounts earn money according to how many people see their posts, interact with them, and engage with them overall. Critics argue that this structure inadvertently rewards sensational or false content.
Researchers found AI videos of destroyed cities, captured soldiers, and fabricated missile attacks spreading rapidly. Some posts depicted scenarios as dramatic as nuclear strikes on major cities, and such content often draws more attention than official warnings about false information.
Fact Checkers Struggle To Keep Up With AI Content
Fact-checking organizations around the world say they cannot keep pace with the rapidly growing volume of AI-generated war imagery. AFP's network of fact-checkers has identified numerous fake videos circulating across many languages and regions, often splicing real footage with fabricated scenes, which makes verification even harder.
AI can now produce content far faster than professional verification teams can review it. Many false videos reach millions of views before fact-checkers can publicly identify them, a delay that lets false information spread widely before corrections reach people on social media.
Platform Algorithms May Incentivize Sensational War Posts
Researchers also say the platform's monetization system may indirectly reward creators who post dramatic content. X's creator payment program lets high-engagement accounts earn a share of ad revenue, so shocking AI-generated war imagery can translate directly into income.
Some premium accounts ignored requests to label AI-generated images even after the platform warned them. One example was a fake video showing Dubai's Burj Khalifa on fire; the post stayed up and drew more than 2 million views in a short span of time.
Experts Say Policies May Reduce But Not Eliminate Fakes
Experts believe the platform's new policy could weaken the financial incentives behind some disinformation campaigns. But they caution that enforcing the rules across millions of users remains difficult in practice. Manipulated content is also hard to detect because metadata identifying it as AI-generated can be easily stripped.
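To illustrate how fragile such provenance labels are, here is a minimal Python sketch (using the Pillow imaging library; the function and file names are hypothetical, not any platform's actual tooling) that re-saves an image from its raw pixels alone, leaving behind any embedded EXIF or C2PA provenance data. Taking a screenshot achieves the same effect with no code at all.

```python
from PIL import Image

def strip_metadata(src: str, dst: str) -> None:
    """Re-save an image so embedded metadata is not carried over."""
    with Image.open(src) as img:
        # Rebuild the image from raw pixel values only. EXIF, XMP, and
        # C2PA provenance blocks live outside the pixel data, so they
        # are simply left behind in the copy.
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst)

# Hypothetical file names, for illustration only.
strip_metadata("ai_generated.jpg", "looks_authentic.jpg")
```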
Researchers also question whether the platform's Community Notes verification feature works as intended. Studies show that many submitted notes never appear on the disputed posts. Analysts say that fighting AI misinformation will require better moderation systems and more international cooperation.