X will suspend creators from its revenue-sharing program for 90 days if they post AI-generated videos of armed conflict without clear disclosure. The change took effect immediately, according to a post by Nikita Bier, the company’s head of product.
Bier wrote:
"Today we are revising our Creator Revenue Sharing policies to maintain authenticity of content on Timeline and prevent manipulation of the program." He added, "During times of war, it is critical that people have access to authentic information on the ground. With today’s AI technologies, it is trivial to create content that can mislead people."
Under the updated policy, creators who publish AI-generated conflict footage without stating that it was made with artificial intelligence will lose access to revenue sharing for 90 days. Repeat violations will lead to permanent removal from the program.
The rule targets monetization eligibility rather than account suspension. X has not announced a broader ban on AI-generated content. The policy applies specifically to videos that depict armed conflicts.
Enforcement linked to Community Notes and metadata
Enforcement will rely on user-driven and technical signals.
"This will be flagged to us by any post with a Community Note or if the content contains meta data (or other signals) from generative AI tools," Bier wrote.
The Community Notes system allows contributors to add contextual information to posts that may mislead users. Under the new approach, a note that identifies a video as AI-generated could trigger a monetization review. Metadata and other technical indicators tied to generative AI tools may also lead to enforcement.
Policy shift follows rising war misinformation concerns
The announcement comes amid heightened geopolitical tensions and concerns about manipulated battlefield footage circulating online. On Feb. 28, the United States and Israel launched joint airstrikes on Iran. Digital asset markets reacted within hours: Bitcoin fell to about $63,000 before recovering toward $70,000, according to CoinGecko data.
Online platforms faced intense scrutiny after synthetic war videos and altered images spread during previous conflicts. Generative AI tools now allow users to produce realistic combat footage with limited resources. Bier described the ease of misuse in his post, stating that AI makes it trivial to create misleading content.
The rule introduces a financial deterrent. Creators who rely on revenue sharing must now consider disclosure requirements when posting AI-generated war footage. The company has not indicated changes to its broader moderation framework beyond monetization eligibility.
Wider regulatory and industry context
Governments and civil society groups have pressed technology platforms to address manipulated media during geopolitical crises. The industry has struggled to balance open expression with safeguards against deception. X’s move focuses on economic incentives tied to engagement and virality.
The announcement also arrives during a period when artificial intelligence tools play a role in military and intelligence operations. On March 1, the U.S. military used Anthropic’s Claude AI model to assist with intelligence analysis and targeting during operations linked to the Iran strikes, according to a report by The Guardian.
The convergence of AI-generated media and real-world conflict has raised questions about verification standards. Platforms face pressure to prevent deepfakes and fabricated combat footage from distorting public understanding. X’s latest update addresses one part of that challenge through monetization policy rather than direct content removal.
Bier framed the change as part of a broader trust effort, linking authenticity on the Timeline to the integrity of the revenue-sharing program. The company has not outlined further revisions but indicated that policy adjustments may continue.
For creators who produce synthetic media for commentary or artistic purposes, disclosure now carries direct financial consequences. For X, the measure tests whether monetization rules can curb misleading war content without imposing blanket bans.

Disclaimer: All materials on this site are for informational purposes only. None of the material should be interpreted as investment advice. Please note that, despite the nature of much of the material created and hosted on this website, HODL FM operates as a media and informational platform, not a provider of financial advisory services. The opinions of authors and other contributors are their own and should not be taken as financial advice. If you require advice, HODL FM strongly recommends contacting a qualified industry professional.