Saturday, 27 December 2025
25 December 2025, 16:08
Misleading Artificial Intelligence Videos Flood Social Media Sites

Khaberni - This past October, a video circulated on the “TikTok” app showing a woman being interviewed by a TV correspondent about selling food stamps (benefits granted in America to low-income individuals), as reported by Steven Lee Myers and Stuart A. Thompson(*).

The woman was not real, and the conversation never happened; the video was generated by artificial intelligence. Yet people appeared to believe it was a real conversation about selling food stamps for cash, which is a crime.

“Fox News” was also deceived by a similar fake video, citing it as an example of public anger over the misuse of food stamps in an article that was later deleted from its website.

Videos such as the fabricated interview, created with the new “Sora” app from “OpenAI”, show how easy it has become to manipulate public opinion with tools capable of conjuring an alternative reality from a few simple prompts.

In the two months since the launch of “Sora”, misleading videos have spread widely on platforms such as “TikTok”, “X”, “YouTube”, “Facebook”, and “Instagram”, according to experts who monitor them. This spread has raised concerns about a new generation of misinformation and fake news.

Most major social media companies have policies requiring disclosure of artificial intelligence use and generally prohibit content intended to mislead. However, these controls have proven largely inadequate in keeping pace with the rapid advances represented by “OpenAI” tools.

While many of the videos offer satirical or fabricated scenes of children and pets, some aim to fuel the hatred that often dominates online political discussions. Such clips have already been used in foreign influence operations.

Researchers tracking deceptive uses say the responsibility now falls on the companies to do more to ensure people know what is real and what is fake.

Sam Gregory, executive director of “Witness”, a human rights organization focused on the risks of technology, asked: “Can they improve content moderation and combat misinformation and falsehoods? They clearly are not doing so. Can they do more to proactively detect and label artificial intelligence content themselves? The answer is yes, and they are not doing it.”

The fabricated videos have been used not only to mock the poor but also to mock President Trump.

So far, the platforms have relied heavily on content creators to disclose that what they publish is not real, but creators do not always do so. And although platforms such as “YouTube” and “TikTok” have ways to detect that a video was fabricated with artificial intelligence, they do not always alert viewers immediately.

Nabiha Syed, executive director of “Mozilla”, the non-profit organization focused on technology safety that supports the “Firefox” browser, said of the social media companies: “They should have been prepared.”

The companies developing artificial intelligence tools do try to make the computer-generated nature of the content clear to users. Both “Sora” and its competitor “Veo”, offered by “Google”, put a visible watermark on the videos they produce; “Sora”, for example, stamps the “Sora” mark on each video. The companies also embed invisible, machine-readable metadata to identify the source of each fake video.

The idea is to inform users that what they are watching is not real, and to give the platforms that display these videos the digital signals needed to detect them automatically.
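To make that concrete, the sketch below shows one way a platform or researcher might probe a downloaded clip for provenance metadata. It is a minimal illustration only: it assumes the exiftool utility is installed, the file name is hypothetical, and the marker strings are guesses at what provenance data might contain, not the actual tags that “Sora” or “Veo” embed.

```python
import json
import subprocess

# Illustrative guesses at strings that provenance metadata might contain.
# These are assumptions for the sketch, not the actual tags written by Sora or Veo.
PROVENANCE_HINTS = ("c2pa", "contentcredentials", "jumbf", "openai", "sora", "veo")

def looks_ai_generated(path: str) -> bool:
    """Scan a video's metadata with exiftool and look for provenance hints."""
    result = subprocess.run(
        ["exiftool", "-json", path],
        capture_output=True, text=True, check=True,
    )
    # exiftool -json returns a list with one dictionary of tag -> value per file.
    tags = json.loads(result.stdout)[0]
    haystack = " ".join(f"{key} {value}" for key, value in tags.items()).lower()
    return any(hint in haystack for hint in PROVENANCE_HINTS)

if __name__ == "__main__":
    # Hypothetical file name, used only for the sake of the example.
    print(looks_ai_generated("downloaded_clip.mp4"))
```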

Some platforms use this technology. “TikTok” announced last week, apparently in response to concerns about how convincing the fake videos are, that it will tighten its rules on disclosing artificial intelligence use. It promised new tools that let users decide how much synthetic content, compared with real content, they wish to see.

“YouTube” uses the invisible watermarking embedded in “Sora” videos to add a small label indicating that artificial intelligence videos are “altered or synthetic”.

Jack Malon, a spokesman for “YouTube”, said: “Viewers increasingly want greater transparency about whether the content they are watching is altered or synthetic.”

However, the labels sometimes appear only after the videos have been viewed by thousands or even millions of people. Sometimes, they do not appear at all.

Malicious actors have discovered how easy it is to circumvent the disclosure rules. Some simply ignore them, while others manipulate videos to remove the identifying watermarks. The “New York Times” found dozens of examples of “Sora” videos on “YouTube” without the automatic label.

Several companies offer services to remove logos and watermarks. Editing or sharing videos can also strip the embedded metadata that marks the original as having been created with artificial intelligence.
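As a rough illustration of how fragile that embedded metadata is, the following sketch re-encodes a clip with ffmpeg while telling it to copy no global metadata into the output, using ffmpeg's real "-map_metadata -1" option. The file names are hypothetical; the point is only that an ordinary re-encode, of the kind many editing and sharing tools perform, can silently drop the provenance signal.

```python
import subprocess

# Re-encode a clip and drop all container-level metadata, roughly what can happen
# when a video passes through an editing or re-sharing tool. The file names are
# hypothetical; "-map_metadata -1" is ffmpeg's option for copying no global metadata.
subprocess.run(
    [
        "ffmpeg", "-y",
        "-i", "original_with_provenance.mp4",
        "-map_metadata", "-1",              # discard the source's global metadata
        "-c:v", "libx264", "-c:a", "aac",   # re-encode video and audio streams
        "stripped_copy.mp4",
    ],
    check=True,
)
```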

Even when logos remain visible, users browsing quickly on their phones might not notice them.

According to an analysis by the “New York Times”, which used artificial intelligence tools to help categorize comment content, nearly two-thirds of the more than 3,000 users who commented on one “TikTok” food stamp video treated it as if it were real.
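The Times did not publish its method, so the following is only a toy sketch of what automated comment triage can look like: a keyword heuristic that flags comments showing no sign of skepticism. A real analysis would rely on a trained classifier or a large language model, and the sample comments here are invented for illustration.

```python
import re

# Toy triage of comments on a suspected AI-generated clip. This is purely
# illustrative and not the method the New York Times used; the sample comments
# below are invented for the example.
SKEPTICAL_HINTS = ("ai", "fake", "generated", "sora", "not real", "watermark")

def treats_clip_as_real(comment: str) -> bool:
    """Flag comments that show no obvious sign the commenter doubts the video."""
    text = comment.lower()
    # Word-boundary match so a short hint like "ai" does not match words like "again".
    return not any(re.search(rf"\b{re.escape(hint)}\b", text) for hint in SKEPTICAL_HINTS)

comments = [
    "She should be arrested, you cannot sell those.",
    "This is clearly AI, look at the watermark.",
    "Unbelievable that people actually do this.",
]
believers = sum(treats_clip_as_real(c) for c in comments)
print(f"{believers} of {len(comments)} comments treat the clip as real")
```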

In a statement, “OpenAI” said that it prohibits deceptive or misleading uses of “Sora” and takes action against policy violators. The company added that its app is just one of many similar tools capable of creating increasingly realistic videos, many of which apply no safeguards or restrictions at all.

A spokesperson for “Meta”, the owner of “Facebook” and “Instagram”, said it is not always possible to identify every video generated by artificial intelligence, especially as the technology advances so rapidly. He added that the company is working to improve its content classification systems.

The platforms “X” and “TikTok” did not respond to requests for comment on the spread of fake videos generated by artificial intelligence.

Alon Yamin, the chief executive of “Copyleaks”, a company focused on detecting AI-generated content, said social media platforms have no financial incentive to restrict the spread of these clips as long as users keep clicking on them.

He added: “In the long term, when 90 percent of the traffic on your platform is content generated by artificial intelligence, that raises questions about the quality of the platform and its content. So perhaps, in the long term, there will be bigger financial incentives to actually regulate AI content. But in the short term, it is not a major priority.”
