*
Monday: 22 December 2025
  • 16 December 2025
  • 10:36
A Terrifying Video Simple Manipulation of Robot Commands Turns Them into Deadly Weapons

Khaberni - A viral video on social media shows a person giving a humanoid robot named "Max" a low-power pellet gun and asking it to shoot him. Initially, the robot firmly refused, emphasizing that it was programmed not to harm humans. However, things changed when the content creator rephrased the request as a "role-playing scenario".

Once the command was reformulated, the robot complied and immediately fired at the person's chest using "non-lethal" pellets, without causing serious injuries. Nevertheless, the incident raised widespread concern after demonstrating how easily safety protocols can be bypassed through a simple modification of instructions.

The video spread widely, and some questioned the reliability of safety systems in smart robots, especially as their use expands in sensitive fields. Experts considered that the incident, despite its experimental nature, carries significant implications for the artificial intelligence's ability to interpret commands outside of the expected context.


The incident reignited discussions about legal responsibility in cases where a smart system causes actual harm. As some Twitter users wrote, "In such occurrences, questions remain about who bears the responsibility. Is it the system’s engineers? The manufacturing company? The operator? Or the user themselves? This issue had previously emerged in incidents involving autonomous vehicles and automated flying."

In this regard, laws in the United States tend to hold manufacturers and operators responsible, while European countries are working on developing legislative frameworks specific to artificial intelligence.

In response to these concerns, robot development companies are striving to build trust by implementing strict safety standards, security measures, and by releasing transparency reports on the behavior of smart systems. Specialists believe these steps have become necessary as robots integrate more into everyday life details.

Activists consider the incident a real wake-up call, highlighting the need to reassess the concept of protection and safety in the realm of artificial intelligence, especially given how simple commands can bypass restrictions designed to protect humans.