Khaberni - Amandeep Gill, the UN Secretary-General's Envoy on Technology, warned that the uncontrolled development and use of artificial intelligence could lead to the emergence of weapons with an unacceptable degree of autonomy.
Gill said that artificial intelligence is a powerful technology that can accelerate the development of existing weapon systems, and that we may see future systems with a far higher degree of autonomy than today's automated systems.
According to the UN official, governments should maintain human control over weapons and ensure that the military personnel who make the decision to "choose targets" remain accountable and responsible.
The question then arises: how can we ensure that this technology does not become yet another destabilizing factor? What are the risks of the uncontrolled use of military technology? How can those risks be avoided, and how great is its destructive military potential?
A complex challenge
Dr. Mohamed Mohsen Ramadan, Head of the Artificial Intelligence and Cybersecurity Unit at the Arab Center for Research and Studies, tells "Al Arabiya.net/Al Hadath.net" that the UN warning is not just a political statement but an early alarm about a new phase of an arms race built on algorithms capable of making lethal decisions independently. The transition toward autonomous decision-making weapons, he says, is a complex challenge that cuts across cybersecurity, military technology, international humanitarian law, and the stability of the international order.
He continued: the real threat lies in the nature of artificial intelligence itself. The AI used in modern weapon systems relies not only on direct programming but also on self-learning neural networks, algorithms that make decisions in unstable environments, and sensor systems that depend on potentially misleading data.
An offensive tool for the adversary
He added that with the shift from "human support" models to "increasing autonomy" models, there is a risk of a weapon capable of making an offensive decision without precise human oversight, which may produce military outcomes inconsistent with political intent or the rules of engagement. A second danger lies in the technical and security risks of autonomous decision-making weapons, above all their vulnerability to cyber intrusion, since intelligent combat systems depend entirely on a digital framework of control algorithms, navigation systems, databases, and communication networks.
He stated that any hacking or manipulation of these components could alter the weapon's course, redirect it toward civilian targets, disable its fail-safe system, or trigger unauthorized offensive operations, thereby turning the weapon from a defensive tool into an attack platform in the adversary's hands.
Loss of human control
On adversarial manipulation, Ramadan explained that adversaries can deceive artificial intelligence systems with precisely altered images, misleading electronic signals, and false data injected during operation. It has been demonstrated scientifically that algorithms make wrong decisions at a high rate when confronted with small changes invisible to the human eye, a problem he links to the so-called "loss of human oversight".
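The failure mode Ramadan points to is well documented in machine-learning research under the name "adversarial examples". The sketch below is purely illustrative: a toy linear classifier with randomly generated weights stands in for a real targeting model, and the labels "engage"/"hold" are hypothetical. It shows how a fast-gradient-sign-style perturbation, too small to notice in any single input, can still flip the model's decision:

```python
import numpy as np

# Illustrative only: a toy linear classifier with random weights stands in
# for a trained targeting model; "engage"/"hold" are hypothetical labels.
rng = np.random.default_rng(0)
d = 10_000
w = rng.normal(size=d)   # stand-in for learned model weights
x = rng.normal(size=d)   # stand-in for one sensor input

def predict(v):
    return "engage" if w @ v > 0 else "hold"

# FGSM-style step: for a linear score w @ v, the gradient w.r.t. v is w,
# so nudging every input by eps against the current decision is the
# worst-case small perturbation.
eps = 0.05
x_adv = x - eps * np.sign(w @ x) * np.sign(w)

print(predict(x), "->", predict(x_adv))                          # decision flips
print("largest change in any input:", np.abs(x_adv - x).max())   # just eps
```

Each individual input moves by at most 0.05 on a unit scale, yet the accumulated effect across thousands of dimensions is enough to push the score over the decision boundary, which is exactly the "small changes invisible to the human eye" problem described above.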
Ramadan noted that an autonomous-decision weapon collapses the decision-making cycle of "monitor, analyze, assess the situation, decide, launch" into a single instant automated step. This compression may lead to unintended escalation, strikes on unapproved targets, the loss of any ability to intervene during an operational malfunction, and offensive decisions contrary to international humanitarian law.
He added that another significant axis is the legal accountability gap: when a smart weapon wrongly distinguishes between a combatant and a civilian, who is responsible? The military commander, the developing company, the programmer, or the system itself? This gap represents a direct threat to the international justice system.
Ban on autonomous weapons
Dr. Mohsen Ramadan recommends preserving the condition of full human control ("human-in-the-loop"): a human must remain the final decision-making authority over the use of lethal force, by reviewing coordinates, verifying the nature of the target, manually authorizing launch, and retaining the ability to abort before execution. He also calls for a binding international framework to regulate autonomous decision-making weapons, and he backs the UN's move to ban weapons that make independent lethal decisions by 2026, together with uniform standards for degrees of autonomy, international inspection mechanisms, and the registration of military smart systems in a UN database.
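As a minimal sketch of what the "human-in-the-loop" condition Ramadan describes could look like in software terms, consider the following. Everything here is hypothetical (the Engagement type, the review prompt, the abort defaults); the point is structural: the system can propose, but only an explicit human decision can authorize, and every other path aborts.

```python
from dataclasses import dataclass

@dataclass
class Engagement:
    target_id: str            # hypothetical identifier proposed by the system
    coordinates: tuple        # (lat, lon) to be reviewed by the operator
    classification: str       # e.g. "combatant", "civilian", "unknown"

def human_authorizes(e: Engagement) -> bool:
    # The operator reviews coordinates and target nature; only an explicit
    # "yes" permits launch, and silence or doubt defaults to abort.
    print(f"Review target {e.target_id} at {e.coordinates}, "
          f"classified as {e.classification!r}")
    return input("Authorize launch? [yes/NO] ").strip().lower() == "yes"

def engage(e: Engagement) -> None:
    if e.classification != "combatant":
        print("Abort: target not positively identified as a combatant.")
        return
    if not human_authorizes(e):
        print("Abort: no human authorization given.")
        return
    print("Launch authorized by a human operator.")  # machine never self-launches

engage(Engagement("T-042", (0.0, 0.0), "unknown"))   # aborts at the first gate
```

The design choice mirrors the recommendation in the text: the default on every branch is to stop, and the lethal action is reachable only through a manual, affirmative human step.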
For his part, General Adel Al-Amoudi, a lecturer at the Military Academy for Higher Studies and Strategy, says these weapons are one of the negative outcomes of technological development: a lethal decision requires an assessment of the situation, and these systems have no conditions, tools, or mechanisms that qualify them to make it. That situation-assessment capacity is precisely what is missing in high-tech artificial intelligence weapons.
He noted that he agrees with the UN's call to prohibit such weapons and withhold any authorization of them: they must be given no autonomy and must be completely restricted, since granting them full freedom of action would cause enormous losses that could, at the extreme, threaten the annihilation of the world. Intelligent weapons, he concluded, endanger human societies, peoples, and stability everywhere.