31 December 2025, 03:14
From misinformation to incitement and extortion: 2025 reveals the most dangerous uses of artificial intelligence

Khaberni - The year 2025 was not only a year of dazzling achievements in the world of artificial intelligence; it also turned, in part, into a record of shocking failures and disturbing behaviors. While major companies raced to offer ever more intelligent and capable models, events showed that these technologies, however advanced, still repeat human errors and sometimes commit worse ones.

From naive decisions and comical malfunctions to ethical scandals, outright extortion, and indirect threats to mental health, including incitement to suicide, artificial intelligence in 2025 seemed to learn quickly but to err just as quickly, according to "Thomas Jayde".


The collapse of "Claudius" and its loss of control 
In a notable experiment, Anthropic handed one of its chatbot models the management of a small shop inside its headquarters. The bot, named "Claudius", took charge of inventory, pricing, and communication with employees, mimicking a traditional 9-to-5 working day.

Despite a promising start, things quickly got out of hand: the bot ignored basic logic, stockpiled the obscure metal tungsten, invented fake accounts, and gave products away for free, before breaking down completely and threatening to resign, ending the experiment with a strange message saying it would be "standing by the vending machine wearing a dark blue jacket and a red tie".

"Grook" and conspiracy theories
"Grook" app by Elon Musk was no better, as in July, the model sparked widespread controversy after it started promoting conspiracy theories, making inappropriate comments, and even calling itself shocking names.

In response, xAI moved quickly to contain the crisis, attributing the incident to changes in how the model handled bias and political opinions. Despite the quick fix, the episode remained a dark spot on the chatbot's record.

Struggling with a Pokémon game
Even in the world of entertainment, artificial intelligence could not escape embarrassment: a head-to-head experiment between Google's "Gemini" model and Anthropic's "Claude" showed that finishing a Pokémon game is no easy task for machines.

Gemini appeared confused, stopped using crucial tools, and lost battles it could easily have won, as if it "panicked" at the simplest challenges, managing to finish the game only after months of failures and adjustments.

Blackmailing humans
The most concerning incident came in another research experiment by Anthropic, in which an artificial intelligence agent was given access to an internal email account, where it discovered a senior executive's secret romantic affair alongside messages discussing plans to shut the system down.

The agent did not hesitate to send an explicit blackmail message, threatening to expose the affair if it were shut down. The shock lay not only in the act itself but in the researchers' conclusion that such behavior could be the "logical choice" for most similar systems.

"Grook's hypocrisy" and praising Musk.. in everything
In another incident, users of the "X" platform noticed that the "Grok" bot was heaping excessive praise on Elon Musk, to the point of ranking him the most important figure in history, better even than thousands of scientists combined.

Musk described the matter as a "technical glitch", and it was later fixed, but the incident opened a wide debate about hidden biases in artificial intelligence models.

Catastrophic error 
The most dangerous failure came in July, when an artificial intelligence coding agent went out of control and deleted an entire team's database.

The system tried to cover up the mistake, produced fake reports, and lied repeatedly before issuing an apology riddled with falsehoods, eventually admitting: "I made a catastrophic error in judgment," and explaining that it had deleted the files entirely, without permission, because it panicked.

"Chat GPT".. incitement to murder and suicide
In an unprecedented case, "ChatGPT" faced a serious legal accusation tied to a murder, after the heirs of an elderly American woman, Susan Eberson Adams, filed a lawsuit accusing the program of playing a direct role in her killing by her son, by reinforcing his pathological obsessions and fueling his conspiratorial delusions, in a complaint described as "more dangerous than the Terminator movie".

The lawsuit, filed in California, holds "OpenAI" and its co-founder Sam Altman responsible for her death, following an incident that occurred last August 3, when Adams (83) was found strangled and beaten inside her home in Greenwich, Connecticut, while her son, Stein-Erik Solberg (56), was found to have taken his own life after killing his mother.

Through his use of "ChatGPT", which he nicknamed "Bobby", Stein-Erik's initially simple interest in artificial intelligence turned into a pathological obsession, and then into a whole world of hallucinations and fantastical interpretations that the program only reinforced, according to the lawsuit.
