Sunday, 12 April 2026 • 09:18
OpenAI plans to launch a new cybersecurity product, but why is it afraid to release it?

Khaberni - OpenAI is putting the finishing touches on an artificial intelligence product with advanced cybersecurity capabilities and intends to launch it for a limited group of partners, according to what Axios reported from an informed source.

The upcoming product from OpenAI is named "Spud".


It is still unclear what the model's cyber capabilities are or how OpenAI plans to launch it, according to the Axios report reviewed by Al Arabiya Business.

Increasing risks and limited release
This comes at a time when artificial intelligence capabilities have reached a crucial turning point, at least in terms of autonomy and hacking ability. AI developers are now so concerned about the damage their tools could cause that they are hesitant to launch them at wide scale.

Anthropic, a competitor of OpenAI, likewise plans a limited launch of its new model "Mythos", unveiled last Tuesday, to a select group of technology and cybersecurity companies, owing to fears over its advanced hacking capabilities.

Over the past year, former government officials and senior security leaders have sounded the alarm about AI models that, in the wrong hands, could one day independently disrupt water utilities, power grids, or financial systems.

It seems that these capabilities have now become a reality.

Even if AI companies hold back their models or release them only in limited fashion, senior security experts agree on one message: there is no turning back.

Rob T. Lee, chief AI officer at the SANS Institute for cybersecurity, said: "You can't prevent models from exploring code or finding vulnerabilities in legacy codebases," adding: "This capability already exists."

Wendy Whitmore, chief security intelligence officer at Palo Alto Networks, told Axios during a panel at the HumanX conference in San Francisco on Tuesday that it will be only weeks or months before a new model with similar capabilities emerges.

Adam Meyers, Senior Vice President of Counter Adversary Operations at CrowdStrike, described the capabilities of the "Mythos" model as an "alarm bell" for the entire sector.

Logical step
Stanislav Fort, CEO of the cybersecurity company Aisle, told Axios that restricting the launch of a new leading model makes "more sense" if companies are worried about the models' ability to invent new vulnerabilities, rather than merely their ability to discover existing flaws.

Lee added that, interestingly, the phased rollout of new AI models closely resembles the way cybersecurity companies currently disclose security vulnerabilities in software.

He said: "This is the same debate we've had for decades about responsible disclosure of vulnerabilities."

Researchers at Aisle found on Wednesday that widely available AI models are indeed capable of detecting some of the vulnerabilities and exploitation vectors discovered by the "Mythos" model.
