Khaberni - Anthropic is going through a rough patch. Despite its reputation as a company that places safety and responsibility at the core of developing artificial intelligence technologies, it has faced two leak incidents that stirred widespread controversy in tech circles.
The company is known for its strict stance on artificial intelligence risks and its intensive research in the field, as well as for entering direct discussions with government bodies such as the Department of Defense about the use of these technologies.
However, a simple human error was enough to cause an unexpected crisis.
In the most recent incident, the company revealed that it accidentally released a version of the Claude Code tool that included a sensitive file containing about 2,000 source code files and more than 512,000 lines of code, representing the core structure of one of its most important products, according to a report published by TechCrunch.
Security researcher Shafron Sho quickly noticed the leak and published its details on the X platform, which contributed to the news spreading widely.
For its part, the company downplayed the severity of the incident, asserting in a statement that the problem lay in the release package and stemmed from human error, not a security breach.
This development comes just days after a report published by Fortune magazine revealed that the company had accidentally leaked about 3,000 internal files, including a draft blog post discussing a new, as-yet-unannounced artificial intelligence model.
Claude Code is one of the company's standout products: it allows developers to write and edit code using artificial intelligence, and it has made significant strides that have turned it into a strong competitor in the market.
According to a Wall Street Journal report, this progress led OpenAI to reassess its priorities, halting development of the Sora video product months after its launch to focus more on developer tools and business features.
Although the leak did not include the artificial intelligence model itself, it exposed the programming structure that determines how the model works and is used, which some developers saw as an opportunity to gain a deeper understanding of how such systems are built.
As investigations continue, the question remains how these incidents will affect the company's reputation, especially amid fast-paced competition in the artificial intelligence market, where rivals may benefit from the exposed information, even if its impact proves limited in the long term.