Anthropic has inadvertently disclosed the instructions behind its Claude Code AI agent. The exposure could give competitors strategic insight into how the agent is built and could introduce security risks.
The Wall Street Journal reports that the company has filed copyright takedown requests to remove more than 8,000 copies of the leaked code from GitHub, in an effort to contain the spread of the sensitive material.
A spokesperson for Anthropic told the WSJ that the leak did not compromise customer data or the core mathematical frameworks of its AI models. The incident was attributed to …


