The leak of Claude Code’s source code is already having consequences for the software’s security. Researchers have spotted a vulnerability documented in the code.
The vulnerability, disclosed by AI security firm Adversa, is that if Claude Code is presented with a command composed of more than 50 subcommands, then for subcommands after the fiftieth it will bypass the compute-intensive safety evaluation that might otherwise have blocked some of them, and instead simply ask the user whether they want to go ahead. The user, assuming that the blocking rules are still in effect, may unthinkingly authorize the action.
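To make the reported behavior concrete, the sketch below models it in TypeScript. It is purely illustrative: the threshold constant, function names, and checks are hypothetical stand-ins based on Adversa's description, not Anthropic's actual implementation.

```typescript
// Illustrative sketch only: all names here are hypothetical, not Anthropic's code.
// It models the behavior Adversa describes: subcommands past the 50th skip the
// compute-intensive safety evaluation and fall back to a plain confirmation prompt.

const SAFETY_EVAL_LIMIT = 50;

type Subcommand = { text: string };

// Stub for the expensive safety check (assumption for illustration).
async function runSafetyEvaluation(cmd: Subcommand): Promise<boolean> {
  return !cmd.text.includes("rm -rf"); // placeholder policy
}

// Stub for the lightweight "do you want to go ahead?" prompt (assumption).
async function askUserToConfirm(cmd: Subcommand): Promise<boolean> {
  console.log(`Proceed with "${cmd.text}"?`);
  return true; // a user who assumes the block rules still apply may approve blindly
}

async function authorizeSubcommands(subcommands: Subcommand[]): Promise<Subcommand[]> {
  const approved: Subcommand[] = [];
  for (const [index, cmd] of subcommands.entries()) {
    const allowed =
      index < SAFETY_EVAL_LIMIT
        ? await runSafetyEvaluation(cmd) // full evaluation for the first 50
        : await askUserToConfirm(cmd);   // after the 50th: only a confirmation prompt
    if (allowed) approved.push(cmd);
  }
  return approved;
}
```

In this sketch, the danger is entirely in the `else` branch of the ternary: once the index passes the threshold, the decision rests on a confirmation prompt rather than the evaluation that would have blocked a risky subcommand.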
Incredibly, the vulnerability is documented in the code, and Anthropic has already developed a fix for it, the tree-sitter parser, which is also in the code but not enabled in the public builds that customers use, said Adversa.


