In this case Anthropic published the Claude Code source map file on npm themselves. https://venturebeat.com/technology/claude-codes-source-code-...
Trade secrets aren't very well protected, though.
You can sue the person who leaked or stole your secret, but once it's out there, there's nothing you can do to everyone else who keeps sharing it.
I mean, I'm not the biggest fan of AI on the planet by any means (which I think my post history would prove, lol), but isn't prompt design and steering the AI "human creativity"? In one of my AI-assisted projects I spent something like a week in unending threads of posts trying to make the AI do stuff the way I wanted: testing the output, finding a bazillion bugs and "basic bitch" solutions, asking for more robust this and edge-case that. It felt like I wrote a novel. How is that not creativity? (Crayon-eater or Picasso, creativity is creativity.)
I think from this view it makes sense that an LLM is a tool, and the operator of that tool (or their employer) can own the output.
The tricky part is when you squint and view an LLM, with its training input and prompted output, as a machine that launders copyrighted input into customized output now copyrighted by a new owner.
A machine that vacuums up film reels and splices them, according to a user's instructions, into a compilation of recent animated Disney movies with the Shrek soundtrack superimposed would probably not survive a legal challenge if the user of the tool attempted to claim full copyright on the output.