LLM training is unnecessary for what we're discussing. It only requires using an LLM: original code -> specs as facts -> specs to tests -> tests to new code.
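A minimal sketch of that pipeline, assuming a caller-supplied `llm(prompt)` function as a hypothetical stand-in for any model API (the prompts and the `regenerate` helper are illustrative, not from the comment):

```python
# Clean-room-style regeneration pipeline: original code -> specs -> tests -> new code.
# Each stage sees only the previous stage's output, never the original source,
# which is the property the next comment questions.

def regenerate(original_code: str, llm) -> str:
    """Regenerate code via specs and tests; llm is any prompt->text callable."""
    specs = llm(
        "State the observable behavior of this code as plain-English specs, "
        "without quoting the code itself:\n" + original_code
    )
    tests = llm("Write unit tests that check only these specs:\n" + specs)
    return llm("Write an implementation that passes these tests:\n" + tests)
```

Note that only the first prompt ever contains the original code; the later stages are driven purely by the intermediate specs and tests.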
It is hard to prove that the model doesn't recognize the tests and reproduce the memorized code. It's not a clean room.