- none of the "final" fields have changed after calling each method
- two immutable objects that we just confirmed differ on a property are not the same object
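For concreteness, those two assertions might look like this in Ruby (the `Point` value object here is hypothetical, just to make the pattern visible):

```ruby
# Hypothetical immutable value object for illustration.
Point = Struct.new(:x, :y) do
  def translate(dx, dy)
    Point.new(x + dx, y + dy) # returns a new Point; never mutates self
  end
end

a = Point.new(1, 2).freeze
b = a.translate(3, 0)

# Pointless assertion 1: the frozen ("final") fields haven't changed.
# #freeze already guarantees this, so the check can never fail.
raise "a was mutated" unless a.x == 1 && a.y == 2

# Pointless assertion 2: two objects we just confirmed differ on a
# property are not the same object. Trivially true: only one object
# can be identical to itself.
raise "same object" if a.x != b.x && a.equal?(b)
```

Both assertions restate guarantees the language already provides, so they add run time without adding coverage.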
In addition, there are multiple tests with essentially identical code, multiple test classes with largely duplicated tests, etc. [0]

[0]: https://www.codewithjason.com/examples-pointless-rspec-tests...
it { expect(classroom).to have_many(:students) }
If I catch them I tell them not to, and they remove it again, but a few do end up slipping through. I'm not sure that they're particularly harmful any more, though. It used to be that they added extra weight to your test suite, meaning that when you made changes you had to update pointless tests.
But if the agent is updating the pointless tests for you, I can afford a little unnecessary testing bloat.
Admittedly, in the absence of halfway competent static type checking, it does seem like a good way to prevent what would be a very bad regression. It doesn’t seem worse than tests which check that a certain property is non-null (when that’s a vital business requirement and you’re using a language without a competent type system).
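As a sketch of that kind of guard, assuming a hypothetical `Invoice` record where `customer_id` is the vital must-not-be-nil field:

```ruby
# Hypothetical record; in a dynamically typed codebase nothing else
# enforces that customer_id is present.
Invoice = Struct.new(:customer_id, :total)

def assert_customer_present!(invoice)
  # The regression this guards against: a code path that builds an
  # Invoice without a customer. A competent type system (non-nullable
  # types) would catch this at compile time instead.
  raise ArgumentError, "customer_id must not be nil" if invoice.customer_id.nil?
  invoice
end

ok = assert_customer_present!(Invoice.new("c-123", 9.99))
```

The test earns its keep only because the language can't express "this field is never nil" statically.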
* no-op tests
* unit tests labeled as integration tests
* tests marked as skipped because they were failing and the agent didn't want to fix them
* tests that can never fail
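As plain-Ruby sketches (names hypothetical), the no-op, skipped-because-failing, and can-never-fail patterns look like:

```ruby
# A no-op test: runs, asserts nothing, always "passes".
def test_noop
  # intentionally empty
end

# A test skipped because it was failing: the broken assertion is
# hidden rather than fixed.
def test_skipped
  return :skipped
  raise "failed" unless 1 == 2 # never reached
end

# A test that can never fail: the expectation is a tautology.
def test_tautology(value)
  raise "failed" unless value.to_s == value.to_s
end

results = [test_noop, test_skipped, test_tautology(42)]
```

All three report green no matter what the production code does.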
Probably at any given time the tests are 2-4% broken. I’d say about 10% of one-shot tests are bogus if you’re just working with spec + chat and don’t have extra testing harnesses.
Worse: once you have one "bad apple" in your pile of tests, it decreases trust in the _whole batch of tests_. Each time a test passes, you have to think if it's a bad test...
Many times I've observed that the tests added by the model pass as part of the changes, but still pass even after those changes are reverted.
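A cheap way to surface this, sketched below with lambdas standing in for the pre- and post-change implementations: run the new test against both versions, and flag it as bogus if it passes on both.

```ruby
old_impl = ->(n) { n }       # behavior before the change
new_impl = ->(n) { n * 2 }   # behavior after the change

# The test the agent added for the new behavior:
new_test = ->(impl) { impl.call(2) == 4 }

passes_after  = new_test.call(new_impl) # true, as expected
passes_before = new_test.call(old_impl) # a meaningful test fails here

bogus = passes_after && passes_before
```

In a real repo the same check is just: stash or revert the change, rerun only the new tests, confirm they fail, then restore the change.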
It can still cheat, but it's less likely to cheat.
I have a hard enough time getting humans to write tests like this…