In practice I’ve found that property-based testing has a very high ratio of value to effort per test written.

UI tests like:

* if there are one or more items on the page, one has focus

* if there is more than one, hitting tab changes focus

* if there is at least one, focusing element x, hitting tab n times and then shift-tab n times puts me back on the original element

* if there are n elements, n > 0, hitting tab n times visits n unique elements

Are pretty clear and yet cover a remarkable range of issues. I had these for a UI library, which came with the start of “given a UI built with arbitrary calls to the API, those things remain true”.
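A minimal sketch of the tab/shift-tab round-trip property above, using a hypothetical `FocusRing` toy model in place of a real UI and plain `random` in place of a property-testing library like Hypothesis, just to keep it self-contained:

```python
import random

class FocusRing:
    """Toy model of tab-focus behavior over a list of page elements."""
    def __init__(self, n_elements):
        self.n = n_elements
        # invariant: if there are one or more items, one has focus
        self.focused = 0 if n_elements > 0 else None

    def tab(self):
        if self.n > 0:
            self.focused = (self.focused + 1) % self.n

    def shift_tab(self):
        if self.n > 0:
            self.focused = (self.focused - 1) % self.n

def check_tab_roundtrip(trials=1000):
    """Property: tab n times, then shift-tab n times, restores the
    originally focused element, for arbitrary element counts and n."""
    for _ in range(trials):
        ring = FocusRing(random.randint(1, 20))
        start = ring.focused
        n = random.randint(0, 50)
        for _ in range(n):
            ring.tab()
        for _ in range(n):
            ring.shift_tab()
        assert ring.focused == start
    return True
```

In a real test suite the random inputs (and shrinking of failing cases) would come from the testing library rather than a hand-rolled loop.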

Now it’s rare that they’d catch very specific edge cases, but it was hard to write something wrong accidentally and still pass the tests. They actually found an inconsistency in the specification itself.

I think they can often be easier to write than example-based tests, and clearer to read, because they state what you are actually testing (a generic property) rather than a few explicit examples.

What you could add, though, is code coverage: if the tests never go through your extremely specific branch, that’s a sign there may be a bug hiding there.

reply
An important step with property-based testing and similar techniques is writing your own generators for your domain objects. I have used this to incredible effect for many years in projects.
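A sketch of what a domain-object generator can look like, with a hypothetical `User` record and a JSON round-trip property (a library like Hypothesis would give you composable strategies for this; the stdlib version below just illustrates the shape):

```python
import json
import random
import string
from dataclasses import dataclass, asdict

@dataclass
class User:
    name: str
    age: int
    tags: list

def gen_user(rng):
    """Domain-aware generator: produces valid but varied User records,
    including awkward names with spaces, apostrophes, and hyphens."""
    name = "".join(rng.choices(string.ascii_letters + " '-", k=rng.randint(1, 12)))
    age = rng.randint(0, 120)
    tags = [rng.choice(["admin", "beta", "trial"]) for _ in range(rng.randint(0, 3))]
    return User(name, age, tags)

def check_json_roundtrip(trials=500):
    """Property: serializing a User to JSON and parsing it back
    preserves every field."""
    rng = random.Random(0)
    for _ in range(trials):
        u = gen_user(rng)
        assert User(**json.loads(json.dumps(asdict(u)))) == u
    return True
```

The payoff of a custom generator is that it bakes your domain knowledge (valid ranges, tricky-but-legal values) into every property test that uses it.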

I work at Antithesis now so you can take that with a grain of salt, but for me, everything changed for me over a decade ago when I started applying PBT techniques broadly and widely. I have found so many bugs that I wouldn't have otherwise found until production.

reply
"Exhaustively covering the search space" or "hitting specific edge cases" is the wrong way to think about property tests, in my experience. I find them most valuable as insanity checks, i.e. they can verify that basic invariants hold under conditions even I wouldn't think of testing manually. I'd check for empty strings, short strings, long strings, strings without spaces, strings with spaces, strings with weird characters, etc. But I might not think of testing with a string that's only spaces. The generator will.
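To illustrate the whitespace-only case: here is a deliberately naive helper (hypothetical names throughout) that survives all the string inputs one might test by hand, and a generator that is biased toward the edge cases a library like Hypothesis would also produce:

```python
import random
import string

def first_word(s):
    """Deliberately naive: assumes the string contains at least one
    non-space character. Crashes with IndexError otherwise."""
    return s.strip().split()[0]

def gen_string(rng):
    """Generator biased toward edge cases: empty, spaces-only,
    and ordinary mixed strings."""
    pool = [
        "",
        " " * rng.randint(1, 5),
        "".join(rng.choices(string.ascii_letters + "  ", k=rng.randint(1, 20))),
    ]
    return rng.choice(pool)

def find_counterexample(trials=200):
    """Property: first_word should not crash on any input.
    Returns a failing input if the generator finds one."""
    rng = random.Random(1)
    for _ in range(trials):
        s = gen_string(rng)
        try:
            first_word(s)
        except IndexError:
            return s  # the generator found the whitespace-only bug
    return None
```

The counterexample it finds strips down to the empty string, i.e. exactly the class of input ("only spaces") that manual example-based testing tends to miss.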
reply
One of the founders of Antithesis gave a talk about this problem last week; diversity in test cases is definitely an issue they're trying to tackle. The example he gave was Spanner tests not filling its cache because random inputs jittered near zero. Avoiding that kind of degenerate randomness appears to be a company goal.

https://github.com/papers-we-love/san-francisco/blob/master/...

reply
Glad you enjoyed the talk! Making Bombadil able to take advantage of the intelligence in the Antithesis platform is definitely a goal, but we wanted to get a great open source tool into peoples’ hands ASAP first.
reply
One thing you can find pretty quickly with just basic fuzzing on strings is Unicode-related bugs.
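For example, even a tiny fuzzer over raw code points will quickly break the common assumption that changing case preserves a string's length (U+00DF "ß" uppercases to "SS" in Python). A self-contained sketch:

```python
import random

def fuzz_length_invariant(trials=5000):
    """Fuzz the (false) invariant len(s.upper()) == len(s) with random
    short strings drawn from the Latin-1/Latin Extended range.
    Returns a counterexample string if one is found."""
    rng = random.Random(42)
    for _ in range(trials):
        s = "".join(chr(rng.randint(0x20, 0x2FF))
                    for _ in range(rng.randint(1, 10)))
        if len(s.upper()) != len(s):
            return s  # e.g. a string containing "ß"
    return None
```

Widening the code-point range (or using a library's Unicode-aware text strategy) turns up similar surprises with normalization, casefolding, and combining characters.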
reply