[TIP] Thoughts on Fuzz Testing
jonasthiem at googlemail.com
Tue Oct 20 17:42:26 PDT 2015
Just chiming in on the fuzz-tests-in-the-regular-test-suite discussion:
On Wed, Oct 21, 2015 at 1:43 AM, Mark Waite <mark.earl.waite at gmail.com> wrote:
> If no one is watching the automated tests for success and failure, then
> automated execution of randomization tests is not helpful, since
> randomization tests won't fail the same way on each test run. However, if
> no one is watching the tests for success and failure, why run the tests?
I think a common reason to have unit tests is to ensure that no
regressions happen, so people have an automated build system (e.g.
Jenkins) flare a red light at them when something fails, and they
know their recent commit(s) broke something.
In that scenario, as opposed to running a fuzz test manually on a
local machine, it can be hard to see afterwards in sufficient detail
what went wrong, since that depends on how much data the build system
captures during the test (usually only the standard output text, as
far as I'm aware). When you run a fuzz test locally, you are usually
prepared to gather and observe more data, or you run the program in a
debugger to start with.
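One way to close that gap a bit is to log the random seed so that a red
CI run can be replayed locally under a debugger. Here's a minimal sketch;
the parse() stub, the FUZZ_SEED variable, and the input sizes are all my
invention for illustration, not an established convention:

```python
import os
import random
import unittest

def parse(data):
    # Stand-in for the real code under test; rejects non-ASCII bytes.
    if any(b > 127 for b in data):
        raise ValueError("non-ASCII byte")
    return data.decode("ascii")

class FuzzParseTest(unittest.TestCase):
    def test_random_inputs(self):
        # Pick a fresh seed unless one was given; report it on failure so
        # a CI red light can be reproduced with FUZZ_SEED=<seed> locally.
        seed = int(os.environ.get("FUZZ_SEED", random.randrange(2**32)))
        rng = random.Random(seed)
        for _ in range(200):
            data = bytes(rng.randrange(256) for _ in range(rng.randrange(32)))
            try:
                parse(data)
            except ValueError:
                pass  # expected rejection of malformed input
            except Exception as exc:
                self.fail("crash with seed %d on %r: %r" % (seed, data, exc))
```

That way the only thing the build system has to preserve is the seed in
the standard output, and the exact failing inputs can be regenerated.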
So I guess that might explain why some devs don't add fuzzing to the
regular unit test line-up, especially since fuzzing is of limited use
unless you leave it running for a longer amount of time. That can be
a problem if your project is huge and already has a large number of
regular regression tests that take a long time. (You want the run to
complete, on average, before the next commit arrives, after all.)
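If you do want a fuzzing stage in the suite anyway, one compromise is to
give it a fixed time budget so the overall run still finishes between
commits. A rough sketch; the fuzz_for() helper, its signature, and the
32-byte input cap are assumptions of mine, not a real library API:

```python
import random
import time

def fuzz_for(budget_seconds, target, seed=None):
    """Feed random byte strings to target until the time budget expires.

    Returns (iterations, failures); each failure records the seed and
    input so it can be replayed in a longer local run.
    """
    if seed is None:
        seed = random.randrange(2**32)
    rng = random.Random(seed)
    deadline = time.monotonic() + budget_seconds
    iterations, failures = 0, []
    while time.monotonic() < deadline:
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(32)))
        try:
            target(data)
        except ValueError:
            pass  # tolerated rejection of bad input
        except Exception as exc:
            failures.append((seed, data, exc))
        iterations += 1
    return iterations, failures
```

In a CI job you might call fuzz_for(30.0, parse) as one test among the
regression tests, and run the same helper for hours on a local machine.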
I think many devs will still prefer to simply run fuzzing manually
for a longer amount of time, in some observed manner, on their local
machines for those reasons. I'm not saying that having fuzzing in an
automated test suite is wrong; I'm just adding my two cents that I
understand if some folks find it's not that practical for some projects.