[TIP] Thoughts on Fuzz Testing

David MacIver david at drmaciver.com
Wed Oct 21 01:41:21 PDT 2015


I've spent most of the last year arguing literally the exact opposite of
every single one of these points as part of my work on Hypothesis (
http://hypothesis.readthedocs.org/), so obviously I have a few opinions
here. ;-)

> Fuzz tests are useful as tools for a developer to run manually

Almost all successful security-oriented fuzzing projects have involved
burning thousands of CPU hours or more (Google's work on FFMPEG literally
used a CPU-millennium: two years running continually on 500 cores). A
trivially automatable system which actively goes out looking for bugs
in your code is not something you should be running manually on an ad hoc
basis; it should have associated infrastructure.

> which help identify corner cases in the code not covered by
explicit-branch-based-unit-testing (feel free to help me with the term I
want here).

OK, I agree with this part. :-) My experience is that the percentage of
projects that don't find bugs they've previously missed by adding
Hypothesis or similar tooling to them is so close to zero as to be a
rounding error.
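To make the "finds bugs you've previously missed" claim concrete, here is a
hand-rolled sketch of the idea using only the standard library rather than
Hypothesis itself. The `truncate` function and its off-by-`limit` bug are
invented for illustration: a random fuzz loop checks a simple property
("the output never exceeds the limit") and stumbles over the corner case
that a hand-written, branch-based test suite could easily miss.

```python
import random

def truncate(s, limit):
    """Shorten s to at most `limit` characters, adding '...' if cut.
    Buggy on purpose: when limit < 3, s[:limit - 3] is a negative
    slice and the result can be far longer than the limit."""
    if len(s) <= limit:
        return s
    return s[:limit - 3] + "..."

def check_property(s, limit):
    # The property we believe should always hold.
    return len(truncate(s, limit)) <= limit

# A minimal fuzz loop: generate random inputs, record any that
# violate the property.  A fixed seed keeps the run reproducible.
rng = random.Random(0)
failures = []
for _ in range(1000):
    s = "".join(rng.choice("ab ") for _ in range(rng.randrange(0, 20)))
    limit = rng.randrange(0, 10)
    if not check_property(s, limit):
        failures.append((s, limit))
```

Hypothesis does the same thing with far better input generation and
automatic shrinking of failing examples, but even this crude loop finds
the `limit < 3` corner case within a handful of iterations.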

> They should not run by default in an automated testing suite (for
performance & time considerations).

As per above, regularly running fuzzing is a great way to find new bugs,
and it's very hard to do this in a way that adapts well to changes in your
code unless you are running it as part of your CI.

Moreover, fuzz tests are *great* at finding unanticipated bugs in your
code, so not running them as part of your automated testing is basically
just asking for bugs to creep in that you'll find at a later date instead.

I also don't think the premise of performance and time considerations is
really valid - most of the time you save a lot of time by spending a little
more time testing - but if it is then you can tune it down until it's not.
Fuzz testing intrinsically comes with a "How long do you want me to spend
on this?" dial attached to it.

IMO, the optimal solution is to have a pre-built corpus of test cases which
you run as part of your automated testing, and then do a small quantity of
additional fuzz testing on top of that. Right now the tools for doing this
well in Hypothesis are unfortunately quite manual (making them more
automated is on my list of planned future work), but with corpus-based
tools like AFL you can basically do it out of the box.
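The corpus approach can be sketched in a few lines of stdlib Python (the
helper and the division-by-zero bug below are invented for illustration;
real tools like AFL persist the corpus to disk and mutate existing entries
rather than generating from scratch): replay every previously-found failing
input first, then spend a small, bounded budget looking for new ones, and
fold any fresh failure back into the corpus so future runs replay it too.

```python
import random

def replay_corpus_then_fuzz(check, corpus, gen, n_new, seed=0):
    """Replay saved failing inputs, then try n_new fresh random inputs.
    New failures are appended to the corpus so the next run replays
    them deterministically.  Returns all failing inputs seen."""
    rng = random.Random(seed)
    failures = [x for x in corpus if not check(x)]
    for _ in range(n_new):
        x = gen(rng)
        if not check(x) and x not in corpus:
            corpus.append(x)
            failures.append(x)
    return failures

def no_crash(x):
    """The property: the code under test shouldn't raise."""
    try:
        1 / x  # buggy code under test: no guard against zero
        return True
    except ZeroDivisionError:
        return False

corpus = [0]  # the one failing input found by an earlier fuzzing run
bad = replay_corpus_then_fuzz(no_crash, corpus, lambda r: r.randrange(-5, 5), 50)
```

The important property is that the replay phase is deterministic and cheap,
so regressions on known-bad inputs are caught on every single CI run, while
the fresh-fuzzing phase stays small enough not to slow the suite down.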

In the meantime, adding fuzzing to your normal automated workflow a)
requires almost no initial overhead and no separate testing
infrastructure, and b) turns your CI into an active participant in the
bug-finding process. It's worth doing.
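One low-overhead way to do this, sketched below with only the standard
library (the test name and seed choice are my own, not a prescribed
convention), is to write the fuzz run as an ordinary test function: any CI
job that runs the suite then automatically does a small amount of fuzzing,
with a fixed seed keeping the run deterministic between commits.

```python
import json
import random

def test_json_roundtrip():
    """An ordinary test function that happens to fuzz: it checks that
    dumping and re-loading random dicts is lossless for 200 inputs."""
    rng = random.Random(42)  # fixed seed => deterministic CI runs
    for _ in range(200):
        data = {str(rng.randrange(100)): rng.random()
                for _ in range(rng.randrange(5))}
        assert json.loads(json.dumps(data)) == data
```

Because it is just another test, there is no scheduling or infrastructure
to set up, and the example budget (here 200) is the dial you turn if the
suite ever gets too slow.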

> They should not be used as an excuse for lazy developers to not write
explicit tests cases that sufficiently cover their code.

I think this one is not only wrong, but harmfully wrong. I utterly reject
its premise. Treating hard work as having some sort of intrinsic moral
worth generally leads you down paths of bad design and pointless time
wasting and should be resisted wherever possible. Work is valuable because
it achieves useful results, not because it is work.

The optimal solution is to get better results with less work, and fuzz
testing tools let you do that. This lets you write higher quality software,
either because you've written more tests in the same amount of time or
because you've spent less time to get the same quality of results and can
use the remaining time to focus on other things. (cf
http://www.drmaciver.com/2015/10/the-economics-of-software-correctness/)

In general, I think this entire thesis starts from a premise of fuzz
testing being something that is kinda useful but that you can just throw at
your program every now and then to see if it breaks. Instead I would like
people to integrate it as part of their normal testing workflow, and the
results I've seen so far from people doing so resoundingly back this up.
You get cleaner, more maintainable, and more comprehensive test suites as a
result, and a degree of software correctness that was previously nearly
unattainable becomes really quite accessible.

Regards,
David

On 20 October 2015 at 23:03, Randy Syring <randy at thesyrings.us> wrote:

> I recently had a chat with my team about fuzz testing.  The thesis as
> proposed is:
>
> Fuzz tests are useful as tools for a developer to run manually which help
> identify corner cases in the code not covered by
> explicit-branch-based-unit-testing (feel free to help me with the term I
> want here).  They should not run by default in an automated testing suite
> (for performance & time considerations).  They should not be used as an
> excuse for lazy developers to not write explicit tests cases that
> sufficiently cover their code.
>
>
> I'm interested in feedback on the above.  Agree, disagree, and most
> importantly, why.
>
> Thanks.
>
> *Randy Syring*
> Husband | Father | Redeemed Sinner
>
>
> *"For what does it profit a man to gain the whole world and forfeit his
> soul?" (Mark 8:36 ESV)*
>
>
> _______________________________________________
> testing-in-python mailing list
> testing-in-python at lists.idyll.org
> http://lists.idyll.org/listinfo/testing-in-python
>
>

