<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
Thanks everyone for the great responses! We've benefited from
reading the responses.<br>
<br>
> Almost all successful security oriented fuzzing projects have
resulted in burning thousands of CPU hours or more<br>
<br>
I guess I should have stated at the beginning the context our
fuzz testing will be in. We mostly build web applications with
user bases of 1,000 or fewer. It's important for our customer
relationships that we balance how much time we put into testing.
I'm sure our code bases could be more thoroughly tested, but the
reality is that in our use cases, extremely thorough tests have
rapidly declining ROI. 80/20 rule, etc.<br>
<br>
So, the real heart behind this question is: "In our use cases, when
does fuzz testing bring genuine value?"<br>
<br>
> IMO, the optimal solution is to have a pre-built corpus of test
cases which you run as part of your automated testing and then do a
small quantity of additional fuzz testing on top of that.<br>
<br>
This was a really helpful distinction! My main concern was that
fuzz testing would make our automated test runs really long and
therefore delay the quick feedback cycle we want our developers
to benefit from. Having two automated processes, one for each type
of testing, makes a lot of sense.<br>
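To make the two-process idea concrete, here's a minimal stdlib-only sketch of what I have in mind (the <tt>slugify</tt> function and the corpus entries are hypothetical stand-ins for our own code, not anything from Hypothesis or AFL): the corpus runs on every build as ordinary assertions, while the fuzz loop's iteration count is the "how long do you want me to spend?" dial we can keep small for quick feedback and turn up for a nightly job.<br>

```python
import random
import string

def slugify(s):
    # Hypothetical function under test: lowercase and collapse
    # whitespace runs into single dashes.
    return "-".join(s.lower().split())

# Pre-built corpus: known tricky inputs, re-run on every CI build.
CORPUS = ["", "   ", "MiXeD Case", "tab\tand\nnewline"]

def check(s):
    out = slugify(s)
    assert " " not in out      # no raw spaces survive
    assert out == out.lower()  # output is normalized

def test_corpus():
    for s in CORPUS:
        check(s)

def test_fuzz(iterations=200, seed=0):
    # Small amount of additional fuzzing on top of the corpus;
    # `iterations` is the dial. A fixed seed keeps CI reproducible.
    rng = random.Random(seed)
    for _ in range(iterations):
        s = "".join(rng.choice(string.printable)
                    for _ in range(rng.randrange(40)))
        check(s)
```

Any fuzz-found failure would get its input added to <tt>CORPUS</tt> so it becomes a permanent regression test.<br>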
<br>
<blockquote type="cite">
<div>> They should not be used as an excuse for lazy developers
to not write explicit test cases that sufficiently cover their
code.</div>
<div><br>
</div>
I think this one is not only wrong, but harmfully wrong. I utterly
reject its premise.</blockquote>
<br>
I'm surprised by the strength of the reactions to this statement. :)<br>
<br>
FWIW, the "lazy programmers" phrase wasn't meant to be pejorative.
It reflected the fact that we often make engineering tradeoffs,
and it's possible that, with very good fuzz testing in place, some
developers in some situations might decide they don't need to
write explicit tests as thoroughly. It's a tradeoff I could see
myself considering in some situations, and the statement was
originally written to reinforce that the answer to this tradeoff
is "don't do it."<br>
<br>
<blockquote type="cite">In general, I think this entire thesis
starts from a premise of fuzz testing being something that is
kinda useful but you can just throw at your program every now and
then to see if it breaks. Instead I would like people to integrate
it as part of their normal testing work flow, and the results I've
seen so far from people doing so seem to resoundingly back this up.
You get cleaner, more maintainable and more comprehensive test
suites as a result of it, and a degree of software correctness
that was nearly unattainable previously becomes really quite
accessible.</blockquote>
<br>
Ok, I'm convinced. :) Now it's just a matter of figuring out
where, in our use cases, fuzz testing brings genuine ROI. And
that will just take time for us to work out.<br>
<br>
<div class="moz-signature"><br>
<b>Randy Syring</b><br>
<small>Husband | Father | Redeemed Sinner</small><br>
<br>
<i><small>"For what does it profit a man to gain the whole world<br>
and forfeit his soul?" (Mark 8:36 ESV)</small></i>
<br>
<br>
</div>
<div class="moz-cite-prefix">On 10/21/2015 04:41 AM, David MacIver
wrote:<br>
</div>
<blockquote
cite="mid:CADZYRLfEmb4MRDvYPtK1DiffAoY0RemhQKG_Oj4XPQ-k31_dmg@mail.gmail.com"
type="cite">
<div dir="ltr">I've spent most of the last year arguing literally
the exact opposite of every single one of these points as part
of my work on Hypothesis (<a moz-do-not-send="true"
href="http://hypothesis.readthedocs.org/">http://hypothesis.readthedocs.org/</a>),
so obviously I have a few opinions here. ;-)
<div><br>
</div>
<div>
<div>> Fuzz tests are useful as tools for a developer to
run manually</div>
<div><br>
</div>
<div>Almost all successful security oriented fuzzing projects
have resulted in burning thousands of CPU hours or more
(Google's work on FFMPEG literally used a CPU-millennium -
two years continually running on 500 cores). A trivially
automatable system which is actively going out looking for
bugs in your code is not something you should be running
manually on an ad hoc basis, it should have associated
infrastructure.</div>
<div><br>
</div>
<div>> which help identify corner cases in the code not
covered by explicit-branch-based-unit-testing (feel free to
help me with the term I want here).</div>
<div><br>
</div>
<div>OK, I agree with this part. :-) My experience is that the
percentage of projects that don't find bugs they've
previously missed by adding Hypothesis or similar tooling to
them is so close to zero as to be a rounding error.</div>
<div><br>
</div>
<div>> They should not run by default in an automated
testing suite (for performance & time considerations).</div>
<div><br>
</div>
<div>As per above, regularly running fuzzing is a great way to
find new bugs, and it's very hard to do this in a way that
adapts well to changes in your code unless you are running
it as part of your CI.</div>
<div><br>
</div>
<div>Moreover, fuzz tests are <i>great</i> at finding
unanticipated bugs in your code, so not running them as part
of your automated testing is basically just asking for bugs
to creep in that you'll find at a later date instead.</div>
<div><br>
</div>
<div>I also don't think the premise of performance and time
considerations is really valid - most of the time you save a
lot of time by spending a little more time testing - but if
it is then you can tune it down until it's not. Fuzz testing
intrinsically comes with a "How long do you want me to spend
on this?" dial attached to it.</div>
<div><br>
</div>
<div>IMO, the optimal solution is to have a pre-built corpus
of test cases which you run as part of your automated
testing and then do a small quantity of additional fuzz
testing on top of that. Right now the tools for doing this
well in Hypothesis are quite manual unfortunately - making
it more automated is on my list of planned future work - but
corpus based tools like AFL you can basically do it out of
the box.</div>
<div><br>
</div>
<div>In the meantime, adding fuzzing to your normal automated
workflow is a) Something you can do with almost no initial
overhead in the workflows for separate testing
infrastructure and b) Results in your CI becoming an active
participant in the bug finding process. It's worth doing.</div>
<div><br>
</div>
<div>> They should not be used as an excuse for lazy
developers to not write explicit test cases that
sufficiently cover their code.</div>
<div><br>
</div>
<div>I think this one is not only wrong, but harmfully wrong.
I utterly reject its premise. Treating hard work as having
some sort of intrinsic moral worth generally leads you down
paths of bad design and pointless time wasting and should be
resisted wherever possible. Work is valuable because it
achieves useful results, not because it is work.</div>
<div><br>
</div>
<div>
<div>The optimal solution is to get better results with less
work, and fuzz testing tools let you do that. This lets
you write higher quality software, either because you've
written more tests in the same amount of time or because
you've spent less time and got the same quality and
results and got to use the remaining time to focus on
other things. (cf <a moz-do-not-send="true"
href="http://www.drmaciver.com/2015/10/the-economics-of-software-correctness/">http://www.drmaciver.com/2015/10/the-economics-of-software-correctness/</a>)</div>
</div>
<div><br>
</div>
<div>In general, I think this entire thesis starts from a
premise of fuzz testing being something that is kinda useful
but you can just throw at your program every now and then to
see if it breaks. Instead I would like people to integrate
it as part of their normal testing work flow, and the
results I've seen so far from people doing so seem to
resoundingly back this up. You get cleaner, more maintainable
and more comprehensive test suites as a result of it, and a
degree of software correctness that was nearly unattainable
previously becomes really quite accessible.</div>
<div><br>
</div>
<div>Regards,</div>
<div>David</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On 20 October 2015 at 23:03, Randy
Syring <span dir="ltr"><<a moz-do-not-send="true"
href="mailto:randy@thesyrings.us" target="_blank">randy@thesyrings.us</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF"> I recently had a
chat with my team about fuzz testing. The thesis as
proposed is:<br>
<br>
<blockquote type="cite">Fuzz tests are useful as tools
for a developer to run manually which help identify
corner cases in the code not covered by
explicit-branch-based-unit-testing (feel free to
help me with the term I want here). They should not
run by default in an automated testing suite (for
performance & time considerations). They should
not be used as an excuse for lazy developers to not
write explicit test cases that sufficiently cover
their code.</blockquote>
<br>
I'm in interested in feedback on the above. Agree,
disagree, and most importantly, why.<br>
<br>
Thanks.<br>
<div><br>
<b>Randy Syring</b><br>
<small>Husband | Father | Redeemed Sinner</small><br>
<br>
<i><small>"For what does it profit a man to gain the
whole world<br>
and forfeit his soul?" (Mark 8:36 ESV)</small></i>
<br>
<br>
</div>
</div>
<br>
_______________________________________________<br>
testing-in-python mailing list<br>
<a moz-do-not-send="true"
href="mailto:testing-in-python@lists.idyll.org">testing-in-python@lists.idyll.org</a><br>
<a moz-do-not-send="true"
href="http://lists.idyll.org/listinfo/testing-in-python"
rel="noreferrer" target="_blank">http://lists.idyll.org/listinfo/testing-in-python</a><br>
<br>
</blockquote>
</div>
<br>
</div>
</div>
</div>
</blockquote>
<br>
</body>
</html>