[TIP] test definitions

Jesse Noller jnoller at gmail.com
Wed Sep 17 12:35:17 PDT 2008


On Wed, Sep 17, 2008 at 2:56 PM, C. Titus Brown <ctb at msu.edu> wrote:
> On Wed, Sep 17, 2008 at 02:38:34PM -0400, Jesse Noller wrote:
> -> On Sep 17, 2008, at 2:01 PM, Grig Gheorghiu <grig at gheorghiu.net> wrote:
> ->
> -> >--- On Wed, 9/17/08, C. Titus Brown <ctb at msu.edu> wrote:
> -> >
> -> >>It's really all about expectations, IMO:
> -> >>
> -> >>Unit and functional tests tell me when the code fails my
> -> >>expectations.
> -> >>
> -> >>Regression tests tell me when something unexpected happens.
> -> >>
> -> >>Acceptance tests tell me when the code fails my
> -> >>customer's expectations.
> -> >
> -> >Couldn't have said it better :-) I sense a blog post there, Titus :-)
> ->
> -> Sure, if he wants to start a holy war - the definitions he's using are
> -> fairly developer centric, rather than testing group specific
>
> No *true* tester would disagree with my definitions! [0]
>
> Seriously, is there a reason you think that my definitions are poor,
> misleading, or how developer-centrism is causing me to miss something?
> Enquiring minds want to know...
>
> --titus
>
> [0] http://en.wikipedia.org/wiki/No_true_Scotsman
> --
> C. Titus Brown, ctb at msu.edu
>

I changed the title just to reflect the topic :)

My response was partially tongue-in-cheek - I'm actually working on a
post related to this very subject. Most of the places I have worked at
are not "fully agile" - in most cases, not even close. Generally
speaking, you have the Dev group and the test engineering group, and
while they work closely together, test engineering has an outside-in
approach, and Dev has the inside-out approach.

Here's a straw man, based on a semi-agile world, where not everyone is
writing in the same language:

Unit Tests: These are tests focused on developer and maintainer
productivity. They are "close to the code" tests that run in mostly
simulated environments. Unit tests are a cornerstone of Agile
methodology - generally speaking, you write these before your code.
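
To make that concrete, here's roughly what a test in this bucket looks
like - the module and function names are invented for illustration:

    import unittest
    from recordlib import parse_record   # hypothetical module under test

    class TestParseRecord(unittest.TestCase):
        def test_parses_valid_record(self):
            # one function, exercised in isolation - no deployed system needed
            record = parse_record("id=1;name=foo")
            self.assertEqual(record["name"], "foo")

        def test_rejects_garbage(self):
            self.assertRaises(ValueError, parse_record, "not a record")

    if __name__ == "__main__":
        unittest.main()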

Smoke/Simulation Tests: These are the "next layer up" - they use
partial systems (e.g. your code plus the module owned by the person
next to you) to run more integration-style testing. Smokes are
normally run on every compilation of the product, along with the unit
tests. They do not require a fully deployed, functioning system - only
a small group of parts. Again, these tests are "close to the code".
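
A sketch of the difference from a unit test - two real pieces wired
together, everything else stubbed out (again, the names are made up):

    import unittest
    from recordlib import parse_record    # your code (hypothetical)
    from recordstore import RecordStore   # your neighbor's module (hypothetical)

    class ParseAndStoreSmokeTest(unittest.TestCase):
        def test_parse_and_store(self):
            # two real components, an in-memory backend instead of a deployed database
            store = RecordStore(backend="memory")
            store.add(parse_record("id=1;name=foo"))
            self.assertEqual(store.count(), 1)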

Acceptance Tests: These normally make up a large number of the tests
in an organization. Acceptance tests prove that a specific
component/feature is sane in the context of the fully deployed product
- you might require these to be fully developed, executed and passing
before a specific component or feature is merged to trunk. Acceptance
tests prove that the feature/component works as intended (not merely
as programmed). They should be short in execution time.
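
Something along these lines, assuming the product exposes an HTTP
interface (the URL and endpoints are obviously made up):

    import unittest
    import urllib2

    DEPLOYED_URL = "http://staging.example.com"   # a full deployment, not a stub

    class SearchAcceptanceTest(unittest.TestCase):
        def test_search_is_sane(self):
            # quick end-to-end sanity check against the fully deployed product;
            # urllib2 raises on HTTP errors, so getting to read() means the call worked
            body = urllib2.urlopen(DEPLOYED_URL + "/search?q=foo").read()
            self.assertTrue("results" in body)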

Functional Tests: Functional tests are "larger" and should test as
much of the functionality of the feature/component as possible. They
should also test with an eye towards other parts of the product and
system (e.g. integration). Functional tests should be as expansive and
detailed as possible. These can also be called regression tests.
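
A functional test of the same feature would walk a longer path through
the system - ingest, reindex, then search - with the same caveat that
every name here is illustrative:

    import unittest
    import urllib2

    DEPLOYED_URL = "http://staging.example.com"

    class SearchFunctionalTest(unittest.TestCase):
        def test_new_record_shows_up_in_search(self):
            # exercise the feature together with the parts it integrates with
            urllib2.urlopen(DEPLOYED_URL + "/ingest", data="id=42;name=widget").read()
            urllib2.urlopen(DEPLOYED_URL + "/reindex", data="").read()
            body = urllib2.urlopen(DEPLOYED_URL + "/search?q=widget").read()
            self.assertTrue("widget" in body)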

Stress/Scalability Tests: These should be self-evident. Stress tests
build on the functional areas to push the product to its limits - how
many files can it hold, how many connections can it withstand, etc.
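
E.g., a crude connection-limit probe (a sketch, not a real harness -
same invented URL as above):

    import urllib2

    DEPLOYED_URL = "http://staging.example.com"

    def probe_connection_limit(cap=10000):
        # keep opening connections until the product gives out or we hit the cap
        held = []
        for n in range(cap):
            try:
                held.append(urllib2.urlopen(DEPLOYED_URL + "/status"))
            except Exception, err:
                print "gave out after %d connections: %s" % (n, err)
                break
        return len(held)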

Performance Tests: Characterization of key performance stats:
objects/second, records parsed/second, and so on.
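
For example, a throughput characterization might look like this
(illustrative endpoint again) - the output is a number to track over
time, not a pass/fail:

    import time
    import urllib2

    DEPLOYED_URL = "http://staging.example.com"

    def records_parsed_per_second(n=1000):
        start = time.time()
        for i in range(n):
            urllib2.urlopen(DEPLOYED_URL + "/parse", data="id=%d;name=x" % i).read()
        return n / (time.time() - start)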

With that out of the way, I know this is a debate of semantics. In
many organizations, the test-engineering team owns everything above
the Smoke/Simulation tests - simply due to a different set of skills.

Titus' list mentioned black-box testing in regards to regression tests
- but how black is that box? In the example I outlined above, the
acceptance tests can be considered the most "black box" of all of
those tests. Functional tests have the ability/need to potentially
reach into system internals to provoke certain behaviors (e.g. corrupt
internal state), and therefore normally have a hook/API into the
product to trigger that behavior without being overly "close to the
code".

Test automation is critical in all of these buckets (hence the APIs to
"reach into" places) - in fact, there are rarely cases where you can't
write some sort of helper script, unless it involves things like
ripping SATA cables out of a box :)

There's the strawman!

-jesse


