[TIP] Pandokia - more info

Victoria G. Laidler laidler at stsci.edu
Wed Apr 8 13:42:24 PDT 2009


Hi folks,

As promised, I've put some more info about Pandokia, our test management 
& reporting system, on the web at http://stsdas.stsci.edu/pandokia/

In addition to a description of the system (also included in this
message for ease of discussion), I've posted the database schema,
the test result file format, and the screenshots I showed at the TIP
BoF. (These are static screenshots, I'm afraid -- our database is
presently behind a firewall -- but they'll give you a flavor of it.)

Take a look, let us know what you think - Mark & I will do our best to 
respond to any questions, concerns, or suggestions that you have. We are 
planning to release this within a few months, but we have a couple of 
other releases to get out the door first. The source code is not yet on 
a publicly visible repository, but that's in the works as well.

Cheers,
Vicki
                               PANDOKIA
                               ========

                   'All your test are belong to us'


Philosophy
----------

Pandokia is designed as a lightweight test management and reporting system.
It consists of a loosely coupled set of components that:
  - discover test files
  - configure the test environment
  - invoke test runners to run tests contained in test files
  - import test results to a database
  - identify missing test results
  - display browsable (CGI) test reports from the database

Any test runner, reporting tool, or analysis tool that complies with
the interface between components can participate in the system.

We assume two primary use cases for Pandokia:
  - nightly or continuous-integration batch runs of the whole system
  - developer-driven manual runs of parts of the system

Environment, configuration, and disabled tests are managed with
special files, to make it as easy as possible for the developer.


Test identifiers:
----------------

Tests are identified by the tuple (test_run, project, host,
test_name). "Project" is an arbitrary basket containing a
hierarchically organized set of tests. Some test_runs can be given
special names (e.g., daily_YYYY-MM-DD). "Host" presently refers to
the machine on which the tests were run, but in our use case it is
essentially a proxy for "test execution environment", so we intend to
rename it.
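
As a concrete illustration, the identifying tuple might be modeled
like this in Python (the field values and the namedtuple itself are
invented for exposition, not part of Pandokia):

    # Hypothetical model of the identifying tuple described above.
    from collections import namedtuple

    TestId = namedtuple("TestId",
                        ["test_run", "project", "host", "test_name"])

    tid = TestId(
        test_run="daily_2009-04-08",   # a specially named nightly run
        project="some_project",        # arbitrary basket of tests
        host="some_host",              # proxy for execution environment
        test_name="subdir/testfoo.py/test_bar",  # hierarchical name
    )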


Test status
-----------

Pandokia presently supports the following statuses:

 P = Test passed
 F = Test failed
 E = Test ended in error: test did not complete
 D = Test disabled: test was deliberately disabled
 M = Test missing: test was expected to run, but no report was received
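
For illustration only, the codes above could be kept in a simple
mapping; this is a sketch of the enumeration, not Pandokia's actual
implementation:

    # Sketch: the five statuses, keyed by the single-letter codes
    # that appear in result files and reports.
    STATUSES = {
        "P": "passed",
        "F": "failed",
        "E": "error: test did not complete",
        "D": "disabled: deliberately turned off",
        "M": "missing: expected but no report received",
    }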

Tests can be disabled by developers. This can be useful for
chronically failing tests, and for tests that were written and
committed before the code to pass them was written, as in test-driven
development.

A table of expected tests (by identifying tuple) is kept in the
database, and updated whenever the result of a new test is imported
(regardless of whether the test passed, failed, or errored). Any tests
that were expected but not received will be marked missing in the
database.
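
To make the bookkeeping concrete, here is a minimal sketch of how
such a table might be maintained in SQLite; the table and column
names are invented (the real schema is posted at the URL above):

    # Hypothetical expected-tests table: any imported result marks
    # the test as expected in future runs.
    import sqlite3

    con = sqlite3.connect("pandokia.db")
    con.execute("""CREATE TABLE IF NOT EXISTS expected
                   (project TEXT, host TEXT, test_name TEXT,
                    PRIMARY KEY (project, host, test_name))""")

    def record_expected(project, host, test_name):
        con.execute("INSERT OR IGNORE INTO expected VALUES (?, ?, ?)",
                    (project, host, test_name))
        con.commit()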

    Coming soon: the test statuses recognized by the reporter will
    be table-driven, so that users can define custom statuses.


Test attributes
---------------

Pandokia can collect additional information through the population of
test attributes.

Test definition attributes (TDAs) can be used to record input parameters or
other information about the definition of a test such as reference values.

Test result attributes (TRAs) can be used to record more detailed results
than a simple pass/fail status, such as computed values and
discrepancies.

Test configuration attributes (TCAs, coming soon) can be used to record
values of environment variables, software versions for libraries that
were used, and so on.
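
As an illustration, a test might populate definition and result
attributes as plain dictionaries; the tda/tra names and the dict
convention below are assumptions for exposition, not necessarily the
interface Pandokia exposes:

    # Hypothetical test populating TDAs and TRAs.
    tda = {}   # test definition attributes: inputs, reference values
    tra = {}   # test result attributes: computed values, discrepancies

    def compute_flux():
        return 1.2341e-14   # stand-in for the code under test

    def test_flux():
        tda["reference_flux"] = 1.234e-14
        tda["tolerance"] = 1e-4
        tra["computed_flux"] = compute_flux()
        tra["discrepancy"] = abs(tra["computed_flux"]
                                 - tda["reference_flux"])
        assert tra["discrepancy"] < tda["tolerance"]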


The browsable report generator will optionally show these values for a
selected set of tests, and will pull out all identical values for a
given attribute into a table near the top of the report. This makes it
easy to quickly identify what failing tests may have in common.

    The test configuration attributes will probably be filled by
    either the test runner or the meta-runner. Special Environment and
    Dependency files placed in the directory with the tests will list
    the environment variables and software dependencies that should be
    included as TCAs. These files will contain default sections and
    optional named sections corresponding to the named execution
    environments.
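
For example, such an Environment file might look something like the
following; the syntax and section names are guesses at the planned
design, not a published format:

    # Hypothetical Environment file: a default section plus one
    # named section for a specific execution environment.
    [default]
    TEST_DATA = /data/tests
    CDBS = /data/cdbs

    [linux64]
    CDBS = /data/cdbs-linux64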


Internal workflow:
-----------------

Test discovery is performed hierarchically in a directory tree. Each
directory may contain special files specifying the environment,*
dependencies,* contact information, or filename patterns; this
information is applied hierarchically, so that the configuration in a
parent directory applies to its children unless the children override
it.* A test named testfoo.py may be disabled by placing a file in the same
directory named testfoo.disable. The test discoverer will not pass
this file to any test runner.

     At present, only the existence of the file is checked, and the
     disabling applies to all test environments; we are considering
     extending this so that the file content would indicate the set of
     named environments in which the test should be disabled.

     Bookkeeping for disabled tests will be improved once the
     --collect functionality is available in nose.
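
A rough Python sketch of the disable check during discovery (the
helper below is illustrative, not Pandokia's actual discoverer):

    # Hypothetical discovery: walk the tree, match test files, and
    # skip any file with a .disable file sitting beside it.
    import fnmatch
    import os

    def discover(root, pattern="test*.py"):
        for dirpath, dirnames, filenames in os.walk(root):
            for name in filenames:
                if not fnmatch.fnmatch(name, pattern):
                    continue
                base = os.path.splitext(name)[0]
                if base + ".disable" in filenames:
                    continue   # disabled: never handed to a runner
                yield os.path.join(dirpath, name)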

The test meta-runner invokes the correct test runner within an
appropriately configured environment for each test file found
(locally, we use nose and a home-grown system). When processing a
directory tree, multiple test runners can be invoked concurrently, but
only one test runner at a time will be invoked per directory. For
concurrent runs, the various output files are gathered up into a
single file for import.
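
The concurrency rule could be sketched as follows; a toy
illustration with an invented invoke_runner stub, not the actual
meta-runner:

    # Toy sketch: directories run concurrently, but the files within
    # one directory run sequentially.
    import os
    from concurrent.futures import ThreadPoolExecutor

    def invoke_runner(dirpath, name):
        # Stand-in for launching the right test runner on one file.
        return os.path.join(dirpath, name + ".result")

    def run_directory(dirpath, names):
        # Only one runner at a time within a directory.
        return [invoke_runner(dirpath, n) for n in names]

    def run_all(dirs):
        # dirs maps directory -> list of test files; gather all the
        # per-file output paths for a single import.
        with ThreadPoolExecutor(max_workers=4) as pool:
            futures = [pool.submit(run_directory, d, names)
                       for d, names in dirs.items()]
            return [p for f in futures for p in f.result()]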

The importer processes a test result file and uses the information
in it to update the various database tables. The missing test
identifier then compares the tests found in a given run against the
set of expected tests, and inserts records for any missing tests with
a status of missing. If a test report is imported for a test
previously considered missing, the database will be updated accordingly.
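
Conceptually, the missing-test step amounts to a query like this
one, continuing the invented schema from the earlier sketch:

    # Hypothetical: insert an 'M' record for every expected test
    # that produced no result in the given run.
    def mark_missing(con, test_run):
        con.execute("""
            INSERT INTO results
                (test_run, project, host, test_name, status)
            SELECT ?, e.project, e.host, e.test_name, 'M'
            FROM expected e
            WHERE NOT EXISTS
                  (SELECT 1 FROM results r
                    WHERE r.test_run = ? AND r.project = e.project
                      AND r.host = e.host
                      AND r.test_name = e.test_name)""",
            (test_run, test_run))
        con.commit()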

The reporter provides a browsable interface to several reports,
organized by lists or by trees. The user can examine attributes for a
group of tests, compare results to a previous test run, or click
through to an individual test report.

*Planned but not yet implemented.


Interfaces:
----------

Any test runner that produces a Pandokia-compliant test result file
can be used with the test reporting system. (A nose plugin has been
written that produces such a file.)
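
Such a result file might plausibly be a series of key=value records;
the writer below is a guess at the shape of the format, which is
actually documented at the URL above:

    # Hypothetical writer for one test result record.
    def write_record(f, fields):
        for key, value in sorted(fields.items()):
            f.write("%s=%s\n" % (key, value))
        f.write("END\n")   # assumed record separator

    with open("results.out", "w") as f:
        write_record(f, {"test_run": "daily_2009-04-08",
                         "project": "some_project",
                         "host": "some_host",
                         "test_name": "subdir/testfoo.py/test_bar",
                         "status": "P"})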

Any reporter or analysis tool that understands the Pandokia database
schema can be used to query the database, which is presently
implemented in SQLite.
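
For instance, an external analysis tool could query the database
directly; the table and column names here are again placeholders:

    # Hypothetical ad hoc query: count results by status for one run.
    import sqlite3

    con = sqlite3.connect("pandokia.db")
    rows = con.execute("SELECT status, COUNT(*) FROM results "
                       "WHERE test_run = ? GROUP BY status",
                       ("daily_2009-04-08",))
    for status, count in rows:
        print("%s %d" % (status, count))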


Limitations:
-----------

This project is still in development!
It presently relies on UNIX-specific functionality for the discoverer
and meta-runner.


Authors:
-------
Mark Sienkiewicz (STScI) and Vicki Laidler (CSC/STScI)
Science Software Branch, Space Telescope Science Institute





