[TIP] Result protocol
jnoller at gmail.com
Sat Apr 11 19:52:01 PDT 2009
On Sat, Apr 11, 2009 at 6:38 PM, Robert Collins
<robertc at robertcollins.net> wrote:
> On Sat, 2009-04-11 at 23:14 +0100, Michael Foord wrote:
>> Do you have any links to documentation? I can't see links on the
>> launchpad site and a quick google search turned up nothing. (Well, I
>> found this - http://www.robertcollins.net/unittest/subunit/README )
> Old :(.
Launchpad does not make it simple to find things. *sad face*
>> What format is your text result protocol? It looks non-standard,
>> requiring a custom parser for any tool that wants to interoperate
>> with it (something we are keen to avoid).
> Standards happen when people agree on a format :). I do appreciate the
> point that using (say) json means people don't need to write a low level
> parser - but they still need to add semantic interpretation to whatever
> low level vehicle was used.
Yes, no matter the output, the things above that in the stack need to
be able to derive something from it, which means walking the data/etc.
But why deal with yet another parser when there are plenty out there
for free? They're also largely defensive against (or at least
semi-tolerant of) malformed data.
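To illustrate that point (the record shape here is made up for the example, not subunit's actual format): an off-the-shelf JSON parser already gives you the malformed-input tolerance for free.

```python
import json

def parse_result_line(line):
    """Parse one line of a hypothetical JSON-per-line result stream.

    Returns a dict for well-formed records and None for junk, so a
    reader can skip garbage instead of crashing on it.
    """
    try:
        record = json.loads(line)
    except ValueError:  # malformed JSON: tolerate and skip
        return None
    if not isinstance(record, dict) or "test" not in record:
        return None  # parses, but wrong shape: also skip
    return record

stream = [
    '{"test": "test_foo", "status": "success"}',
    'garbage line',
    '{"test": "test_bar", "status": "failure", "details": "boom"}',
]
results = [r for r in (parse_result_line(l) for l in stream) if r]
```

The defensive behavior (skip, don't die) costs three lines here because the low-level parsing is someone else's problem.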
> I'm also happy to change subunit to address concerns [but not without
> some debate first, amongst others samba4 is using the current wire
> protocol to run their tests and it would be nice to not introduce a
> compatibility break]. If we agree that json really is better I could
> move the current protocol into a deprecated mode.
That's your call - I obviously like YAML (or JSON - but I don't
remember if JSON supports multi-line blobs). However, on a personal
note - I still won't be able to touch subunit, due to licensing.
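For what it's worth on the multi-line question: JSON strings can carry multi-line blobs, just with the newlines escaped rather than literal as in YAML's block scalars. A quick sanity check:

```python
import json

# A multi-line blob of the kind a result protocol would need to carry.
traceback_text = (
    "Traceback (most recent call last):\n"
    '  File "t.py", line 1, in <module>\n'
    "AssertionError"
)

encoded = json.dumps({"test": "test_foo", "traceback": traceback_text})
decoded = json.loads(encoded)

# The wire form stays a single physical line (newlines become \n
# escapes), and the blob survives the round trip intact.
assert "\n" not in encoded
assert decoded["traceback"] == traceback_text
```

So the trade-off is readability of the raw stream (YAML wins there), not expressiveness.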
> As for the format, it is a very simple line based format - because both
> human readable and standard don't exist :).
That's why we're here, right? :)
> That said, *outputting* it is totally trivial, and outputting is the
> common case: no one (I hope!) wants to write a test reporting framework
> purely in shell; but many people will have some tests best written in
And C, and C++, and Perl, and Java. I think if a code base lives long
enough, it ends up with tests in every language possible. Which
reminds me, I think we need an OCaml parser ;)
> And, you only need to write a parser at most once for each language -
> once you have the parser emitting events you can plug that into
> whatever code you have in the language. I should in fact write a
> lex/yacc pair for it, which would make it even easier for people.
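A line-based format arguably doesn't even need lex/yacc. Here's a rough sketch of the emit-events-and-plug-in idea (the `test:`/`success:`/`failure:` keywords are illustrative, not subunit's exact grammar):

```python
def parse_lines(lines, on_event):
    """Walk a line-based result stream, calling on_event(kind, name)
    for each recognized line.

    The keyword set is illustrative; a real subunit parser would
    follow the actual protocol grammar.
    """
    keywords = ("test", "success", "failure", "error")
    for line in lines:
        word, _, rest = line.partition(":")
        if word in keywords and rest.strip():
            on_event(word, rest.strip())
        # unrecognized lines pass through silently

events = []
parse_lines(
    ["test: test_foo", "success: test_foo", "random noise"],
    lambda kind, name: events.append((kind, name)),
)
```

The event callback is the "plug into whatever code you have" seam: the same parser feeds a reporter, a TestResult adapter, or a log writer.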
Still, let's try to pick something built on a standard syntax. If some
language lacks bindings, sure, we can do templating/parsing magic.
> A much larger problem, and what I think poses much more of an issue than
> deserialisation is that all the rich inspection of code people are used
> to doing locally on their machine - clocking, profiling, debugging,
> logfiles attached with the traceback, stderr output [when they are not
> already capturing that to a log file] - has absolutely no home in
> unittest today - nor does it have a home in TAP (because TAP is even
> less structured than subunit), nor can the unittest frameworks in C/C++
> that I have seen handle these things.
This is why I pretty much chucked the idea of TAP early on, along with
keeping within the unit test rules. I can use a unit test runner to
provide some of the data, but as you can see in the other emails - I
want to have a much more detailed report. Additionally, the executor
component would save off arbitrary test output, which would be linked
to in that report.
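A report entry in that style might carry pointers to the saved output rather than inlining it. Purely hypothetical shape (the field names and paths here are invented for illustration):

```python
import json

# Hypothetical report record: the executor saves stdout/logs to disk
# and the report links to them instead of embedding the blobs inline.
record = {
    "test": "test_network_timeout",
    "status": "failure",
    "duration": 2.31,
    "artifacts": {
        "stdout": "results/run-42/test_network_timeout/stdout.txt",
        "log": "results/run-42/test_network_timeout/test.log",
    },
}
line = json.dumps(record)  # one line on the wire, rich report behind it
```

Keeping blobs out of the stream keeps the wire protocol small while still letting the report link to arbitrary captured output.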
> The thing to spend time on is how to get this extra information
> reasonably across to the TestResult such that a wire protocol can even
> conceptually handle it.
I think I'm trying to solve something else, but which touches on some
of what you're talking about. I know that the system I want to build
has some of the attributes of subunit, and so I would use that as a
bit of a reference.
In my case, I don't really want to make everything look like a unit
test test case - I just want something that collects, runs and reports
on tests of any ilk, so long as those tests obey some basic rules, not
too far from what you've done - but again, I don't want to stick to
icky unit testing rules :)
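Those basic rules could be as thin as "be an executable, exit 0 on pass, write whatever you like to stdout". A sketch of a collector built on that assumption (the contract itself is my guess, not a spec from this thread):

```python
import subprocess
import sys

def run_test(argv):
    """Run one test of any ilk: any executable obeying the minimal
    contract (exit status 0 means pass) can participate, regardless
    of the language it is written in."""
    proc = subprocess.run(argv, capture_output=True, text=True)
    return {
        "command": argv,
        "status": "success" if proc.returncode == 0 else "failure",
        "output": proc.stdout,  # saved off for the report
    }

# A trivial "test", itself just a subprocess that exits 0:
result = run_test([sys.executable, "-c", "print('ok')"])
```

Shell, C, Perl, or OCaml tests all fit the same contract, since the collector only sees a process and an exit status.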
More information about the testing-in-python mailing list