[TIP] Nosejobs!

Doug Philips dgou at mac.com
Fri Apr 10 10:37:46 PDT 2009


On or about Thursday, April 09, 2009, at 02:51PM, Jesse Noller indited:
>I was there :)

Cool!

>What py.test is doing is significantly different than what I am trying
>to do. In my case, it has to be fully asynchronous, distributed
>amongst multiple physical sites, and so on.

>Facelift would identify possible nosejob servers, and if none are
>available, would provision some from a pool (possibly, say, mosso or
>EC2 servers) and bring them online. Facelift would then push the job.

I'm confused: that does not sound asynchronous to me. "Bring them online" seems a very synchronous operation, as does tracking "availability", etc. It also sounds as if you are talking about an all-or-nothing kind of response: the worker-bee can chug away for days, and maybe it is working, or maybe it fell over, or something else bad happened.

I think I get the gist of your interest, which is not to keep a long-lived network connection open, so perhaps I'm splitting (a)synchronous hairs. :)
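The distinction might be easier to see in code. Here is a toy sketch (the names are mine, not anything from Facelift or Nosejobs) of submit-then-poll dispatch: the caller gets a job id back immediately and checks in later, so nothing blocks and no connection spans the life of the job.

```python
import threading
import time
import uuid


class JobDispatcher:
    """Toy submit-then-poll dispatcher. submit() returns a job id at
    once; the caller polls for status later instead of waiting on a
    long-lived connection while the job runs."""

    def __init__(self):
        self._status = {}
        self._lock = threading.Lock()

    def submit(self, work):
        job_id = str(uuid.uuid4())
        with self._lock:
            self._status[job_id] = "pending"

        def run():
            with self._lock:
                self._status[job_id] = "running"
            try:
                work()
                result = "passed"
            except Exception:
                result = "failed"
            with self._lock:
                self._status[job_id] = result

        threading.Thread(target=run, daemon=True).start()
        return job_id  # returns immediately; the job may not have started yet

    def poll(self, job_id):
        with self._lock:
            return self._status.get(job_id, "unknown")


dispatcher = JobDispatcher()
jid = dispatcher.submit(lambda: time.sleep(0.1))
print(dispatcher.poll(jid))   # "pending" or "running", not a final answer
time.sleep(0.5)
print(dispatcher.poll(jid))   # "passed"
```

Note that the "maybe it fell over" problem above is exactly what this model leaves open: a worker that dies silently just stops updating its status, so a real system would also need heartbeats or a timeout.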

> Yes, I have looked at alternatives - my goal is to make
>this as automatic as possible, support multiple languages for tests,
>and *not* require that a test be written *for* the framework. A test
>should be as simple as a shell script, or as complex as you want, it
>should not have to know about the framework running it.

That sounds all well and good until you get into the details:
How does it report results? What format, exactly, will it use?
Does it need to return a specific "exit code" or not? And so on.
What does it do if resources are not available?
What happens if the test writer uses "print", reads some kind of terminal input, or an error dialogue pops up (never had that happen with your Python interpreter?)?
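Concretely, even a "framework-agnostic" runner has to pick answers to those questions. A minimal sketch (the conventions here are my own guesses, not Jesse's design): treat exit code 0 as a pass, capture stdout/stderr so stray "print" calls aren't lost, close stdin so a prompt for terminal input fails fast, and impose a timeout so a hung test can't wedge the worker.

```python
import subprocess
import sys


def run_test(cmd, timeout=60):
    """Run an arbitrary command as a 'test'. Conventions (assumed, not
    from any spec): exit code 0 means pass; stdout/stderr are captured;
    stdin is /dev/null so reads hit EOF instead of hanging; a timeout
    catches tests that never finish."""
    try:
        proc = subprocess.run(
            cmd,
            stdin=subprocess.DEVNULL,  # no terminal: input() fails fast
            capture_output=True,       # stray prints are kept, not lost
            text=True,
            timeout=timeout,
        )
        status = "pass" if proc.returncode == 0 else "fail"
        return {"status": status, "exit_code": proc.returncode,
                "stdout": proc.stdout, "stderr": proc.stderr}
    except subprocess.TimeoutExpired:
        return {"status": "timeout", "exit_code": None,
                "stdout": "", "stderr": ""}


result = run_test([sys.executable, "-c", "print('ok')"])
print(result["status"], result["stdout"].strip())  # pass ok
```

Every line of that is a policy decision the test writer ends up having to know about, which is the point: "the test should not have to know about the framework" really means "the framework quietly imposes conventions the test must happen to follow."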

I'm all for simple tools, but this is not a simple problem, and the complexity cannot just be "waved" away.
In the Titusian sense there is always the ever-present "human" who will notice when something has gone awry, so the question is one of balance.

Also, critically, of documentation. One of my huge takeaways from #pycon was not just how much good can come from competitive groups (Jython, IronPython, etc.) but also how much FUD there is because projects don't do enough to explain their work and make it accessible. 
What I hear you saying is that it is too hard to figure out if some existing tools do what you want, so it is easier to just go off and invent your own.
I'm really looking forward to seeing what pony-build might be, so I hope Titus gets around to spec'ing his version of simple build pony stuff soon. :)

Oh well, rambled off there at the end, will stop now. :)

-Doug






More information about the testing-in-python mailing list