[TIP] Nosejobs!

Jesse Noller jnoller at gmail.com
Fri Apr 10 11:12:47 PDT 2009

>>Facelift would identify possible nosejob servers, and if none are
>>available, would provision some from a pool (possibly, say, mosso or
>>EC2 servers) and bring them online. Facelift would then push the job.
> I'm confused; that does not sound asynchronous to me. "Bring them online" seems a very synchronous operation, as does tracking "availability", etc. It also sounds as if you are talking about an all-or-nothing kind of response: the worker-bee can chug away for days, and maybe it is working, or maybe it fell over, or something else bad happened.
> I think I get the gist of your interest, which is not to keep a long-lived network connection open, so perhaps I'm splitting (a)synchronous hairs. :)

Facelift provisions from pre-built images; those images have certain
resources already online, such as the daemon, python and anything else
I need and can pre-allocate on the OS. I've done this before with PXE
booting, and it worked quite well.

And I have done this before - several times, in fact - so no, it's
not all or nothing; it's both polling (from the server to the
workerbee) and push (from the workerbee to the server).
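To make the hybrid concrete, here's a minimal sketch of polling in one direction and pushing in the other, with no long-lived connection held either way. The class and method names are mine for illustration, not anything from nosejobs:

```python
# Hypothetical sketch: the server can poll a workerbee for status, and
# the workerbee pushes status reports on its own schedule. Neither side
# holds a persistent connection; each exchange is a one-shot message.

class WorkerBee:
    def __init__(self, name):
        self.name = name
        self.state = "idle"

    def run_job(self, job_id):
        self.state = "running:%s" % job_id

    def status(self):
        # Answer a poll from the server.
        return {"worker": self.name, "state": self.state}


class JobServer:
    def __init__(self):
        self.reports = []

    def poll(self, bee):
        # Server-initiated: ask the bee how it's doing.
        return bee.status()

    def receive_push(self, report):
        # Worker-initiated: the bee reports in when it has news.
        self.reports.append(report)


server = JobServer()
bee = WorkerBee("bee-1")
bee.run_job("job-42")

polled = server.poll(bee)            # server -> worker poll
server.receive_push(bee.status())    # worker -> server push
```

Either side can go quiet without taking the other down; a missed poll or a missed push is just a logged fault, not a broken pipe.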

I have no desire to spin cycles trying to maintain a constant
connection to maintain state - that way lies infinite amounts of fail.
I don't like fail.

>> Yes, I have looked at alternatives - my goal is to make
>>this as automatic as possible, support multiple languages for tests,
>>and *not* require that a test be written *for* the framework. A test
>>should be as simple as a shell script, or as complex as you want, it
>>should not have to know about the framework running it.
> That sounds all well and good until you get into the details:
> How does it report results? What format, exactly, will it use?
> Does it need to return a specific "exit code", or not? Etc., etc., etc.
> What does it do if resources are not available?
> What happens if the test writer uses "print", or does some kind of "terminal input", or some kind of error dialog pops up (never had that happen with your python interpreter?)?

Again, I'm familiar with the details. You poll the daemon which has
the test runner in a sub-interpreter. The runner can report back, die
an unspeakable death, go out for tacos or something else. I also know
how to avoid dialog boxes for the most part, otherwise - it's trial
and error.
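The "test doesn't know about the framework" contract boils down to something like this sketch: a test is just an executable (shell script, python script, whatever), and the daemon launches it, captures its streams and exit code, and survives whatever the test does. Function and key names here are assumptions for illustration:

```python
# Framework-agnostic runner sketch: the test under execution has no idea
# a framework launched it; we just collect exit code, stdout and stderr,
# and treat a hang past the timeout as an outcome rather than a crash.
import subprocess
import sys

def run_test(argv, timeout=60):
    try:
        proc = subprocess.run(
            argv, capture_output=True, text=True, timeout=timeout
        )
        return {"exit": proc.returncode,
                "stdout": proc.stdout,
                "stderr": proc.stderr,
                "outcome": "completed"}
    except subprocess.TimeoutExpired:
        # The test went out for tacos; record that instead of dying.
        return {"exit": None, "stdout": "", "stderr": "",
                "outcome": "timeout"}

ok = run_test([sys.executable, "-c", "print('hello')"])
bad = run_test([sys.executable, "-c", "import sys; sys.exit(3)"])
```

A shell script, a python script, or a compiled binary all look identical to the daemon: argv in, streams and an exit code out.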

Clients report back (or can be polled) for results encapsulated in
JSON or YAML (since it's cross language and stupid simple). I'd use
protocol buffers if I thought they'd offer me something more. All
communication is secure (SSL) as this is intended to be a highly
distributed system.
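For a sense of what "stupid simple" buys you, here's a sketch of a JSON result envelope. The field names are illustrative guesses, not a nosejobs wire format; the point is that any language can emit and parse it:

```python
# Sketch of a cross-language JSON result envelope. Any worker, in any
# language, serializes a dict like this and sends it over the SSL link;
# the server just parses JSON, no shared library required.
import json

result = {
    "job_id": "job-42",
    "worker": "bee-1",
    "status": "passed",
    "duration_sec": 12.5,
    "artifacts": ["run.log", "chart.png"],
}

wire = json.dumps(result)     # what travels over the (SSL) socket
decoded = json.loads(wire)    # what the server sees on the other end
```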

If resources are unavailable, the client-server (nosejobs) reports
such, and it can be dealt with. If something goes out for tacos, we
deal with it. If a test uses print: I log it, bundle it with the rest
of the results (which is a combo JSON/binary blob, so I can include
charts, graphs, logs/etc in a zip bundle) and so on.
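A sketch of that combo JSON/binary bundle: a zip archive carrying a JSON manifest alongside the captured logs, charts and so on. Filenames and manifest keys here are assumptions, not a spec:

```python
# Bundle sketch: structured results go in a JSON manifest, everything
# binary or bulky (logs, charts, graphs) rides alongside in the same
# zip, so one blob carries the whole story of a run.
import io
import json
import zipfile

def bundle_results(manifest, files):
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("manifest.json", json.dumps(manifest))
        for name, data in files.items():
            zf.writestr(name, data)
    return buf.getvalue()

blob = bundle_results(
    {"job_id": "job-42", "status": "passed"},
    {"run.log": "test output captured from print\n"},
)

# The server side unpacks the same blob:
with zipfile.ZipFile(io.BytesIO(blob)) as zf:
    names = zf.namelist()
    manifest = json.loads(zf.read("manifest.json"))
```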

Results management is more than stdout, stderr and a report, so you
need to be able to deal with both the raw streams and the structured
results. Again, territory I've already been down and had to deal with.

> I'm all for simple tools, but this is not a simple problem, and the complexity can not just be
> "waved" away.

Yes it can. Ok, maybe not, but it's irrelevant as I am starting simple
(nosejobs) and will iterate on a simple design by adding what needs to
be added, when it needs to be added. I'm not going to spend 7 months
designing for anything and everything.

> In the Titusian sense there is always the ever-present "human" who will notice when something
> has gone awry, so the question is one of balance.

Yup - the important thing is to have a constant and clear feedback
loop - log failures - ANY failures - log errors, log broken clients
and broken communications, and report everything centrally, and
constantly. A problem like this shares many attributes with
distributed systems - you have to be able to withstand faults,
transient issues, heterogeneity on the network and so on.
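That "report everything centrally" loop can be sketched with nothing more than the stdlib logging machinery; the logger name and handler here are stand-ins for whatever actually ships records to a central server:

```python
# Sketch of central, constant reporting: every failure on the farm -
# broken clients, broken communications - funnels through one logger.
# The collecting handler stands in for a real "ship it to the server"
# transport; "farm.central" is an illustrative logger name.
import logging

central = logging.getLogger("farm.central")
central.setLevel(logging.DEBUG)

records = []

class CollectingHandler(logging.Handler):
    # Stand-in for a handler that forwards records to a central server.
    def emit(self, record):
        records.append(self.format(record))

handler = CollectingHandler()
handler.setFormatter(
    logging.Formatter("%(levelname)s %(name)s %(message)s")
)
central.addHandler(handler)

central.error("workerbee bee-7 unreachable")
central.warning("transient SSL handshake failure to bee-3; retrying")
```

The human at the end of the loop reads one stream, not twelve machines' worth of scattered logs.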

Ultimately, humans, unless we replace them with cylons, have to be
able to grok what's going on.

> Also, critically, of documentation. One of my huge takeaways from #pycon was not just how much
> good can come from competitive groups (Jython, IronPython, etc.) but also how much FUD there
> is because projects don't do enough to explain their work and make it accessible.

I'm interested in the FUD you speak of, but I agree: I can't be
screwed trying to grok something with abysmal documentation, or
confusing and sadness inducing code.

> What I hear you saying is that it is too hard to figure out if some existing tools do what you want,
>so it is easier to just go off and invent your own.

No, what you hear me saying is "I've tried all of this other stuff,
and while I might adapt some of it, it doesn't fit my needs, it
doesn't scale, it's obtuse, broken, or not applicable". What I am
trying to say is "I have an itch, it's in a sensitive place and I'd
like to scratch it once and for all". I've built several systems (and
multiple iterations of each) like this, and I'm tired of doing it
again and again; ergo, I am going to do it and open source it, so I
don't have to do it again.

> I'm really looking forward to seeing what pony-build might be, so I hope Titus gets around to spec'ing his version of simple build pony stuff soon. :)
> Oh well, rambled off there at the end, will stop now. :)

Meh, Titus' idea is simple enough that a spec might just muddy it up
;) As it is, I threw the nosejobs out there as the starting point of
something larger I want to do. I don't have all the answers yet - but
I have a lot of ideas. I'm also not interested in debating semantics
until doomsday - I'm a fan of development-by-fiat.

Now, when I start pyBikeshed, we can all join in ;)


More information about the testing-in-python mailing list