[TIP] Code analysis tools and Hudson

Mark Roddy markroddy at gmail.com
Wed Apr 28 07:21:12 PDT 2010


On Wed, Apr 28, 2010 at 6:12 AM, Nicolas Trangez
<Nicolas.Trangez at sun.com> wrote:
> On Wed, 2010-04-28 at 12:04 +0200, Arve Knudsen wrote:
>> On Wed, Apr 28, 2010 at 11:19 AM, Nicolas Trangez
>> <Nicolas.Trangez at sun.com> wrote:
>>
>>         On Wed, 2010-04-28 at 11:13 +0200, Arve Knudsen wrote:
>>         > Hi
>>         >
>>         >
>>         > Does anyone have any examples of integrating code analysis of Python
>>         > code with Hudson? We'd like to supplement our Python unit testing with
>>         > static analysis, i.e. linting. It'd also be interesting to hear about
>>         > which tools are popular for this purpose :)
>>
>>
>>         I've been using the Hudson 'Violations' plugin with great success
>>         in several projects, using PyLint for static code checking.
>>
>>         PyLint execution was performed in a custom step of the Hudson job.
>>
>>         More info at [1] and [2].
>>
>>
>> Thank you very much :) Do you have any opinion on the usefulness of
>> e.g. PyChecker versus PyLint?
>
> I don't really use PyChecker. I do use PyFlakes integrated in my Vim
> setup, since it's faster than PyLint, but in my experience PyLint,
> although slower than other code checkers, finds the most mistakes (and
> fewer false positives), which is why I prefer it when running
> 'offline' (e.g. in a CI setup).
>
>>
>>         We use Nose's XUnit output as well (for which Hudson has built-in
>>         support).
>>
>>
>> We also use Nose's xUnit output, but I'm wondering how compatible it is
>> with Hudson, since on the project status page Hudson reports "no tests"
>> for the Latest Test Result. All tests were successful; isn't it supposed
>> to say "no failures" (or something along those lines)?
>
> Not sure, works fine here... Just make sure you point to the correct
> location of the output XML file(s).
>
> Nicolas

I can second that.  I occasionally run pychecker manually, as I find
its false-positive rate too high to be worthwhile to run as part of
our CI (on the code base I work on, at least).

-Mark


