From skip.montanaro at gmail.com Thu Apr 2 09:32:17 2020 From: skip.montanaro at gmail.com (Skip Montanaro) Date: Thu, 2 Apr 2020 09:32:17 -0700 Subject: [TIP] Test added (and executed) isn't increasing apparent coverage Message-ID: I'm using Ned Batchelder's coverage package to investigate spots in my code which aren't being exercised. Looking at the ,cover file for one module, I saw: > def encode_oparg(tup): > "smash tuple back into oparg int" ! oparg = 0 ! for elt in tup: ! oparg = oparg << 8 | elt ! return oparg Recognizing that my code doesn't normally use that function, I added an explicit example to my unit tests: from rattlesnake import instructions, opcodes, util ... def test_util_encode(self): self.assertEqual(util.encode_oparg((1, 24, 2)), 71682) self.assertEqual(util.encode_oparg(()), 0) (Actually, the first assert was already there. I only added the second.) That changed nothing, which surprised me. I've got a simple Makefile to drive the process: COVERAGE = $(HOME)/.local/bin/coverage RATSRC = $(PWD)/Lib/rattlesnake all : FORCE $(COVERAGE) erase $(COVERAGE) run -a --source=$(RATSRC) $(HOME)/tmp/junk.py $(COVERAGE) run -a --source=$(RATSRC) ./Tools/scripts/run_tests.py -v test_rattlesnake $(COVERAGE) annotate $(COVERAGE) report FORCE : When I run make, this section of output corresponds to running the unit tests: /home/skip/.local/bin/coverage run -a --source=/home/skip/src/python/cpython/Lib/rattlesnake ./Tools/scripts/run_tests.py -v test_rattlesnake /home/skip/src/python/cpython/python -u -W default -bb -E -m test -r -w -j 0 -u all,-largefile,-audio,-gui -v test_rattlesnake == CPython 3.9.0a5+ (heads/register:25d05303a7, Apr 2 2020, 06:27:03) [GCC 9.2.1 20191008] == Linux-5.3.0-42-generic-x86_64-with-glibc2.30 little-endian == cwd: /home/skip/src/python/cpython/build/test_python_14692 == CPU count: 8 == encodings: locale=UTF-8, FS=utf-8 Using random seed 7332863 0:00:00 load avg: 5.25 Run tests in parallel using 10 child processes 0:00:00 load avg: 5.25 [1/1] test_rattlesnake passed test_long_block_function (test.test_rattlesnake.InstructionTest) ... ok test_nop (test.test_rattlesnake.InstructionTest) ... ok test_simple_branch_function (test.test_rattlesnake.InstructionTest) ... ok test_src_dst (test.test_rattlesnake.InstructionTest) ... ok test_trivial_function (test.test_rattlesnake.InstructionTest) ... ok test_util_LineNumberDict (test.test_rattlesnake.InstructionTest) ... ok test_util_decode (test.test_rattlesnake.InstructionTest) ... ok test_util_encode (test.test_rattlesnake.InstructionTest) ... 
ok

Here's the output of the report command:

Name                              Stmts   Miss  Cover
-----------------------------------------------------
Lib/rattlesnake/__init__.py           0      0   100%
Lib/rattlesnake/blocks.py            63      4    94%
Lib/rattlesnake/converter.py        271     11    96%
Lib/rattlesnake/decorators.py        42     42     0%
Lib/rattlesnake/instructions.py     145      3    98%
Lib/rattlesnake/opcodes.py          163      0   100%
Lib/rattlesnake/util.py              38      7    82%
-----------------------------------------------------
TOTAL                               722     67    91%

OTOH, if I extract the calls to decode_oparg and encode_oparg into my junk.py script:

print(decode_oparg(0) == (0,))
print(decode_oparg(71682, False) == (0, 1, 24, 2))
print(encode_oparg((1, 24, 2)) == 71682)
print(encode_oparg(()) == 0)

reported coverage of .../util.py goes up significantly:

Name                              Stmts   Miss  Cover
-----------------------------------------------------
Lib/rattlesnake/__init__.py           0      0   100%
Lib/rattlesnake/blocks.py            63      4    94%
Lib/rattlesnake/converter.py        271     11    96%
Lib/rattlesnake/decorators.py        42     42     0%
Lib/rattlesnake/instructions.py     145      3    98%
Lib/rattlesnake/opcodes.py          163      0   100%
Lib/rattlesnake/util.py              38      2    95%
-----------------------------------------------------
TOTAL                               722     62    91%

Note the execution of test_util_decode and test_util_encode. I think the unit test run should contribute to the overall coverage, but at least some of it appears not to for some reason. Is there something I'm doing wrong in setting up or executing my coverage commands?

Thx,

Skip Montanaro

From marius at gedmin.as Fri Apr 3 02:59:57 2020 From: marius at gedmin.as (Marius Gedminas) Date: Fri, 3 Apr 2020 12:59:57 +0300 Subject: [TIP] Test added (and executed) isn't increasing apparent coverage In-Reply-To: References: Message-ID: <20200403095957.3kuigqcusz2iqoi7@blynas>

On Thu, Apr 02, 2020 at 09:32:17AM -0700, Skip Montanaro wrote:
> When I run make, this section of output corresponds to running the unit tests:
>
> /home/skip/.local/bin/coverage run -a --source=/home/skip/src/python/cpython/Lib/rattlesnake ./Tools/scripts/run_tests.py -v test_rattlesnake
> /home/skip/src/python/cpython/python -u -W default -bb -E -m test -r -w -j 0 -u all,-largefile,-audio,-gui -v test_rattlesnake

Is your run_tests.py executing 'python ... -m test ...' in a subprocess? Have you done the necessary groundwork to enable multi-process coverage tracking? I think not -- I don't see '-p' passed to 'coverage run', nor a 'coverage combine' command in your Makefile.

Do check the documentation:
- https://coverage.readthedocs.io/en/latest/subprocess.html (which page surprises me by not mentioning 'coverage combine', TBH)
- https://coverage.readthedocs.io/en/latest/cmd.html#combining-data-files

HTH, Marius Gedminas -- If all else fails, read the documentation.

From skip.montanaro at gmail.com Sat Apr 4 07:34:16 2020 From: skip.montanaro at gmail.com (Skip Montanaro) Date: Sat, 4 Apr 2020 07:34:16 -0700 Subject: [TIP] Test added (and executed) isn't increasing apparent coverage In-Reply-To: <20200403095957.3kuigqcusz2iqoi7@blynas> References: <20200403095957.3kuigqcusz2iqoi7@blynas> Message-ID:

> > Is your run_tests.py executing 'python ... -m test ...' in a subprocess?
> > Have you done the necessary groundwork to enable multi-process coverage
> tracking?
I think not -- I don't see '-p' passed to 'coverage run', nor > a 'coverage combine' command in your Makefile. > > Do check the documentation: > - https://coverage.readthedocs.io/en/latest/subprocess.html (which > page surprises me by not mentioning 'coverage combine', TBH) > - https://coverage.readthedocs.io/en/latest/cmd.html#combining-data-files Thanks for your help, Marius. I'm still stuck, however. I read the docs, replaced -a with -p and created a sitecustomize.py file in my checkout's Lib directory. Despite exiting with a zero status, this command: ~/.local/bin/coverage run -p --source=${PWD}/Lib/rattlesnake ./Tools/scripts/run_tests.py test_rattlesnake produces no .coverage.HOSTNAME.PID.RANDOM file. This however does: ~/.local/bin/coverage run -p --source=${PWD}/Lib/rattlesnake $(HOME)/tmp/junk.py The lack of any coverage data from running the unit test would seem to be at the root of my problem. Here's my sitecustomize.py file: import coverage print(">> coverage.process_startup()") coverage.process_startup() and when executed, you can see the print() call produces the expected output. Alas, while every Python process clearly executed coverage.process_startup() and the overall test run exited with zero status, no .coverage.* files were produced. % /home/skip/.local/bin/coverage run -p --source=/home/skip/src/python/cpython/Lib/rattlesnake ./Tools/scripts/run_tests.py test_rattlesnake >> coverage.process_startup() /home/skip/src/python/cpython/python -u -W default -bb -E -m test -r -w -j 0 -u all,-largefile,-audio,-gui test_rattlesnake >> coverage.process_startup() Using random seed 3070170 0:00:00 load avg: 3.15 Run tests in parallel using 10 child processes 0:00:00 load avg: 3.15 [1/1] test_rattlesnake passed >> coverage.process_startup() == Tests result: SUCCESS == 1 test OK. Total duration: 462 ms Tests result: SUCCESS (python38) cpython% echo $? 0 (python38) cpython% find . -name '.coverage*' | wc -l 0 Explicitly executing the Python process which runs the test_rattlesnake unit test produces no output despite successful exit status: (python38) cpython% /home/skip/src/python/cpython/python -u -W default -bb -E -m test -r -w -j 0 -u all,-largefile,-audio,-gui test_rattlesnake >> coverage.process_startup() Using random seed 9943159 0:00:00 load avg: 3.92 Run tests in parallel using 10 child processes 0:00:00 load avg: 3.92 [1/1] test_rattlesnake passed >> coverage.process_startup() == Tests result: SUCCESS == 1 test OK. Total duration: 467 ms Tests result: SUCCESS (python38) cpython% echo $? 0 (python38) cpython% find . -name '.coverage*' | wc -l 0 I added --debug=trace to the coverage run command line. 
When processing the junk.py source file it lists the Lib/rattlesnake files I want traced, but not when processing the test_rattlesnake unit test:

(python38) cpython% make -f Makefile.coverage 2>&1 | egrep 'coverage run|Tracing'
/home/skip/.local/bin/coverage run -p --debug=trace --source=/home/skip/src/python/cpython/Lib/rattlesnake /home/skip/tmp/junk.py
Tracing '/home/skip/src/python/cpython/Lib/rattlesnake/__init__.py'
Tracing '/home/skip/src/python/cpython/Lib/rattlesnake/converter.py'
Tracing '/home/skip/src/python/cpython/Lib/rattlesnake/opcodes.py'
Tracing '/home/skip/src/python/cpython/Lib/rattlesnake/blocks.py'
Tracing '/home/skip/src/python/cpython/Lib/rattlesnake/instructions.py'
Tracing '/home/skip/src/python/cpython/Lib/rattlesnake/util.py'
/home/skip/.local/bin/coverage run -p --debug=trace --source=/home/skip/src/python/cpython/Lib/rattlesnake ./Tools/scripts/run_tests.py test_rattlesnake

It does tell me about a bunch of other files it won't be tracing, but doesn't mention my Lib/rattlesnake files at all. If my reading of the documentation is correct, I am now doing what I need to do, but for whatever reason coverage is ignoring the files I explicitly want traced, so I get very incomplete coverage results. Any further insight you have would be appreciated.

Skip

From ned at nedbatchelder.com Sat Apr 4 14:31:45 2020 From: ned at nedbatchelder.com (Ned Batchelder) Date: Sat, 4 Apr 2020 17:31:45 -0400 Subject: [TIP] [coverage.py] concept question about branch coverage, statement coverage, and percent coverage In-Reply-To: References: Message-ID: <45aa65fe-6d34-434d-f992-ae0c251bdb78@nedbatchelder.com>

On 3/31/20 8:19 PM, Enji Cooper wrote:
> Hi,
> I have a general question about a few of the tests that come packaged with coverage.py, related to "line coverage", "statement coverage", and "branch coverage".
> The code snippet from BasicCoverageTest.test_simple is as follows:
>
> """\
> a = 1
> b = 2
>
> c = 4
> # Nothing here
> d = 6
> """
> Per the above example, lines 1, 2, 4, and 6 are executed, but nothing else. The line coverage is 4/4 => 100% and the number of statements executed is also 4/4, resulting in 100% line/statement coverage. There aren't any branches in the above code, so I would put down something like "N/A".
> The asserted report (abbreviated) is as follows:
>
> # The fields in `report` translate to "Statements, Miss, Branch, Partial Branches, Cover"
> lines=[1,2,4,6], report="4 0 0 0 100%"
>
> The test asserts that lines {1, 2, 4, 6} are executed, which is a total of 4/4 lines/statements executed, no statements missed, no branches taken, no partial branches, resulting in 100% line/statement coverage. Seems legit (the expected results match my understanding).
> If my above understanding is correct, let's continue on to the next example.
> The code snippet from CompoundStatementTest.test_elif is as follows:
>
> """\
> a = 1; b = 2; c = 3;
> if a == 1:
>     x = 3
> elif b == 2:
>     y = 5
> else:
>     z = 7
> assert x == 3
> """
>
> In this case, there are 8 total lines, 11 statements (line 1 contains 3 statements) and 4 possible branches (`if a == 1`, `elif b == 2`, `else:` and `assert x == 3`). Since a=1, lines 1-3 and 8 are executed, but none of the other lines are executed. 6 of the 11 statements are executed, and only 2 of the branch statements (the first being `if a == 1`, the second being `assert x == 3`) are executed.
> If my understanding was correct, this should result in 50% line coverage, 54.55% statement coverage, and 50% branch coverage. This, however, isn't what the test expects:
>
> # The fields in `report` translate to "Statements, Miss, Branch, Partial Branches, Cover, Missing"
> lines=[1,2,3,4,5,7,8], missing="4-7", report="7 3 4 1 45% 2->4, 4-7"
>
> I agree on the `lines` and `missing` value, but the rest of the values seem askew:
>
> * There are 10 statements, not 7 statements.
> * There are 4 missed statements/lines, not 3 statements/lines.
> * There are 4 total branches (I agree on that part at least).
> * There is 1 partial branch taken (err... this seems odd... which branch is that?).
> * The overall statement/line coverage is assumed to be 45% (how is that calculated?).
> * The missing lines are 2->4 (a jump statement), and 4-7.
>
> Could someone please explain why these expected values are as noted above?

There are only 7 statements, not 10, for two reasons: first, "else:" doesn't result in any compiled code. Execution never jumps to "else", it jumps directly to the statement after else. Second, Python won't report on multiple statements in a line, so "a = 1; b = 2; c = 3;" is counted as one statement.

There are three missed statements because "else" is not a statement.

The partial branch is "if a == 1", because it was only ever true, never false.

The coverage.py FAQ (https://coverage.readthedocs.io/en/latest/faq.html#q-how-is-the-total-percentage-calculated) explains how the percentage is calculated:

(statements - missed_statements + branch_exits - missed_branch_exits) / (statements + branch_exits)

In this case, that works out to (7 - 3 + 4 - 3) / (7 + 4) = 45%

As the FAQ notes, the coverage.py reports don't show the number of branch exits and missed branch exits. They show the number of branches and number of total branches, so it's a bit hard to reconstruct the calculation from the reports.

--Ned.

> Thank you,
> -Enji
>
> 1. https://www.cs.odu.edu/~cs252/Book/stmtcov.html
> 2. https://www.cs.odu.edu/~cs252/Book/branchcov.html
> 3. https://www.froglogic.com/coco/statement-coverage/

From ned at nedbatchelder.com Sun Apr 5 16:00:53 2020 From: ned at nedbatchelder.com (Ned Batchelder) Date: Sun, 5 Apr 2020 19:00:53 -0400 Subject: [TIP] Test added (and executed) isn't increasing apparent coverage In-Reply-To: References: <20200403095957.3kuigqcusz2iqoi7@blynas> Message-ID: On 4/4/20 10:34 AM, Skip Montanaro wrote: > > Is your run_tests.py executing 'python ... -m test ...' in a > subprocess? > > Have you done the necessary groundwork to enable multi-process > coverage > tracking? I think not -- I don't see '-p' passed to 'coverage > run', nor > a 'coverage combine' command in your Makefile. > > Do check the documentation: > - https://coverage.readthedocs.io/en/latest/subprocess.html (which > page surprises me by not mentioning 'coverage combine', TBH) > - > https://coverage.readthedocs.io/en/latest/cmd.html#combining-data-files > > > Thanks for your help, Marius. I'm still stuck, however. I read the > docs, replaced -a with -p and created a sitecustomize.py file in my > checkout's Lib directory. Despite exiting with a zero status, this > command: > > ~/.local/bin/coverage run -p --source=${PWD}/Lib/rattlesnake > ./Tools/scripts/run_tests.py test_rattlesnake > > > produces no .coverage.HOSTNAME.PID.RANDOM file.
This however does: > ~/.local/bin/coverage run -p --source=${PWD}/Lib/rattlesnake > $(HOME)/tmp/junk.py > > > The lack of any coverage data from running the unit test would seem to > be at the root of my problem. You need to also define the environment variable COVERAGE_PROCESS_START to point to the .coveragerc file to use. You haven't mentioned it, so perhaps this is what is missing? --Ned. From skip.montanaro at gmail.com Mon Apr 6 06:33:53 2020 From: skip.montanaro at gmail.com (Skip Montanaro) Date: Mon, 6 Apr 2020 06:33:53 -0700 Subject: [TIP] Test added (and executed) isn't increasing apparent coverage In-Reply-To: References: <20200403095957.3kuigqcusz2iqoi7@blynas> Message-ID: > You need to also define the environment variable COVERAGE_PROCESS_START to point to the .coveragerc file to use. You haven't mentioned it, so perhaps this is what is missing? I wasn't aware a .coveragerc file was even required. I found no tutorial so created a simple one: [run] branch = False source = /home/skip/src/python/cpython/Lib/rattlesnake [report] then ran my make command. Still, only the first coverage command produced debug output indicating my Lib/rattlesnake package would be covered: % COVERAGE_PROCESS_START=/home/skip/src/python/cpython/.coveragerc /home/skip/.local/bin/coverage run --debug=trace -p /home/skip/tmp/junk.py 2>&1 | grep rattlesnake Tracing '/home/skip/src/python/cpython/Lib/rattlesnake/__init__.py' Tracing '/home/skip/src/python/cpython/Lib/rattlesnake/converter.py' Tracing '/home/skip/src/python/cpython/Lib/rattlesnake/opcodes.py' Tracing '/home/skip/src/python/cpython/Lib/rattlesnake/blocks.py' Tracing '/home/skip/src/python/cpython/Lib/rattlesnake/instructions.py' Tracing '/home/skip/src/python/cpython/Lib/rattlesnake/util.py' When I ran coverage over my unit tests it once again failed to even mention the Lib/rattlesnake files: % COVERAGE_PROCESS_START=/home/skip/src/python/cpython/.coveragerc /home/skip/.local/bin/coverage run --debug=trace -p ./Tools/scripts/run_tests.py test_rattlesnake 2>&1 | grep rattlesnake 0:00:03 load avg: 3.60 [1/1] test_rattlesnake passed I finally figured out that perhaps I couldn't use the command line at all. Getting rid of the -p command line flag and setting "parallel = True" in the config file caused things to start working. I don't know if I'm a dolt or if this 100% reliance on config files when the program being covered spawns subprocesses is something subtle, but I certainly missed the note on this page about it: https://coverage.readthedocs.io/en/coverage-5.0.4/subprocess.html I tend to read the text. My brain views everything else as a sidebar (even stuff in sky blue boxes which say "note"). Maybe the note should be the very first thing in that section? Maybe there's a "warning" directive which would make the note stand out more? Not complaining that what's there is wrong, just that my brain filtered it out. A bit more digging... During this process I stumbled on a section in the Python dev guide about code coverage: https://devguide.python.org/coverage/#measuring-coverage There's no mention of parallel this-n-that, though I notice it has the user running Lib/test/regrtest.py directly (the "old" way) instead of using Tools/scripts/run_tests.py. The latter execs Python as its last order of business, thus losing the parallel setup of coverage.py. So, now I understand what's going on. Thanks for your help. Coverage.py is a great tool.
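For the archives, my setup now boils down to roughly the following sketch (paths trimmed to my checkout; the combine step is the one Marius pointed out, and the exact file names are just what I happen to use):

    # .coveragerc (pointed to by COVERAGE_PROCESS_START)
    [run]
    branch = False
    parallel = True
    source = /home/skip/src/python/cpython/Lib/rattlesnake

    # Lib/sitecustomize.py
    import coverage
    coverage.process_startup()

    # shell (or Makefile recipe) -- note: no -p on the command line
    export COVERAGE_PROCESS_START=/home/skip/src/python/cpython/.coveragerc
    ~/.local/bin/coverage erase
    ~/.local/bin/coverage run --source=/home/skip/src/python/cpython/Lib/rattlesnake ./Tools/scripts/run_tests.py test_rattlesnake
    ~/.local/bin/coverage combine
    ~/.local/bin/coverage report

The pieces that mattered were "parallel = True" in the config file instead of -p on the command line, COVERAGE_PROCESS_START so the spawned test processes start coverage via sitecustomize.py, and combining the per-process data files before reporting.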
Skip

From ernst at pleiszenburg.de Tue Apr 14 08:05:43 2020 From: ernst at pleiszenburg.de (Sebastian M. Ernst) Date: Tue, 14 Apr 2020 17:05:43 +0200 Subject: [TIP] coverage.py vs QGIS Message-ID: <3655df0b-ffc8-52e6-7a99-8766a2941fe6@pleiszenburg.de> Hi all, I am trying to use `coverage.py` inside QGIS (1, 2). QGIS is a large hybrid C++/Python Qt-based application. Long story short: I am running into what appear to be nondeterministic race conditions plus a bunch of other issues. After reading the manual, I am guessing that with respect to QGIS' threads and Qt's event loops coverage.py is not meant to be used in this context ;) Question: Does anybody around here have any experience at all with coverage.py and QGIS or just coverage.py inside larger (C++/Qt/PyQt/hybrid) applications? Best regards, Sebastian 1: https://www.qgis.org 2: https://github.com/qgis/QGIS From skip.montanaro at gmail.com Tue Apr 14 12:01:29 2020 From: skip.montanaro at gmail.com (Skip Montanaro) Date: Tue, 14 Apr 2020 12:01:29 -0700 Subject: [TIP] coverage.py vs QGIS In-Reply-To: <3655df0b-ffc8-52e6-7a99-8766a2941fe6@pleiszenburg.de> References: <3655df0b-ffc8-52e6-7a99-8766a2941fe6@pleiszenburg.de> Message-ID: First and foremost, I think you need to explain more about what you've tried so far. Beyond that, as I've just gone through issues using coverage.py in a multi-process setting, let me refer you to my recent message and the thread it spawned: http://lists.idyll.org/pipermail/testing-in-python/2020-April/007353.html It seems you have a somewhat more complicated setup than I did, but I will point you to this section of the docs: https://coverage.readthedocs.io/en/coverage-5.1/subprocess.html I advise you to read it carefully. I missed some things in my first go-round, in particular this: Measuring coverage in sub-processes is a little tricky. When you spawn a sub-process, you are invoking Python to run your program. Usually, to get coverage measurement, you have to use coverage.py to run your program. Your sub-process *won't be using coverage.py*, so we have to convince Python to use coverage.py even when not explicitly invoked. (emphasis mine) The documentation section on command line execution: https://coverage.readthedocs.io/en/coverage-5.1/cmd.html#execution has this to say about complex multi-threaded/process/event situations: Coverage.py can measure multi-threaded programs by default. If you are using more exotic concurrency, with the multiprocessing, greenlet, eventlet, or gevent libraries, then coverage.py will get very confused. Use the --concurrency switch to properly measure programs using these libraries. Give it a value of multiprocessing, thread, greenlet, eventlet, or gevent. Values other than thread require the C extension. I imagine it might be impossible to achieve perfection, but perhaps with overlapping runs using different values for the --concurrency flag you might get acceptable output. Skip Montanaro On Tue, Apr 14, 2020 at 8:11 AM Sebastian M. Ernst wrote: > Hi all, > > I am trying to use `coverage.py` inside QGIS (1, 2). QGIS is a large > hybrid C++/Python Qt-based application. > > Long story short: I am running into what appear to be nondeterministic > race conditions plus a bunch of other issues.
After reading the manual, > I am guessing that with respect to QGIS' threads and Qt's event loops > coverage.py is not meant to be used in this context ;) > > Question: Does anybody around here have any experience at all with > coverage.py and QGIS or just coverage.py inside larger > (C++/Qt/PyQt/hybrid) applications? > > Best regards, > Sebastian > > > 1: https://www.qgis.org > 2: https://github.com/qgis/QGIS From ernst at pleiszenburg.de Wed Apr 15 02:58:05 2020 From: ernst at pleiszenburg.de (Sebastian M. Ernst) Date: Wed, 15 Apr 2020 11:58:05 +0200 Subject: [TIP] coverage.py vs QGIS In-Reply-To: References: <3655df0b-ffc8-52e6-7a99-8766a2941fe6@pleiszenburg.de> Message-ID: <9d32e808-480c-ba65-6d75-feb7c6bf787e@pleiszenburg.de> Hi Skip, thanks for your reply. > First and foremost, I think you need to explain more about what you've > tried so far. it's a really complex topic, so before wasting too much time on describing details, I was simply trying to ask for any experience (at all). Anyway, here are a few more "details" :) QGIS is a C++/Qt5 app at the moment. At some point in the past, Python scripting capability, a Python console inside the app and a Python plugin system were added, see (1). The QGIS-API-to-Python bridge is based on SIP (2). Python itself is linked into the app through `libpython`, see (3). Therefore there is no Python command that I can substitute with coverage. I am primarily interested in stuff that happens in `utils.py` (4), the `pyplugin_installer` (5) and, last but not least, individual (Python) plugins. For starters, I have tried to add `coverage.py` to a test plugin. QGIS plugins are sort of Python modules which are imported on demand at QGIS runtime. Their basic structure is documented here (6). A simple template can be found here (7) for instance. Plugin loading happens through (a slightly modified version of) `builtins.__import__` (not `importlib`), see here (8) and here (9). My assumption was that I could add `coverage.py` to a plugin's `__init__.py` file so that it would be started when the plugin is imported. I tried something like this (directly within `__init__.py`'s namespace):

```python
import os
import atexit
from coverage import Coverage

_cov = Coverage(source = os.path.dirname(__file__))
_cov.start()
atexit.register(_cov.stop)
```

The use of `atexit` did not work at all, so I experimented with triggering the stop method manually (through QGIS' Python console). This resulted in assertion errors coming from `coverage.py`, see (10). Looking for alternatives, I knew that QGIS does not emit a termination signal or similar to which I could attach the stop function. So I tried to call the stop function from within the plugin's `unload` method inside the plugin's class. `unload` is triggered for every plugin when QGIS wants to unload it (prior to quitting for instance). This resulted in the same assertion error. I assumed that `coverage.py` being started in one file/module and stopped in another (the plugin class does not reside within the `__init__.py` file) may be the problem, so I moved the start of `coverage.py` into the plugin class constructor. This resulted in either QGIS crashing right at the start or no coverage data at all as if almost no line was hit.
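For concreteness, the class-based variant I am describing looks roughly like this (heavily simplified; apart from the `classFactory`, `initGui` and `unload` hooks that QGIS expects, the names are made up and the real plugin does more):

```python
# Heavily simplified sketch of the plugin-class experiment described above.
import os
from coverage import Coverage

def classFactory(iface):
    # called by QGIS when it imports the plugin package
    return CoverageTestPlugin(iface)

class CoverageTestPlugin:
    def __init__(self, iface):
        self.iface = iface
        # starting measurement in the constructor -- the variant that
        # either crashed QGIS or produced almost no coverage for me
        self._cov = Coverage(source=[os.path.dirname(__file__)])
        self._cov.start()

    def initGui(self):
        pass  # menu/toolbar setup omitted

    def unload(self):
        self._cov.stop()   # raises the assertion error mentioned above
        self._cov.save()
```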
As a last line of defense, I moved the `coverage.py` start code from the plugin class constructor to the plugin class `initGui` method. `initGui` is the counterpart to `unload`. It is called by QGIS after it has imported the plugin. Anyway, having moved the start code to `initGui` sort of helped. QGIS did not crash anymore and I saw a few more lines covered (still not every line that definitely run). In general, stopping `coverage.py` did result in rather low (if any at all) coverage. As a workaround, calling `_cov.save()` (and NOT stopping it at all) resulted in reasonable results. Once `coverage.py` is invoked, there is a rather high chance that QGIS will crash when I want to shut it down. The window becomes unresponsive and I see 100% load on one of my CPU cores. If patient, I can observe that `coverage.py` is then still working and agonizingly slowly producing output on stdout (telling me that certain modules were not imported ... letter by letter by tens of seconds each). This description is supposed to briefly illustrate some of my experiments. If relevant, I can share more details, code and output. > https://coverage.readthedocs.io/en/coverage-5.1/subprocess.html > > I advise you to read it carefully. I missed some things in my first > go-round, in particular this: I am pretty sure that I am NOT dealing with (sub-) processes here. There is just one QGIS process (with a Python thread). My educated guess is that the problem is much closer to threads, but not Python's threads in general (or any of the supported libraries listed in the documentation of `coverage.py`). I am *guessing* that it is the interaction between Qt's event loop and its thread(s) and Python's thread that leads to those issues. Maybe there are multiple Python threads, which would sort of explain some of my observations, but I have not figured out how to get this kind of information yet. I am planning to investigate the idea of integrating `coverage.py` directly into QGIS so I can debug both QGIS' Python code and plugins. I have not (yet) tried to add `coverage.py` to `utils.py` (i.e. at the heart of QGIS' Python integration itself). Maybe integrating `coverage.py` somewhere around (3), i.e. on the C++ side where the Python interpreter thread is started, is also a (potentially better) option, though I have no idea how. I hope this helps to understand where I am coming from. Best regards, Sebastian ---- 1: https://github.com/qgis/QGIS/tree/master/python 2: https://pypi.org/project/sip/ 3: https://github.com/qgis/QGIS/blob/master/src/python/qgspythonutilsimpl.cpp#L154 4: https://github.com/qgis/QGIS/blob/master/python/utils.py 5: https://github.com/qgis/QGIS/tree/master/python/pyplugin_installer 6: https://docs.qgis.org/3.10/en/docs/pyqgis_developer_cookbook/plugins/plugins.html#plugin-content 7: https://github.com/planetfederal/qgis-plugin-template/tree/master/pluginname 8: https://github.com/qgis/QGIS/blob/master/python/utils.py#L298 9: https://github.com/qgis/QGIS/blob/master/python/utils.py#L731 10: https://github.com/nedbat/coveragepy/blob/master/coverage/collector.py#L332 From skip.montanaro at gmail.com Wed Apr 15 05:21:56 2020 From: skip.montanaro at gmail.com (Skip Montanaro) Date: Wed, 15 Apr 2020 05:21:56 -0700 Subject: [TIP] coverage.py vs QGIS In-Reply-To: <9d32e808-480c-ba65-6d75-feb7c6bf787e@pleiszenburg.de> References: <3655df0b-ffc8-52e6-7a99-8766a2941fe6@pleiszenburg.de> <9d32e808-480c-ba65-6d75-feb7c6bf787e@pleiszenburg.de> Message-ID: > > QGIS is a C++/Qt5 app at the moment. 
At some point in the past, Python > scripting capability, a Python console inside the app and a Python > plugin system were added, see (1). The QGIS-API-to-Python bridge is > based on SIP (2). Python itself is linked into the app through > `libpython`, see (3). Therefore there is no Python command that I can > substitute with coverage. > I would try creating a .coveragerc file and running your program with COVERAGE_PROCESS_START set to refer to it. See if that provokes some meaningful output.

```python
> import os
> import atexit
> from coverage import Coverage
> _cov = Coverage(source = os.path.dirname(__file__))
> _cov.start()
> atexit.register(_cov.stop)
> ```

> > The use of `atexit` did not work at all, ...

As in didn't produce a call to _cov.stop? I don't see any mention of running coverage.py when Python is embedded, but I do see an open issue about it. What if you try something more basic? Skip the coverage stuff altogether, but register a visible exit handler: atexit.register(print, "goodbye world") If that doesn't work, maybe someone else has an idea how to provoke coverage output at intervals during the run. Perhaps you could schedule calls to _cov.save at intervals during the run from a plugin. Skip From ivo.bellinsalarin at gmail.com Thu Apr 16 13:50:37 2020 From: ivo.bellinsalarin at gmail.com (Ivo Bellin Salarin) Date: Thu, 16 Apr 2020 22:50:37 +0200 Subject: [TIP] Is there a correlation between code quality and perceived quality of a software? Message-ID: Hello everybody, I have some years of experience in Software Engineering, and during my career I always felt like "if the code quality is poor, the software you get is equally poor". But apparently, this feeling was not always mainstream. In literature, you can find several studies trying to correlate code quality indicators to defects in production. TL;DR there is no wide consensus, even most recent studies reveal low correlation between code coverage and software defects. And, all the kinds of code quality indicators are somewhat proportional to NLOC (number of lines of code). Yet, I had this persistent feeling, and I wanted to demonstrate what I meant at least at my scale. So, I have produced a set of scripts that

1. connect to the software defects repository and extract the defects' characteristics
2. extract the defects from the commits
3. aggregate the code coverage, hal volume, cyclomatic complexity, and software defects data in a single CSV
4. perform some simple statistics on this CSV

I have applied this set of scripts to my own environment, to translate my feeling to facts, and I am pretty satisfied with the result. You can find it in the repository https://github.com/nilleb/static-quality-study But, this looks too beautiful. I would like to receive feedback about what I have done (a critique of the procedure? of the code?). And, if somebody is haunted by the same question, could you please give a try to this script in your environment (so as to compare what I got with something else). And, last, if there are any other static code analysis indicators that you usually rely on, I would love to know them. Thanks for your time, Ivo -- @nilleb | https://www.nilleb.com
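P.S. If you want to try this on your own repository, the "simple statistics" in step 4 are conceptually just a correlation over the aggregated CSV, roughly like this (the column and file names here are illustrative, not necessarily the ones my scripts emit):

    import pandas as pd

    # one row per source file: static indicators plus the defect count
    df = pd.read_csv("aggregated-metrics.csv")  # illustrative file name

    # rank correlation between each indicator and the defect count
    for col in ("coverage", "cyclomatic_complexity", "halstead_volume", "nloc"):
        rho = df[col].corr(df["defects"], method="spearman")
        print(f"{col:>22}  Spearman rho = {rho:+.2f}")

Any other simple statistic would do; this is only meant to show the shape of the data.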
From daniel.knuettel at daknuett.eu Fri Apr 17 02:01:27 2020 From: daniel.knuettel at daknuett.eu (Daniel Knüttel) Date: Fri, 17 Apr 2020 11:01:27 +0200 Subject: [TIP] Is there a correlation between code quality and perceived quality of a software? In-Reply-To: References: Message-ID: <8e445b8081b76329a2bfeef93d194f30cf0cacf7.camel@daknuett.eu> Hi Ivo and PyTesters. Your study looks pretty interesting but I am not certain it shows what you wanted to show: You wanted to see whether code quality is correlated with software quality. From what I understand, you correlated code coverage with the amount of defects. The problem is IMO that the coverage is not really a measure of code quality, is it? I think you could try to use a linter and analyze the amount of problems the linter finds. This might be a better estimator for code quality (or at least the combination of linting and coverage might be better). Also you write in your study that you measured the complexity (which seems to be a good estimator for code quality) but in the conclusion you do not use the complexity. Another issue seems to be how you measured defects: The software defects database uses fixed defects, right? This means that parts of the software that have many defects that have not been fixed would be considered to have good quality. Particularly bad code might delay the fixing of defects, meaning really bad code might seem to have very few defects. Also I hypothesize that coverage and fixed defects may be trivially connected. How do you fix a bug? You think about what the buggy code should do and write a test for that, don't you? (At least this is what I do.) This means that the coverage might just be increased by developers fixing the defects you measure later. So you might have to consider when/how the test has been added to the project. If it is connected to the fix you shouldn't include it in your statistics. That said: I think the study is really interesting and has great potential. I hope you will keep updating it. Also some graphics would be nice. Cheers Daniel On Thursday, 16.04.2020 at 22:50 +0200, Ivo Bellin Salarin wrote: > Hello everybody, > > I have some years of experience in Software Engineering, and during > my career I always felt like "if the code quality is poor, the > software you get is equally poor". But apparently, this feeling was > not always mainstream. In literature, you can find several studies > trying to correlate code quality indicators to defects in production. > TL;DR there is no wide consensus, even most recent studies reveal low > correlation between code coverage and software defects. And, all the > kinds of code quality indicators are somewhat proportional to NLOC > (number of lines of code). > > Yet, I had this persistent feeling, and I wanted to demonstrate what > I meant at least at my scale. So, I have produced a set of scripts > that > 1. connect to the software defects repository and extract the > defects' characteristics > 2. extract the defects from the commits > 3. aggregate the code coverage, hal volume, cyclomatic complexity, > and software defects data in a single CSV > 4. perform some simple statistics on this CSV > > I have applied this set of scripts to my own environment, to > translate my feeling to facts, and I am pretty satisfied with the > result. You can find it in the repository > https://github.com/nilleb/static-quality-study > > But, this looks too beautiful.
I would like to receive feedback about > what I have done (a critique of the procedure? of the code?). > And, if somebody is haunted by the same question, could you please > give a try to this script in your environment (so as to compare what I > got with something else). > > And, last, if there are any other static code analysis indicators > that you usually rely on, I would love to know them. > > Thanks for your time, > Ivo > -- > @nilleb | https://www.nilleb.com -- Daniel Knüttel