[TIP] Is there a correlation between code quality and perceived quality of software?
Ivo Bellin Salarin
ivo.bellinsalarin at gmail.com
Thu Apr 16 13:50:37 PDT 2020
I have several years of experience in software engineering, and during my
career I have always felt that "if the code quality is poor, the software
you get is equally poor". Apparently, though, this feeling is not
universally shared. In the literature you can find several studies trying
to correlate code quality indicators with defects in production. TL;DR:
there is no wide consensus; even the most recent studies reveal a low
correlation between code coverage and software defects, and most code
quality indicators are roughly proportional to NLOC (number of lines of
code).
Still, the feeling persisted, and I wanted to demonstrate what I meant, at
least at my own scale. So I have produced a set of scripts that
1. connect to the software defects repository and extract the defects
2. extract the defects from the commits
3. aggregate the code coverage, Halstead volume, cyclomatic complexity, and
software defects data into a single CSV
4. perform some simple statistics on this CSV
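To make the last two steps concrete, here is a minimal sketch of the kind
of analysis I mean: read the aggregated CSV and compute a Pearson
correlation between one metric and the defect counts. The column names and
sample data are illustrative, not the actual schema used by the scripts in
the repository.

```python
import csv
import io
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical aggregated CSV: one row per file, one column per indicator.
SAMPLE = """file,coverage,cyclomatic_complexity,defects
a.py,0.90,3,0
b.py,0.55,12,4
c.py,0.70,8,2
d.py,0.40,20,7
"""

rows = list(csv.DictReader(io.StringIO(SAMPLE)))
complexity = [float(r["cyclomatic_complexity"]) for r in rows]
defects = [float(r["defects"]) for r in rows]
print("complexity vs defects:", round(pearson(complexity, defects), 3))
```

In the real scripts the CSV comes from step 3, and you would run the same
correlation against each indicator column (coverage, Halstead volume,
complexity) to see which, if any, tracks the defect counts.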
I have applied this set of scripts to my own environment, to turn my
feeling into facts, and I am pretty satisfied with the result. You can find
it in the repository https://github.com/nilleb/static-quality-study
But this looks too good to be true. I would like to receive feedback about
what I have done (a critique of the procedure? of the code?). And, if
somebody is haunted by the same question, could you please give this script
a try in your environment, so as to compare what I got with something
comparable? Last, if there are any other static code analysis indicators
that you usually rely on, I would love to hear about them.
Thanks for your time,
@nilleb | https://www.nilleb.com