[TIP] Is there a correlation between code quality and perceived quality of software?

Daniel Knüttel daniel.knuettel at daknuett.eu
Fri Apr 17 02:01:27 PDT 2020


Hi Ivo and PyTesters.

Your study looks pretty interesting, but I am not certain it shows
what you wanted to show: you wanted to see whether code quality is
correlated with software quality, while from what I understand you
correlated code coverage with the number of defects.

The problem, IMO, is that coverage is not really a measure of code
quality, is it? I think you could try running a linter and counting
the number of problems it finds. This might be a better estimator of
code quality (or at least the combination of linting and coverage
might be better). Also, you write in your study that you measured the
complexity (which seems to be a good estimator of code quality), but
you do not use the complexity in the conclusion.
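
To make the linting idea concrete, here is a rough sketch of what I
have in mind (I'm assuming flake8 here, but any linter that prints one
finding per line would do; the "src/" path is just a placeholder):

import subprocess
from collections import Counter

def lint_findings_per_file(path="."):
    # flake8 prints one finding per line, e.g.
    # "pkg/mod.py:12:1: E302 expected 2 blank lines, found 1"
    proc = subprocess.run(["flake8", path],
                          capture_output=True, text=True)
    counts = Counter()
    for line in proc.stdout.splitlines():
        counts[line.split(":", 1)[0]] += 1
    return counts

# e.g. merge these per-file counts into the CSV next to the coverage
# and defect columns
for filename, n in lint_findings_per_file("src/").most_common(10):
    print(f"{n:4d}  {filename}")

The per-file counts could then be correlated with the defect data in
the same way you already do for coverage.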

Another issue seems to be how you measured defects: the software
defects database only gives you defects that have been fixed, right?
This means that parts of the software that have many defects which
have not been fixed yet would be considered to have good quality.
Particularly bad code might even delay the fixing of defects, so
really bad code could appear to have very few defects.

I also hypothesize that coverage and fixed defects may be trivially
connected. How do you fix a bug? You think about what the buggy code
should do and write a test for that, don't you? (At least this is
what I do.) This means that the coverage might be increased precisely
by developers fixing the defects you measure later. So you might have
to consider when and how a test was added to the project: if it was
added together with the fix, you shouldn't include it in your
statistics.
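
Just to sketch what I mean (the helper names and the "test" filename
heuristic are made up, and I'm assuming you already know the fix
commits from the defect ids in the commit messages): a check like
this could flag fixes that bring their own test along in the same
commit, so that coverage added as a consequence of the fix can be
excluded from the statistics.

import subprocess

def files_added_in_commit(repo, commit):
    # --diff-filter=A lists only the files added by this commit
    out = subprocess.run(
        ["git", "-C", repo, "show", "--diff-filter=A",
         "--name-only", "--pretty=format:", commit],
        capture_output=True, text=True, check=True,
    ).stdout
    return [f for f in out.splitlines() if f]

def fix_also_adds_tests(repo, fix_commit):
    # crude heuristic: any newly added file with "test" in its name
    return any("test" in f.lower()
               for f in files_added_in_commit(repo, fix_commit))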

That said: I think the study is really interesting and has great
potential. I hope you will keep updating it. Some graphics would be
nice, too.

Cheers
Daniel

On Thursday, 16.04.2020, 22:50 +0200, Ivo Bellin Salarin wrote:
> Hello everybody,
> 
> I have some years of experience in Software Engineering, and during
> my career I have always felt that "if the code quality is poor, the
> software you get is equally poor". But apparently, this feeling was
> not always mainstream. In the literature, you can find several
> studies trying to correlate code quality indicators with defects in
> production. TL;DR: there is no wide consensus, and even the most
> recent studies reveal only a low correlation between code coverage
> and software defects. Also, all kinds of code quality indicators
> are somewhat proportional to NLOC (number of lines of code).
> 
> Yet, I had this persistent feeling, and I wanted to demonstrate what
> I meant, at least at my scale. So I have produced a set of scripts
> that:
> 1. connect to the software defects repository and extract the
> defect characteristics
> 2. extract the defects from the commits
> 3. aggregate the code coverage, Halstead volume, cyclomatic
> complexity, and software defect data in a single CSV
> 4. perform some simple statistics on this CSV
> 
> I have applied this set of scripts to my own environment, to
> translate my feeling into facts, and I am pretty satisfied with the
> result. You can find it in this repository:
> https://github.com/nilleb/static-quality-study
> 
> But this looks too good to be true. I would like to receive feedback
> on what I have done (a critique of the procedure? of the code?).
> And, if somebody is haunted by the same question, could you please
> give these scripts a try in your environment, so as to compare what
> I got with something else?
> 
> And, lastly, if there are any other static code analysis indicators
> that you usually rely on, I would love to know about them.
> 
> Thanks for your time,
> Ivo
> -- 
> @nilleb | https://www.nilleb.com
> 
> _______________________________________________
> testing-in-python mailing list
> testing-in-python at lists.idyll.org
> http://lists.idyll.org/listinfo/testing-in-python
-- 
Daniel Knüttel <daniel.knuettel at daknuett.eu>



