The following is the description of a functionality that could be named 'Effort evaluation', based on the 'code churn' metric.
It could be implemented as a widget or a plugin.
The objective is to show:
. between any 2 versions: you must be able to choose 2 different snapshots among all existing ones
. the total for each snapshot and the variation (absolute & %) of added / updated / deleted components
. for different metrics: to be defined, but at least the number of components, LOC and CC
. and the distribution of CC according to 3 or 4 thresholds, for instance Simple, Medium, Complex, Very Complex (see the bucketing sketch after this list)
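For the distribution, something like this minimal Java sketch could bucket each component by its CC; the thresholds used here (10 / 20 / 50) are pure assumptions for illustration and would of course need to be configurable:

enum ComplexityBucket { SIMPLE, MEDIUM, COMPLEX, VERY_COMPLEX }

final class ComplexityClassifier {
    // assumed thresholds: <= 10 Simple, <= 20 Medium, <= 50 Complex, above that Very Complex
    static ComplexityBucket classify(int cc) {
        if (cc <= 10) return ComplexityBucket.SIMPLE;
        if (cc <= 20) return ComplexityBucket.MEDIUM;
        if (cc <= 50) return ComplexityBucket.COMPLEX;
        return ComplexityBucket.VERY_COMPLEX;
    }
}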
For example, between V4 and V5, the total number of components goes from 1 000 to 1 600, which means a variation of 600 components / 60% (the same is done with LOC, CC and any other metric; a small computation sketch follows the breakdown below).
This variation is based on :
. 1 000 added components, of which 600 Simple, 300 Medium, 60 Complex and 40 Very Complex
. 400 deleted components, of which ...
. X updated components, of which ...
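Here is the kind of computation meant above, as a minimal Java sketch (the Variation class and its method are hypothetical names, not an existing API):

// A rough sketch of the variation computation for one metric between two snapshots.
final class Variation {
    static String describe(String metric, int before, int after) {
        int delta = after - before;
        double percent = before == 0 ? 0.0 : 100.0 * delta / before;
        return String.format("%s: %d -> %d (%+d / %+.0f%%)", metric, before, after, delta, percent);
    }
}

With the V4/V5 example, Variation.describe("components", 1000, 1600) returns "components: 1000 -> 1600 (+600 / +60%)".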
Now, based on a matrix where you define an effort in man/days for added / updated / deleted components according to their level of complexity, you can calculate an estimated effort between the 2 versions.
For example, if adding a component costs: Simple = 0.2 m/d, Medium = 0.5 m/d, Complex = 1 m/d, Very Complex = 2 m/d,
the evaluated effort for the added components in the example would be 600 x 0.2 + 300 x 0.5 + 60 x 1 + 40 x 2 = 120 + 150 + 60 + 80 = 410 man/days.
Add the same effort for updated / deleted components and you have the total estimated effort between the 2 versions.
Add a financial value for the man/day and you have the total estimated cost.
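To make the matrix idea concrete, here is a minimal Java sketch; it reuses the ComplexityBucket enum from the earlier sketch, and ChangeKind, EffortMatrix and the rates shown are illustrative assumptions, not a fixed design:

import java.util.EnumMap;
import java.util.Map;

enum ChangeKind { ADDED, UPDATED, DELETED }

final class EffortMatrix {
    // man/days per component, indexed by change kind and complexity bucket
    private final Map<ChangeKind, Map<ComplexityBucket, Double>> rates =
            new EnumMap<>(ChangeKind.class);

    void setRate(ChangeKind kind, ComplexityBucket bucket, double manDays) {
        rates.computeIfAbsent(kind, k -> new EnumMap<>(ComplexityBucket.class))
             .put(bucket, manDays);
    }

    // sum(rate x count) over all complexity buckets for one change kind
    double effortFor(ChangeKind kind, Map<ComplexityBucket, Integer> counts) {
        double total = 0.0;
        Map<ComplexityBucket, Double> row = rates.getOrDefault(kind, Map.of());
        for (Map.Entry<ComplexityBucket, Integer> e : counts.entrySet()) {
            total += row.getOrDefault(e.getKey(), 0.0) * e.getValue();
        }
        return total;
    }
}

class EffortDemo {
    public static void main(String[] args) {
        // rates from the example above
        EffortMatrix matrix = new EffortMatrix();
        matrix.setRate(ChangeKind.ADDED, ComplexityBucket.SIMPLE, 0.2);
        matrix.setRate(ChangeKind.ADDED, ComplexityBucket.MEDIUM, 0.5);
        matrix.setRate(ChangeKind.ADDED, ComplexityBucket.COMPLEX, 1.0);
        matrix.setRate(ChangeKind.ADDED, ComplexityBucket.VERY_COMPLEX, 2.0);

        // added components from the example above
        Map<ComplexityBucket, Integer> added = new EnumMap<>(ComplexityBucket.class);
        added.put(ComplexityBucket.SIMPLE, 600);
        added.put(ComplexityBucket.MEDIUM, 300);
        added.put(ComplexityBucket.COMPLEX, 60);
        added.put(ComplexityBucket.VERY_COMPLEX, 40);

        System.out.println(matrix.effortFor(ChangeKind.ADDED, added)); // prints 410.0
    }
}

Doing the same for UPDATED and DELETED and summing the three gives the total estimated effort described above.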
This cannot be very precise when compared to the real effort/cost, but it gives an indication. For instance, if an outsourcer says it costs him 200 man/days and you see only 80 man/days ... this would need some explanation.
Once, a new application was 3 months late and the customer asked for an audit. Analyzing 2 development versions showed that 50% of the code had been deleted (and also a high % of added/updated components) because the requirements changed during the development, so the effort and the delay were justified.
It is also useful for benchmarking project teams (in-house or outsourced). For instance, team A is always late and people are not happy, but you can justify it; team B is never late but has in fact added a lot of very complex components, so people should not really be happy.
This is based on the evaluation process that most outsourcers follow when doing an estimation for an RFP: evaluate the functionalities, distribute them between S/M/C/VC components and quantify the effort.
Normally, the matrix where you define this should be organized by technology (this is why you need an administration screen, and why this is probably a plugin).
You also have to think carefully about what a component is for each technology: probably a method for Java, a proc/function for PL/SQL, and probably a program for Cobol or Abap (a paragraph or procedure does not make sense for these technologies).
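As a rough sketch of what such a per-technology configuration could look like (the technology keys, component kinds and the fallback are illustrative assumptions):

import java.util.HashMap;
import java.util.Map;

final class TechnologyConfig {
    // what counts as a "component" for each technology (illustrative mapping)
    private static final Map<String, String> COMPONENT_KIND = new HashMap<>();
    static {
        COMPONENT_KIND.put("java", "method");
        COMPONENT_KIND.put("plsql", "procedure/function");
        COMPONENT_KIND.put("cobol", "program");
        COMPONENT_KIND.put("abap", "program");
    }

    // one effort matrix per technology, edited from the administration screen
    private final Map<String, EffortMatrix> matrices = new HashMap<>();

    EffortMatrix matrixFor(String technology) {
        return matrices.computeIfAbsent(technology, t -> new EffortMatrix());
    }

    static String componentKindFor(String technology) {
        // assumed fallback when a technology has no specific definition
        return COMPONENT_KIND.getOrDefault(technology, "file");
    }
}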
Do not hesitate to ask for any clarification.