
One more example of a measurement. You've just finished the project shown in the table below and you're about to start the next, similar project. On the just-finished project the team found 3 errors when they inspected the PDD and 41 when they inspected the requirements document. When they inspected the user functions design document they found 120 errors, but they realised 3 of them had in fact been errors in the requirements, which had obviously been missed at the requirements inspection.

(Look at the column headed 'UFD': 120 is the number of errors found at the UFD inspections and the 3 just above it is the number the team judged were errors in the requirements document. For example, the requirements said 'invoice amount is quantity x discount%' and the UFD faithfully reproduced 'invoice amount is quantity x discount%', but at the UFD inspection it dawned on the team that it should actually say 'invoice amount is quantity x price x (100-discount%)'. Though few errors are quite that obvious!)

                                     Step in which error found
Step in which error caused      PDD   Req   UFD   ITD  B&UT   Sys  Live     %
PDD                               3     0     0     0     0     0     0  100%
Requirements                           41     3     2     1     0     2   84%
User Functions Design                       120     4     9     7     1   85%
IT Technical Design                               185    49    41     0   67%
Build & Unit Test                                       292    68    11   79%
System & Accept Test                                          141    15   90%
Live Running                                                         15

The IT team found 185 errors in the technical design, and even at this stage 2 requirements errors turned up - errors that had been missed both at the requirements inspections and at the UFD inspections.

If you wait long enough you can even put errors found in live running on the chart. And 2 of the errors found in production were errors in the requirements which had got right through all the nets.

Is there any particular weakness on the production line quality-wise that you would like your team to address before the next project begins?

At the right hand side of the IT technical design row there is a figure of 67% - what is that telling us? When the IT team checked their technical design they found two thirds of the errors; the other third, sitting under their noses in black and white, they missed. Is that good, bad or criminally appalling? It's dreadful. And the project manager should say so and invite the team to fix it before the next project. If it transpired, say, that the IT technical design hadn't been well structured, which made it difficult to check that it all fitted together properly, the project manager might insist that before the next project the team figure out how to construct the IT technical design so that it can be cross-checked properly, and he would be looking for an efficiency nearer 90% next time.
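If you want to see the arithmetic behind that % column, here is a minimal sketch in Python (purely illustrative, using the numbers from the table above): a step's inspection efficiency is the errors caught at the step's own inspection divided by all the errors that step caused, wherever they eventually surfaced.

    # A minimal sketch (illustrative only) of how the % column is derived.
    # errors_caused[step] maps each detection step to the number of errors
    # found there that were traced back to this originating step.
    errors_caused = {
        "PDD":                   {"PDD": 3, "Req": 0, "UFD": 0, "ITD": 0,
                                  "B&UT": 0, "Sys": 0, "Live": 0},
        "Requirements":          {"Req": 41, "UFD": 3, "ITD": 2, "B&UT": 1,
                                  "Sys": 0, "Live": 2},
        "User Functions Design": {"UFD": 120, "ITD": 4, "B&UT": 9, "Sys": 7,
                                  "Live": 1},
        "IT Technical Design":   {"ITD": 185, "B&UT": 49, "Sys": 41, "Live": 0},
        "Build & Unit Test":     {"B&UT": 292, "Sys": 68, "Live": 11},
        "System & Accept Test":  {"Sys": 141, "Live": 15},
    }

    # The column in which each step catches its own errors.
    own_column = {"PDD": "PDD", "Requirements": "Req",
                  "User Functions Design": "UFD", "IT Technical Design": "ITD",
                  "Build & Unit Test": "B&UT", "System & Accept Test": "Sys"}

    for step, found_in in errors_caused.items():
        total = sum(found_in.values())        # every error this step caused
        caught = found_in[own_column[step]]   # caught at the step itself
        print(f"{step}: {caught}/{total} = {caught/total:.0%}")

    # IT Technical Design: 185/275 = 67% - the figure discussed above.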

Identify which step in your process is weakest and improve it - and keep doing that for ever.

You may be thinking that the measurements we have looked at have taken no account of the severity of the errors nor the cost to fix them, and you'd be right - you are starting to devise some of those hundreds of ways of measuring quality-related things. The danger, though, is that one gets carried away devising ever more 'correct' measurements. A crude, even not very accurate, measurement that drives improvement is infinitely preferable to a statistically perfect one that doesn't.

We mentioned a couple of pages ago the sort of measurements the system test team will use: errors found per week (probably broken down by severity), number open, average time to fix, etc. Extrapolating from these numbers after the first few days or weeks of system test should enable good prediction of how long testing is going to take and how many errors will be found in total.
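As a purely illustrative sketch of that extrapolation - all the numbers below, including the assumed 15% weekly decay in the find rate, are invented, and a real team would calibrate against its own historical records:

    # An illustrative sketch of extrapolating from early system test data.
    weekly_found = [38, 34, 29]   # errors found in weeks 1-3 of system test
    expected_total = 160          # predicted from past projects' error rates

    found_so_far = sum(weekly_found)
    remaining = expected_total - found_so_far

    # Naive assumption: the find rate declines by 15% per week from here on.
    rate, weeks_left = weekly_found[-1], 0
    while remaining > 0 and rate > 0:
        remaining -= rate
        rate = int(rate * 0.85)
        weeks_left += 1

    print(f"Found {found_so_far} of ~{expected_total} expected errors; "
          f"roughly {weeks_left} more weeks of system test to go.")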

And really good quality management will enable the following sort of discussion. Suppose at the beginning of the project we try to get commitment of user resources to do thorough user functions design inspections, but for apparently good business reasons they cannot be made available. We will be able to predict how many extra errors this will cause both in system test and in live running. We can then say to the sponsor: "by not investing this £30K's worth of user effort in UFD inspections there will be about 20 extra bugs in the system when it goes live, which will cost you about £200K to sort out - what would you like to do?" Now, the sponsor might decide the current business need is overriding and he'll pay the failure cost. Or he might decide the current business need isn't that important and direct the manager of those users to make them available to get the UFD right. Quality becomes something we can have a business-like discussion about rather than something abstract that is hard to get hold of. But we are approaching a PhD level of sophistication here which, regrettably, all too few software projects aspire to, let alone achieve.
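The arithmetic behind that conversation is deliberately simple - that is rather the point. A sketch, using the text's illustrative figures (the £10K average cost per live error is an assumption inferred from the £200K total, not a number from the book):

    # A back-of-envelope sketch of the trade-off put to the sponsor.
    ufd_inspection_cost = 30_000  # £30K of user effort for thorough UFD inspections
    extra_live_bugs     = 20      # predicted extra errors reaching live without it
    cost_per_live_bug   = 10_000  # assumed average cost to fix an error in live

    failure_cost = extra_live_bugs * cost_per_live_bug
    print(f"Invest £{ufd_inspection_cost:,} now, "
          f"or pay roughly £{failure_cost:,} later.")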


Cause analysis

In the best run projects someone, perhaps the quality leader, gets the team together every couple of weeks to ask: what errors have we made in our work in the last two weeks, what caused us to make them, and what can we do to remove or reduce those error causes? This is not about finding someone to blame for the errors, but we do need to know who made them: for example, if the contractors are all making the same mistakes over and over, perhaps some training is needed to address the cause. The aim is to identify what we as a team can do to eliminate the causes of the errors we are making. These meetings are sometimes called quality circles.

Usually when one starts to run these causal analysis meetings some very 'obvious' error causes are identified which can be fixed quite easily, and everyone wonders why they weren't sorted out ages ago. Sort them out now.


Quality lessons learned review

At the end of the stage a two hour team quality lessons learned meeting is held. The team look back over the whole stage and identify the top two or three causes of errors and work out what could be done to engineer those error causes out of the project process in future. Again, try to avoid heaping all the blame on people who aren't there, and focus on what you can do to improve your part of the process. Though if most of the problems experienced in the stage were in fact caused upstream in an earlier stage the team may want to develop constructive proposals to take to those who were involved in that earlier stage. Primarily, though, the team should be identifying what they can do to remove error causes before the next stage or the next project begins. Equally, when these meetings are first held some quick wins can usually be had - things that can easily be changed that will significantly improve the initial quality of work and/or the timely and effective detection and correction of errors.

Some would strongly advise that the project manager should not be present at causal analysis meetings or these end of stage quality lessons learned meetings as this can inhibit soul baring.

