Delivering good quality systems on schedule and on budget is every software project manager's goal. Early experience triggered Mike Harding Roberts' interest in quality and led him to some techniques that can contribute to project success.
We have all read about the costly consequences of computer systems that don't work. But how do you avoid these problems - how do you deliver good quality software? And is it difficult or expensive, and does it take a long time?
Early personal experience triggered my interest in quality. One of my first jobs was system testing. It became obvious the programmers hadn't bothered to check their work and expected us in the test team to find their mistakes. It annoyed me that they were getting away with this - not least because they were paid a lot more than I was at the time.
On a later project user requirements had not been properly defined. When writing a program you'd reach a point of not knowing what should happen next, so you asked the users. But they were so overwhelmed we had to devise a queuing mechanism: you put your name on the bottom of the list and when you got to the top - several days later - it was your turn. Trouble was, the answer you got was often at odds with the answer given to another programmer by another user, and nothing much worked when we came to testing. The project took rather a long time.
So when I started to manage projects I insisted we did it right, and checked that it was right, at every stage.
There is no technique that can be applied at just one single time during development that will ensure a defect free system. Every developer knows this. A prerequisite of quality in the delivered system is quality at each preceding stage.
How do you ensure good quality at each stage? There are no magic bullets, just simple good practice.
The foundation has to be good project definition: this stage defines the business need/opportunity, selects an outline solution approach and defines project scope. (Are we building an information-only website, an interactive site, or a site fully integrated with back-office order handling systems?) Without this solid foundation the project is in danger of severe subsidence during construction.
Next we must define detailed user requirements. Every item of business data (e.g. account number) that will be handled by the system is defined. Every business process the system must perform is defined (validations, lookups, calculations). The contents of every business output (letters, invoices) are defined. It should be possible to execute the defined business process. This is the key to a most valuable quality assurance technique.
Meetings to read through and inspect the business requirements are good at finding spelling mistakes but not so good at determining the completeness and correctness of the defined business process.
Simulations can be very effective at finding gaps and inconsistencies in the business requirements. Test cases are prepared. For example we make up a customer name and address and customer number and whatever other data will go into, say, the order handling process. We execute the defined process by hand. If the requirements say 'multiply item quantity by item price' we do it with a calculator. We write down the results and any other information the requirements say must be recorded. We produce by hand whatever outputs are required, such as invoices. Just as a person would do if administering the business by hand, just as the computer will eventually do. The simulation continues until every part of the business process has been exercised.
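The hand simulation described above can be sketched in code. This is a minimal illustration, not anything from the article's own projects: the process steps, field names and data values (customer number, quantity, unit price) are all made up for the example, just as the simulation team makes up its test cases.

```python
# Sketch of a requirements simulation: execute the defined business
# process against invented test data and record the required outputs.
# All names and rules here are hypothetical examples.

def simulate_order(order):
    """Walk through the defined order handling process, step by step."""
    # Step 1 (from the requirements): validate the order.
    if not order.get("customer_number"):
        # A gap like this is exactly what simulation is meant to expose:
        # the requirements must say what happens next.
        raise ValueError("requirements gap: no customer number - what now?")
    # Step 2: 'multiply item quantity by item price' for each order line.
    lines = [(item["name"], item["quantity"] * item["unit_price"])
             for item in order["items"]]
    # Step 3: record whatever the requirements say must be recorded.
    total = sum(amount for _, amount in lines)
    # Step 4: produce the required output - here, an invoice.
    return {"customer": order["customer_number"],
            "lines": lines,
            "total": total}

# A made-up test case, as the article describes: invented customer and data.
order = {"customer_number": "C1001",
         "items": [{"name": "widget", "quantity": 3, "unit_price": 2.50},
                   {"name": "gadget", "quantity": 1, "unit_price": 10.00}]}
invoice = simulate_order(order)
print(invoice["total"])  # 17.5
```

In a real simulation the "program" is people with calculators and the requirements document; the point is the same - every step must be executable, and any step that isn't defined stops the walkthrough.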
Usually the first attempt at simulation breaks down as it becomes obvious the business process is incompletely defined - it does not say what happens next. Simulations feel expensive but it is cheaper to find errors now than later in the project. Finding errors early also reduces risk: the later you make a change - even a change to correct an error - the greater the risk of introducing a new error.
Who performs the simulation? The project team: business users, design team and the quality leader.
The benefits are many. Apart from the primary goal of identifying and forcing resolution of errors and omissions, simulation forces the team to understand the end to end business process. It also generates test data that will be reused later.
Inevitably some issues remain open at the scheduled end of the requirements stage. The project manager must judge whether they are significant enough to warrant delaying stage completion until they are resolved.
Without a simulation many problems would have remained concealed, to be found more expensively later. And if you can't do a simulation at all because user requirements aren't defined in sufficient detail then you certainly haven't finished the requirements stage.
I met someone recently who told me his project had not been able to define the requirements for a particularly tricky part of the system by the scheduled requirements stage end date. They started the design stage anyway - claiming the requirements stage had been finished - saying they'd come back to the tricky bit later. When they did come back to it, much later, 'finished' parts of the design began to unravel, causing much longer delays than if they had been honest and delayed requirements stage completion.
Getting the requirements complete and correct will probably make a bigger contribution to a quality end product than all the other quality assurance techniques put together.
The design stage produces a design the business users can understand: what will the system look like, what will it do and when, and what will it produce? When designing, it helps to bear in mind testability: an ultra-clever design may be satisfying for the technical whiz kids but may give the test team a headache. If it's easy to test you'll probably find more of the errors. Keep it simple.
Team co-location helps: screens, windows, invoices, etc can be designed more rapidly and dynamically using prototyping techniques if the user and IT people are sitting together.
As each part of the design is completed it is inspected. When the whole design is completed it is simulated. By now issues unresolved at the requirements stage will have been resolved. These areas get special attention during design simulation to ensure resolution was real and not just apparent.
Experience of doing the requirements simulation will enable better simulation now. Better usually means combinations of data that exercise more of the logic. (Let's see what happens if the credit limit field is blank...)
Design simulations also test the logic and data that are only there to make the system work internally. System interfaces are simulated. Ask other teams: what would your system do if it received an interface with these data values on it? I recall showing another team a list of data elements that would be on an interface. They agreed these were the right data elements. It was only when we did the design simulation and passed them an interface with actual data values that we learned a blank date would cause their system to crash. We changed our design. That one would otherwise have surprised us in system test.
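The blank-date incident above illustrates why interfaces should be simulated with actual data values, not just agreed field lists. The sketch below shows the kind of defensive handling that such a simulation forces the teams to agree on; the record layout and field names are hypothetical.

```python
# Sketch of interface handling after a design simulation exposed that
# a blank date crashed the receiving system. Field names are invented.

from datetime import date

def parse_interface_record(record):
    """Parse one record received over a system interface.

    The design simulation passed a record with a blank order_date and
    the receiving system fell over. The agreed fix: decide explicitly
    what a blank date means, rather than letting the parser crash.
    """
    raw_date = record.get("order_date", "").strip()
    if not raw_date:
        # Agreed behaviour after simulation: blank means 'not yet known'.
        order_date = None
    else:
        order_date = date.fromisoformat(raw_date)
    return {"customer": record["customer"], "order_date": order_date}

print(parse_interface_record({"customer": "C1001", "order_date": ""}))
```

Agreeing a list of data elements checks only the shape of the interface; pushing real values through it, as the simulation does, checks the behaviour.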
Who does the design simulations? The users, designers, key development team members, the quality leader and test team members. By the end of the simulation, the test team have the understanding they'll need to test the system. They will also have the test data used for simulation.
It's a funny thing, but people will sometimes balk at spending a few days doing a design simulation, but won't bat an eyelid at the prospect of spending the same or much more time system testing.
Having finished design and resolved the issues, the developers can do their program design. It is useful to inspect test cases at the same time as program designs to verify that 100% of the program logic will be exercised by the tests - and ensure test cases exist. It can also mean people design to make testing easier.
At the end of the design stage the quality leader metamorphoses into the test team leader. In parallel with program design and development, he and the test team are busy creating system test data and expected results. Sometimes, having worked out what result is expected according to the design, it is obvious the result is incorrect - an error in the design has been found. The correction can be made perhaps even before the program is written.
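The expected-results approach can be sketched like this. The discount rule and the numbers are invented for illustration; the point is that the test team derives the expected totals from the design, by hand, before any code exists, and the program is then judged against them.

```python
# Sketch of the expected-results approach. The test team works out
# inputs and expected outputs from the design before the program is
# written. The pricing rule here is a hypothetical example:
# orders of 100 items or more get a 10% discount.

# Expected results, derived by hand from the design:
test_cases = [
    {"quantity": 10,  "unit_price": 2.0, "expected_total": 20.0},
    {"quantity": 100, "unit_price": 2.0, "expected_total": 180.0},
    {"quantity": 150, "unit_price": 1.0, "expected_total": 135.0},
]

def price_order(quantity, unit_price):
    """The program, written later, must reproduce the expected results."""
    total = quantity * unit_price
    if quantity >= 100:
        total *= 0.9
    return total

for case in test_cases:
    actual = price_order(case["quantity"], case["unit_price"])
    assert actual == case["expected_total"], (case, actual)
print("all expected results matched")
```

Working out the expected results is where errors surface early: if the hand-calculated answer looks wrong, the design is wrong, and it can be corrected before a line of code is written.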
System test should, in theory, not take very long: just run the tests and see if the system produces the expected results - all the time consuming thinking has already been done. In practice it's never quite like that! But inspections, simulations and the expected results approach will find errors before testing starts which will shorten testing elapsed time.
So, is this approach difficult or expensive, and does it take a long time?
Difficult only in that it requires planning, pro-activity and discipline, but easier in that the project should be more predictable and there should be less panic at the end.
Expensive? Well, I'd have to agree with Crosby - it's at least free: what you invest at the requirements and design stages you should at least get back in reduced development and test costs. Not to mention the benefits of a better quality system.
Will it take a long time? No. It is all too easy in a systems project to skate through the early stages, apparently on schedule, then spend a very long time testing out the errors and testing in the right requirements. Doing it right at each stage should make the project take less time. But you need to be strong to resist the siren voices saying 'never mind all that checking, just get on with the next stage'.
Quality management is one of the topics covered in the free online Project Management Book written by Mike Harding Roberts.
Copyright M Harding Roberts