Category Archives: Testing

Walking the test documentation tightrope.

Balancing the level of detail and formality in your test documents can be tricky, especially for a tester who has not been battle-hardened by the experience of several different projects.

Over the years I have seen and used many different methods for my testing documents. Some work really well; others seem like a good idea when you start out but quickly become very cumbersome to use and maintain.

In a perfect world there would be one format for a test plan, and one for a test case, that everyone could use. Everyone would rejoice, and the Von Trapp family would continue to sing new and exciting songs as part of their daily lives.

Appropriate levels of testing documentation

In my experience I have seen (and used) two extremes of testing documentation. During the dotcom boom I managed a great technical test team where we had no formal test documentation to speak of. There was documentation, just not very much of it around the planning and execution of tests.

What gave the customer confidence, and kept them happy, was that there was a ton of reporting around the defects we were finding, along with the verbal updates we would continuously provide. I now refer to this as “Post-it™ test planning”.

In direct contrast to Post-it™ test planning is the test planning method favoured by waterfall methodologies, where step-by-step test documents are written so that anyone’s mother could test the application with little or no prior knowledge.

The ideal level of formality sits somewhere between these two extremes.

A good approach is to iterate through your tests, adding detail as you go. Ideally, you should be able to stop when you can answer yes to all of the following, as required for your project or organisation:

  • Is the test repeatable by any of our testers, so that anyone can execute the test and get the same result?
  • Is there enough detail so that I can satisfy an audit?
  • Can I know with absolute certainty what has, and has not, been tested?
  • Are the test cases in a format that can be quickly and easily updated, e.g. Excel or a database?
  • Can I easily answer any questions about current progress, or coverage levels?

To conclude, I will present some fictitious test cases for Notepad to show the difference between the approaches.

A good test case example

Application: Notepad

Test Area: File Save Dialogue

Requirement: REQ0342

Test: Verify that Notepad can successfully save a file when the length of the file name is greater than 255 characters, plus the .txt at the end.

A bad test case example

Application: Notepad

Test Area: File Save Dialogue

Requirement: REQ0342

Step 1: Press the start button.

Step 2: Select run.

Step 3: Type “Notepad.exe” and press return.

Step 4: Verify that Notepad opens successfully.

Step 5: Type “This is a test” into the new document.

Step 6: Select the File menu, then Save.

Step 7: Enter a file name that contains 255 characters, then add the .txt to the end of the file name.

Step 8: Press Save.

Step 9: Verify the file saved successfully, with no errors.

Both tests achieve the same results, but one is much more manageable, and requires significantly less time to write.
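
The concise version also maps straight onto an automated check. Below is a minimal sketch in Python with pytest, given purely as an illustration of my own: it writes through the file system rather than driving Notepad’s Save dialogue, and the exact name length and expected outcomes are assumptions, not part of REQ0342.

```python
import pytest

# Illustrative sketch only: it exercises the file system directly rather than
# Notepad's File Save dialogue, and the name length and expected outcomes
# below are assumptions rather than anything stated in REQ0342.
def test_save_file_with_255_char_name_plus_txt(tmp_path):
    long_name = ("a" * 255) + ".txt"         # 255 characters plus ".txt"
    target = tmp_path / long_name

    try:
        target.write_text("This is a test")  # the "save"
    except OSError:
        # Many file systems cap a single name component at 255 bytes,
        # so the save may be rejected outright on this platform.
        pytest.skip("file system rejected the long file name")

    # If the save was accepted, the content should round-trip intact.
    assert target.read_text() == "This is a test"
```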


Seven tenets of software testing

In both this MSDN Magazine article and this episode of The .NET Show, Don Box introduced four fundamental tenets for developing service-based or connected systems.

  • Boundaries are explicit
  • Services are autonomous
  • Services share schema and contract, not class
  • Service compatibility is determined based on policy

That inspired me to develop my own list of guiding principles that apply to software testing. These tenets document some key lessons learned over the years working as a Test Manager, Senior Consultant and Development Manager for various software development shops:

  • You can’t test everything so you have to focus on what is important.
  • If you are going to run a test more than once, it should be automated (see the sketch below).
  • Test the product continuously as you build it.
  • Base your decisions on data and metrics, not intuition and opinion.
  • To build it, you have to break it.
  • Apart from Test-Driven Development, a developer should never test their own software.
  • A test is successful when the software under test fails.
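
To give a flavour of the second tenet, here is a minimal, hypothetical sketch using Python and pytest (chosen purely for illustration): a check that would otherwise be repeated by hand on every build, written down once as a parametrised test. The word_count function and its cases are invented for this example and are not from any real project.

```python
import pytest

def word_count(text: str) -> int:
    """Toy function under test: counts whitespace-separated words."""
    return len(text.split())

# A check you might otherwise repeat by hand on every build,
# captured once and then run by the machine from that point on.
@pytest.mark.parametrize("text, expected", [
    ("", 0),
    ("hello", 1),
    ("This is a test", 4),
    ("  leading and trailing   spaces  ", 4),
])
def test_word_count(text, expected):
    assert word_count(text) == expected
```

Once a check like this is written down, a build server can run it on every change, which also goes a long way towards the third tenet.
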
In a series of future posts I will expand on these tenets, explaining them in detail and providing links to reference material, hopefully giving you something helpful to use on your own projects.
