Testing Tenets

Tenet: A test is successful when the software under test fails.

This post is the seventh and final post in a seven part series covering my seven tenets of software testing.

As I discussed in tenet five, all software has bugs, and the goal of testing is to find them.

One of the ironies of testing is that when a test case runs without error we give it a big green tick and say that it has passed. By doing this, we incorrectly reinforce the idea that a test is successful if it doesn’t find any bugs. If we want to find defects and improve the quality of our software, we want our tests to fail.

This is a subtle but very important difference in testing approach. A good analogy is a test performed by a doctor. When a doctor returns with test results for your sore leg, the last thing you want to hear is: “That pain in your leg, the tests didn’t find anything, so you are fine, just ignore it.”

There is a danger when we change our expectation away from “All tests must pass all the time”, to “We want tests to fail”. The danger is that our expectation will instead become: “Most of our tests should fail and it’s ok to have tests failing for weeks on end.”

Ideally, your goal should be to have a high fidelity test system, with a core set of automated regression tests, that suffers an occasional failure as developers evolve the application over time. In addition to the regression tests, you should be adding tests for things like known defects, and new tests to expand the test coverage. Ideally, you should expect any new test to fail the first few times it is run. When the issue you found is resolved, the test should execute without failure and then stay that way.
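As a minimal sketch of what a “test for a known defect” can look like, consider the following (in Java, with a hypothetical method, bug number, and behaviour invented purely for illustration). The idea is that the exact input which reproduced a defect is kept in the suite forever, so it fails once, passes after the fix, and then stays that way:

```java
// A minimal sketch of a known-defect regression test. The method name,
// bug number, and behaviour are hypothetical illustrations.
class KnownDefectRegression {

    // Assumed fixed behaviour: a defect (say, bug #1234) where names with
    // trailing whitespace failed to match has been resolved by trimming.
    static boolean namesMatch(String a, String b) {
        return a.trim().equalsIgnoreCase(b.trim());
    }

    public static void main(String[] args) {
        // This exact input reproduced the defect before the fix; keeping
        // it in the suite ensures the bug cannot silently return.
        System.out.println(namesMatch("Smith ", "smith")); // prints true
    }
}
```

The test is deliberately narrow: it pins down one previously observed failure rather than re-verifying general behaviour the broader regression suite already covers.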

It is also important that your test suite has a high signal to noise ratio. As the number of tests increases, so will the amount of analysis that you need to perform when there are test failures.

Now if only I could charge for tests like a doctor does …

Tenet: A developer should never test their own software.

This post is the sixth in a seven part series covering my seven tenets of software testing.

Unit testing is good. Test Driven Development (TDD) is great, and it is something that every developer should do. However, like most development techniques, TDD is not a silver bullet. TDD is primarily focused on defining how a class should work, implementing that class, and then verifying the implementation performs as expected. This post isn’t about TDD, or unit testing. It is about application level testing. I feel it is important to mention TDD because it is the “exception to the rule”, where the “rule” is the true subject of this post.

A couple who are good friends of my wife and me recently had their first child. The child’s father is an orthopaedic surgeon who, during his years as an emergency ward doctor, delivered several babies. Before the birth I asked him: since he is qualified and experienced, could he arrange to deliver the baby himself if he wanted to? He answered pretty much as I expected. He would never consider delivering the baby himself, as he had too much emotional investment in the patient, his wife, and the event itself.

What the heck does this have to do with testing, I hear you ask? Just as a surgeon won’t operate on friends or family unless it is an emergency, a developer shouldn’t test their own code. The reason for this is clear: a developer cannot test their own code because they simply have too much emotional attachment to it.

Development and testing are two diametrically opposed disciplines. Development is all about construction, and testing is all about demolition. Effective testing requires a specific mindset and approach where you are trying to uncover developer mistakes, find holes in their assumptions, and flaws in their logic. Most people, myself included, are simply unable to place themselves and their own code under such scrutiny and still remain objective.

Let’s say that a developer has to write some code that calculates a sales commission, where the commission is normally 5%, but rises to 7% for sales over ten thousand dollars, and they implement the following code.

if (SalesAmount < 10000.00)
    Commission = SalesAmount * 0.05;
else
    Commission = SalesAmount * 0.07;
The developer has made the assumption that a sale of exactly $10,000 should earn 7% commission. If they are testing this code as well they might write tests similar to the following:
public void VerifyLowerCommission()   // e.g. checks that a $5,000 sale earns 5%
public void VerifyHigherCommission()  // e.g. checks that a $20,000 sale earns 7%
The problem with these tests is that even though they achieve 100% code coverage, the developer has based them on the same assumptions and thought processes they used when writing the code itself. In this contrived example, the stated rule was that the rate rises to 7% only for sales over ten thousand dollars, so a sale of exactly $10,000 should earn 5%; the code above pays it 7%. Both test cases would pass, yet the calculation is wrong. This type of bug would probably manifest itself infrequently, as it would require a sale of exactly $10,000 to cause a problem, and would otherwise remain dormant.
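To make the boundary issue concrete, here is a sketch of the calculation translated to Java (the class and method names are illustrative), together with the one extra test an impartial tester would likely add. Per the stated rule, the rate rises only for sales over ten thousand dollars, so exactly $10,000 should earn 5%:

```java
// Sketch of the commission calculation from the post, translated to Java.
class CommissionBoundary {

    static double commission(double salesAmount) {
        // The developer's assumption: exactly $10,000 earns the higher rate.
        if (salesAmount < 10000.00)
            return salesAmount * 0.05;
        else
            return salesAmount * 0.07;
    }

    public static void main(String[] args) {
        // The developer's own tests: one value in each band, both pass.
        System.out.println(commission(5000.00)  == 5000.00  * 0.05); // true
        System.out.println(commission(20000.00) == 20000.00 * 0.07); // true

        // The boundary test an impartial tester adds: the stated rule says
        // the rate rises only for sales OVER $10,000, so exactly $10,000
        // should earn 5%. This prints false, exposing the defect.
        System.out.println(commission(10000.00) == 10000.00 * 0.05);
    }
}
```

Boundary-value tests like the last one are cheap to write, but someone has to think to write them, which is exactly where a second pair of eyes helps.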

Having someone impartial write the tests for the code significantly increases the chance of finding that type of issue. This helps because they will form their own ideas about how things should work and challenge the developer’s assumptions.

Of course, the title of this post is there to get you thinking. Developers should test, but they should test someone else’s code, after first checking their own and then passing it to their friendly tester to really put it through its paces.

Tenet: To build it, you have to break it

This post is the fifth in a seven part series covering my seven tenets of software testing.

Let’s say that you are a modern, test-driven developer. You run your tests and the tests all pass. Great, your code must be bug free, let’s ship! Umm, not quite. Are you sitting down? Good, I need to tell you something. Your software has bugs.

It doesn’t matter if you are a graduate fresh out of college, Don Box or Anders Hejlsberg. If you are writing a program that does anything remotely useful, it will have bugs. In Code Complete, Steve McConnell presents some statistics of exactly how many bugs you should expect to find.

Setting the benchmark is the CMM poster child, the NASA team that writes the software for the space shuttle. The NASA team has achieved the impressive statistic of zero bugs for every 500,000 lines of released code.

For all the negative criticism about buggy software that Microsoft has received over the years, they do a pretty good job, with 1 defect per 2,000 lines of released code. By comparison, the rest of the software industry achieves between 15 and 50 errors per 1,000 lines of released code.1

It is important to note that these statistics are for bugs in released code, i.e. after testing has been completed. Even the impressive NASA numbers don’t mean that there aren’t any bugs in their code, especially before release; a much higher number of bugs will have been found and resolved before the code went out the door. So, if your code is, say, 10,000 lines long, you should expect, at a minimum, between 150 and 500 defects. So, if the bugs are there, how do I find them?
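The defect-rate arithmetic above can be sketched as a quick back-of-the-envelope calculation (class and method names are illustrative), using the industry rates quoted from McConnell:

```java
// Back-of-the-envelope defect estimate using the industry rates quoted
// above: 15 to 50 defects per 1,000 lines of released code.
class DefectEstimate {

    // Returns the { low, high } expected defect counts for a code base.
    static int[] expectedDefects(int linesOfCode) {
        int low  = linesOfCode * 15 / 1000;
        int high = linesOfCode * 50 / 1000;
        return new int[] { low, high };
    }

    public static void main(String[] args) {
        int[] range = expectedDefects(10_000);  // the 10,000-line example
        // Prints: 150 to 500 defects
        System.out.println(range[0] + " to " + range[1] + " defects");
    }
}
```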

Good testers will generally (sometimes subconsciously) use a technique known as error guessing. Error guessing is all about trying to throw something at the application that the developers haven’t thought of, otherwise known as a negative test.

Negative tests basically try to come up with permutations of data that the application has not been designed to handle. For example, an Int32 in .NET can hold numbers from -2,147,483,648 to 2,147,483,647. What is the behaviour of an application when an integer is set to 2,147,483,647 and then 1 is added to it?
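What actually happens at that edge depends on the language and its settings (C#, for instance, wraps in unchecked contexts and throws in checked ones). As a sketch, here is the behaviour in Java, where 32-bit integer arithmetic is defined to wrap around silently:

```java
// What happens at the edge of the 32-bit range: in Java, integer
// arithmetic silently wraps around rather than raising an error.
class OverflowDemo {
    public static void main(String[] args) {
        int value = Integer.MAX_VALUE;  // 2,147,483,647
        value = value + 1;              // wraps past the maximum
        System.out.println(value);      // prints -2147483648
    }
}
```

A silent wraparound like this is precisely the kind of result a negative test is designed to surface: the program keeps running, but with a value no one designed for.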

Negative tests are effective at finding bugs because they do things that the developer may never have considered when coding the application. They also represent the types of things that real users may do to a system, sometimes bringing it to its knees. Ideally we don’t want our users to do that on a regular basis, or they won’t be users for long. We need to find the bugs, that we know are there, before our end users do. The best way to find them is to do our damnedest to try and break the application, in parallel with construction, starting the day that the compiler produces some output.

Breaking the application as it is being built is important, because the longer a bug sits undiscovered, the more it will cost to remove. You want to find those bugs as early as possible, when they are cheapest to fix.

The best analogy for this technique is the development of a Formula One engine. Whilst the exact techniques are closely guarded secrets, the engine developers will probably push the engine and its components to the absolute limit, identify the cause of failure, resolve the problem, and then repeat the process. The alternative is to destroy engines race after race as the limits of the engine are discovered.

I’m sure Mark Webber doesn’t expect to have to be an engine test guinea pig during a race. Similarly, your users shouldn’t be expected to find your bugs for you either.


1 Steve McConnell 1993, Code Complete, Microsoft Press, pp. 612-613.
