Category Archives: Testing Tenets

Revisiting the testing tenets

Back in 2005 I wrote a series of seven blog posts called The Seven Tenets of Software Testing. These posts have been buried deep in this site, so I have added a new page – Tenets of software testing – that links to all of the original articles, hopefully making them easier to find if you are new to my software testing blog.


Tenet: If you are going to run a test more than once, it should be automated.

This post is the second in a seven-part series covering my seven tenets of software testing.

Original post (31 Jan 2005)

Automation explained

Test automation means different things to different people in different contexts. When I talk about automated testing, I am referring to: a method of executing a test, without human intervention, that would otherwise require it. In practical terms that may mean an nUnit test, a GUI test using a commercial testing tool, or a test written using an application’s internal scripting language. The technology is not the key concern; the key is that the test can be run 100% without any human involvement.
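
To make that concrete, here is a rough sketch of the kind of nUnit test I mean (the fixture, names and calculation are purely illustrative, not taken from any real project). Everything – set-up, execution and verification – happens in code, so the test can run unattended:

using NUnit.Framework;

[TestFixture]
public class OrderTotalTests
{
    [Test]
    public void DiscountedTotalIsCalculatedWithoutHumanInput()
    {
        // Set up the data, exercise the calculation and verify the result,
        // all in code - no tester needs to be watching when this runs.
        decimal total = 100m * 0.9m;
        Assert.AreEqual(90m, total, "Failed : Verifying a 10% discount on $100.");
    }
}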

Why automate?

The primary reason to automate tests is time. As a tester, you always need more time.

Automation can provide immense reductions in the amount of time required to execute tests. My first introduction to automated testing, in 1997, managed to condense 5 days of manual testing effort into 1 hour of automated execution. With 5 days being roughly 40 working hours, that is a 97.5% reduction in execution time.

Think about that for a moment: by automating our tests, we had achieved the equivalent of adding 40 additional testers to the team, for a fraction of the price. In addition, I had taken the drudgery out of my team’s work day, increasing morale. More importantly, automation lets your testers perform more ad-hoc testing, which is far more effective than performing the same manual test over and over again.

So is that it, then? Should we just retrench all our testers and use automation instead? Well, no.

Despite a recent example where Microsoft retrenched 62 Longhorn testers, citing automation as the cause, test automation is generally used to help increase the amount of test coverage that can be achieved with a given schedule and resources, not to reduce testing head-count.

When implemented correctly, and with enough hardware, automating your tests allows you to execute all of your tests at least once a day, every day. Combined with a daily build, you have one of (if not the) most powerful testing tools on the planet.

Build verification or smoke testing

Every night after a successful compile, the daily build should automatically be packaged, deployed into a test environment, and smoke tested with an automated build verification test (BVT). The term “smoke test” is derived from the idea of quickly plugging in some electronics and checking that no smoke comes out. While ideally you would run all your tests, the smoke test is typically a carefully selected subset of your full automated suite that can be run in about 10-20 minutes.
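
As a rough sketch of one way to carve out that subset (the test and the category name are my own illustration, not from any particular project), nUnit lets you tag tests with a category, and nunit-console (which, if I recall correctly, has /include and /exclude switches for categories) can then be asked to run only the tagged tests for the BVT while the nightly run executes everything:

using NUnit.Framework;

[TestFixture]
public class BuildVerificationTests
{
    // Tagged so the post-build step can run just the quick, broad checks,
    // for example via nunit-console's /include:Smoke switch, while the full
    // suite simply runs without the filter.
    [Test, Category("Smoke")]
    public void DeployedConfigurationCanBeRead()
    {
        Assert.IsNotNull(System.Configuration.ConfigurationManager.AppSettings,
            "Failed : Verifying that the deployed application settings can be read.");
    }
}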

Automated regression testing

regression n. “[To] relapse to a less perfect or developed state.”

Regression testing has the goal of ensuring that the quality of an application doesn’t decline as features are added. It faces a significant challenge: when an application needs it most, there is less time and there are fewer resources available to perform all the tests that were executed when the product first shipped.

Without automation, the typical approach is to perform localised regression testing, which is limited to directly testing the area around the changes.

With an automation suite in hand, it is simply a matter of executing all the tests that were developed previously. This allows the maintenance programmers to make frequent releases, and allows the quality of the application to improve over time.

This is particularly important in light of the trend that Fred Brooks suggests in his classic work, The Mythical Man-Month, where each defect that is fixed has a 20-50% chance of introducing another.

The largest automation project that I have personally worked on was a huge effort in which my client ran their $1M+ investment in tests on a rack of 25 dedicated machines that pounded away relentlessly, shortening their regression testing cycle by 75% – and that cycle was partially automated to begin with!

Update (Tue, 8 Feb 2005): Sara Ford, a tester on the Visual Studio team at Microsoft, has blogged about this topic as well here.

Update (Wed, 2 Aug 2006): A recent post by Bj Rollison, For those of you dreaming the 100% automation dream…please wake up!, makes a very good point: a goal of 100% automated tests is completely unrealistic. I agree with his post, but feel that I need to add a clarification to this one.

With any form of testing, you have to focus on what is important. Blindly trying to automate everything just to reach an arbitrary automation goal is contrary to that. Does that mean this tenet is wrong, then? No, I don’t think so. There is a significant difference between should be automated and must be automated. In my experience, most projects need a heck of a lot more automation than they have. If this tenet were called “you must automate what you can”, the whole point of it would have been lost. Personally, the highest I have ever achieved on a project was 95% automated, and that lasted for all of 1 day before we added more tests.

The #1 priority for a tester on any project should always be to find and log issues, however you find them. Investing in the right amount of automation, however, can make that a whole lot more effective.


Assert me!

Following on from my previous post referring to Joe Schmetzer’s work on unit testing anti-patterns, I felt the need to share my observations from some recent code reviews that I have performed.

Always include at least one assert in each test

If I hadn’t seen a unit test with no asserts myself, I would find it hard to believe. Every nUnit test must contain at least one assert statement. Without an assert, the test will pass regardless of what the code does. For example, the following test:

[Test]
public void TestName()
{
    Console.WriteLine("\nThis is a test with no asserts.");
}
will produce the following output in nunit-console.
.
This is a test with no asserts.

Tests run: 1, Failures: 0, Not run: 0, Time: 2.541 seconds

As you can see, this is not marked as a failure or as not run; it is recorded as a pass. This is a common error that automated test tool developers tend to make. The default result for any automated test should be failure (or some other non-passing state), unless the test is explicitly passed during execution.

Microsoft has the right idea for the generated test stubs in Whidbey: Visual Studio 2005 places an Assert.Inconclusive statement in the body of any test it auto-generates. That is a good first step, but ideally it should be the default behaviour for any test with no asserts, without requiring the Assert.Inconclusive statement to even be there.
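
If memory serves, the stub that Visual Studio 2005 generates looks roughly like this (the method name here is just an example):

[TestMethod()]
public void CalculateTotalTest()
{
    // TODO: add real set-up and asserts here.
    Assert.Inconclusive("Verify the correctness of this test method.");
}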

Back to the code review: to be fair to the developer involved, the test code was testing an asset object, which is only one letter different from an assert statement, so the omission was pretty easy to miss.

Where multiple asserts are used in a test, include a descriptive string.

If your test has more than one assert statement, each assert needs a descriptive string explaining what is being validated. The reason is that if a test with a bunch of asserts fails, you will otherwise have to debug through the test code to find out which assert failed. For example, the following test:

[Test]
public void TestIntegers()
{
    int FirstInteger = 1;
    int SecondInteger = 1;
    Assert.AreEqual(1, FirstInteger);
    Assert.AreEqual(2, SecondInteger);
}
will produce the following output in nunit-console.
.F
Tests run: 1, Failures: 1, Not run: 0, Time: 2.606 seconds

Failures: 1) Teknologika.Bulldozer.Tests.BlogPostTests.TestIntegers : expected:<2> but was:<1>

The challenge here, of course, is working out which assert failed. In the contrived example above, it is obviously the second one. If it was an actual test, however, we would most likely have to debug through the test code to find out. If we add some descriptions to the asserts, as follows:

[Test]
public void TestIntegers()
{
    int FirstInteger = 2;
    int SecondInteger = 1;
    Assert.AreEqual(FirstInteger, 1, "Failed : Verifying that FirstInteger is 1." );
    Assert.AreEqual(SecondInteger, 2, "Failed : Verifying that SecondInteger is 2." );
}
Then we get a much more meaningful output, which lets us focus straight on the assert that is failing.
.F
Tests run: 1, Failures: 1, Not run: 0, Time: 2.791 seconds

Failures: 1) Teknologika.Bulldozer.Tests.BlogPostTests.TestIntegers : Failed : Verifying that FirstInteger is 1. expected:<1> but was:<2>

Refactor your tests into smaller, more atomic tests

The previous example is hiding something. There is more than one failure in the TestIntegers test, but we are only seeing the first one, as nUnit aborts the test once the first failure is reached.

If this was a real test, and each of the failures was being caused by something different, we would probably fix the first one and re-run the tests, only to have the test fail on the second error.

Incidentally, in my contrived example both asserts compare the same pair of values (1 and 2), so if we hadn’t added the descriptions, the two failure messages would look practically the same, and you might still think the test was broken even though we had fixed the first error.

So if we refactor our test into two smaller atomic tests like this:

[Test]
public void TestFirstInteger()
{
    int FirstInteger = 2;
    Assert.AreEqual(FirstInteger, 1, "Failed : Verifying that FirstInteger is 1." );
}

[Test]
public void TestingSecondInteger()
{
    int SecondInteger = 1;
    Assert.AreEqual(2, SecondInteger, "Failed : Verifying that SecondInteger is 2.");
}
then our results show the full story.
FF
Tests run: 2, Failures: 2, Not run: 0, Time: 3.209 seconds

Failures: 1) Teknologika.Bulldozer.Tests.BlogPostTests.TestFirstInteger : Failed : Verifying that FirstInteger is 1. expected:<1> but was:<2>

2) Teknologika.Bulldozer.Tests.BlogPostTests.TestingSecondInteger : Failed : Verifying that SecondInteger is 2. expected:<2> but was:<1>

The bottom line here is that spending a small amount of time when writing your tests can make things a whole lot easier when it comes to investigating your test failures.
