If you have only walked in the dark, you will have never known the clarity that light brings. – me
This post is the fourth in a seven part series covering my seven tenets of software testing.
I was once giving a presentation to the CEO of the company I worked for about the current state of play within our organisation. I was Development Manager at the time, so testing was not my primary focus. During the presentation, I couldn’t resist including a couple of testing-related slides. The first slide showed an example defect trend graph, which I used to illustrate the sort of information that should be generated by the Test Manager to assist with day-to-day decisions. The second slide was the same graph with the data removed so that only the two axes remained, illustrating the lack of information available when there aren’t any testers logging issues.
Steve McConnell used a brilliant analogy in Code Complete1, where he compares testing to a bathroom scale when you are trying to lose weight. Steve (or should that be Mr. McConnell) states that the scale does not help you lose weight at all. The scale is merely an indicator of your progress towards your goal.
In my way of thinking, to extend Steve’s analogy, a test team is more like a weight loss clinic. The statistics and metrics that they produce are like the weekly weigh-in and blood test results that tell the real story of how you are progressing.
Government health warning: Metrics can be addictive
I don’t smoke, but testing metrics are like a cigarette habit: once you are used to having them, it is almost impossible to give them up. You may be able to go for short, painful stints without them, but you know it is a case of when they will be back, not if.
Metrics can provide insights and answers to curly questions such as: when will the product ship? The simple answer is to average the number of bugs fixed per day, and divide the total number of bugs by that average. That is approximately how many days until you reach zero bugs. So, if you are fixing 5 bugs per day and you have 200 active bugs, the earliest that you will ship is in 40 working days’ time. If you want to ship sooner, you will need to stop adding features and focus on fixing more bugs. The same information can be used in reverse to calculate a maximum allowable bug count. Say you have only 40 days until your desired ship date, and you are fixing 5 bugs per day as in the previous example. If your active bug count is over 200 today, you will probably miss your target. This number continuously decreases, so in 2 weeks’ time, with 30 working days to go, your bug count should be at the 150 mark if you are going to hit your ship date.
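The burn-down arithmetic above is simple enough to sketch in a few lines. A minimal example (the figures are the ones from the text; the function names are mine):

```python
def days_until_zero_bugs(active_bugs: int, fixes_per_day: float) -> float:
    """Estimate working days until the active bug count reaches zero."""
    return active_bugs / fixes_per_day


def max_allowable_bugs(days_remaining: int, fixes_per_day: float) -> float:
    """The same formula in reverse: the highest bug count you can carry
    today and still hit a ship date this many working days away."""
    return days_remaining * fixes_per_day


print(days_until_zero_bugs(200, 5))  # 40.0 working days until you can ship
print(max_allowable_bugs(30, 5))     # 150.0 bugs at most with 30 days to go
```

Crude as it is, tracking these two numbers week over week is often enough to tell whether a ship date is realistic.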
Interpreting the results sometimes makes you feel more like a statistician than a test manager. But trust me, it is well worth the effort.
1 Steve McConnell 1993, Code Complete, Microsoft Press.
This post is the third in a seven part series covering my seven tenets of software testing.
To start off, I’ll give credit where credit is due. I first came across this tenet in Microsoft Secrets1 some years ago. Whilst the book is starting to show its age these days, it is full of great little gems of information, and in the past I have made Chapter 5: “Developing and shipping products” required reading for members of my team.
The key idea behind the tenet is that testing starts the day development starts. This is a conscious move away from the waterfall approach where testers don’t get to start their testing until the developers have hit code complete. Starting testing so late in the process creates a situation where the true state of the product only becomes visible in the last third of the project or so. I don’t know about you, but if something is going off the rails, I want to know about it as soon as possible, so I can take some corrective actions before things get really ugly, and expensive to fix.
There are several techniques that can be utilised to start testing earlier in the process, adding significant value to the project.
Buddy testing
In a perfect world where budgetary constraints don’t exist (like, say, for the testers of the computers on Star Trek), a testing “buddy” is assigned to each and every developer. At the end of every day, the developer submits their code and hands over a private release to their testing buddy. The buddy tests the newly crafted code in its semifinished state and provides immediate feedback, allowing the developer to rectify any issues before the code is integrated into the main build.
This practice is apparently in wide use throughout Microsoft, and I am led to believe that the ratio of developers to testers in times past was approximately 1:1. However, the ratio may be higher or lower these days depending on the product, the quality bar and the amount of automation being used.
Well I don’t work for Microsoft, and this ain’t Star Trek, so how can the rest of us utilise this technique?
There are a couple of ways that this approach can be adopted in the absence of unlimited testing resources. Firstly, a tester may be allocated to a number of developers, say, an entire feature team, and they test the feature as it develops, instead of only Joe’s code.
In the complete absence of testers, a developer could pair up with another developer who has sufficient objectivity and emotional detachment from the code that they are testing. (Typically this would need to be a developer working on a completely different feature.) To encourage the buddy testing practice, issues found as part of a private release aren’t entered into the defect tracking system, allowing the developer to resolve them as quickly as possible.
Test Driven Development (TDD) and developer unit testing
In the last couple of years, TDD and the nUnit-style test harnesses have changed the unit testing landscape. nUnit formalises and automates the unit testing techniques that the better developers were already doing in times past. This style of testing is a great way to improve the quality of the code, and should definitely be utilised in one form or another.
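To illustrate the style (in Python’s unittest rather than nUnit, but the shape is the same), here is a sketch; `parse_quantity` and the test names are purely illustrative:

```python
import unittest


def parse_quantity(text: str) -> int:
    """Illustrative function under test: parse a non-negative quantity."""
    value = int(text)  # raises ValueError for non-numeric input
    if value < 0:
        raise ValueError("quantity must be non-negative")
    return value


class ParseQuantityTests(unittest.TestCase):
    # The "happy path" test most developers write naturally.
    def test_parses_a_plain_number(self):
        self.assertEqual(parse_quantity("42"), 42)

    # The negative (destructive) tests that an emotionally attached
    # author tends to skip - which is exactly the point of this tenet.
    def test_rejects_negative_values(self):
        with self.assertRaises(ValueError):
            parse_quantity("-1")

    def test_rejects_garbage(self):
        with self.assertRaises(ValueError):
            parse_quantity("lots")
```

Run with `python -m unittest` in the test file’s directory. Note how the destructive cases outnumber the happy-path case; that balance is what a detached tester brings naturally.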
The challenge, however, is the developer’s emotional attachment to their code, particularly when it comes to performing negative (destructive) tests. As a development/testing professional I can only say that nUnit-based unit tests are a great thing, but they are no replacement for someone with no emotional attachment to the code pounding away at it. This becomes particularly important as API-only testing becomes less and less effective (finding fewer and fewer bugs) as the product matures.
Daily build and smoke tests
Discussed in the previous tenet, the daily build and smoke test is a key foundation process for a development team that is serious about producing quality code. This practice should be implemented whenever possible.
Whilst the daily build and smoke test is great at identifying when something is broken, the technique has a fundamental flaw: the smoke test will not prevent the breakage from occurring in the first place. If a developer performs a smoke test after they do a local build, but before they submit their code, then the problem may be caught before the main branch is broken. The challenge with pre-checkin tests is that they can significantly increase the amount of time a developer spends submitting their code. You can expect a lot of resistance from developers to this type of process, especially if they are used to working on a small team and just checking in to VSS whenever they like. If your developers already follow the controlled check-in process that becomes necessary on larger projects, this should be easier to implement.
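A pre-checkin gate can be as simple as a script that runs the local build and the smoke test, and refuses to proceed on the first failure. A minimal sketch, assuming each step can be driven from a single command (the `make` targets below are placeholders, not any particular project’s real interface):

```python
import subprocess


def ready_to_check_in(steps) -> bool:
    """Run each (name, command) step in order; stop at the first failure."""
    for name, command in steps:
        if subprocess.run(command).returncode != 0:
            print(f"{name} failed - fix it before you check in")
            return False
    return True


# Placeholder steps a developer would run before submitting their code.
PRE_CHECKIN_STEPS = [
    ("local build", ["make", "build"]),
    ("smoke test", ["make", "smoke-test"]),
]
```

Keeping the steps short is the key design trade-off: the longer the gate takes, the more resistance you will get from developers, so the smoke test here should be a fast subset, not the full regression suite.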
Performance testing and application profiling
Application performance is almost always an issue, and the judicious use of a profiler early on can help identify issues that may come home to roost later. Just stepping through your code in the debugger can also provide valuable insight into where time is being spent, although this becomes harder and harder as your code base grows.
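As a taste of what early profiling looks like, here is a sketch using Python’s standard cProfile and pstats modules; `slow_report` is an illustrative stand-in for a real code path:

```python
import cProfile
import io
import pstats


def slow_report() -> int:
    # Deliberately naive: repeated string concatenation in a loop,
    # the kind of hotspot a profiler surfaces early.
    text = ""
    for i in range(2000):
        text += str(i)
    return len(text)


profiler = cProfile.Profile()
profiler.enable()
slow_report()
profiler.disable()

# Print the few functions where the most cumulative time was spent.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

Even a quick run like this, done early and repeated as the code grows, is enough to spot the hot spots before they come home to roost.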
Code reviews
Code reviews are another technique that can significantly improve the quality of the software being developed. They can vary from a quick informal review to a full-blown inspection. The costs and results will vary along with the formality, but at least some form of review should be scheduled during the development process.
Overall, there are a number of different techniques that can be applied to an application as it is being built, and judicious use of the available resources can improve the quality of software from the start of the development cycle.
1 Michael A. Cusumano and Richard W. Selby 1995, Microsoft Secrets, pg. 294, Harper Collins.
- Boundaries are explicit
- Services are autonomous
- Services share schema and contract, not class
- Service compatibility is determined based on policy
That inspired me to develop my own list of guiding principles that apply to software testing. These tenets document some key learnings from my years working as a Test Manager, Senior Consultant and Development Manager for various software development shops.
- You can’t test everything so you have to focus on what is important.
- If you are going to run a test more than once, it should be automated.
- Test the product continuously as you build it.
- Base your decisions on data and metrics, not intuition and opinion.
- To build it, you have to break it.
- Apart from Test-Driven Development, a developer should never test their own software.
- A test is successful when the software under test fails.