Building a GUI test automation framework: Design goals

Good software design is all about trade-offs. If you want your application to be portable across operating systems, you may have to sacrifice performance. If you want it to be secure, it may not be as user-friendly as an unsecured version. The trade-offs in designing test automation are no different. In my experience, there are two key design goals for automation.

  • Maintainability. Automated tests must be easy to maintain. Your tests will need to change, and a good automation framework should support and facilitate that change, not restrict it. Tests that are not easy to maintain quickly become a time-consuming burden.
  • Performance. Performance matters when it comes to automated tests. The tests you write will be executed over and over again, and shaving even a couple of seconds off each test adds up quickly.

Along with goals, there are some things that we definitely don’t want to try to achieve. I don’t particularly like the term, but I’ll call them non-goals.

  • Portability. Portability could be a goal if you wished, but typically automation will be built using one automation tool, on a single platform. I say platform, as opposed to operating system, because a common scenario is running the same test on different operating systems, for example Windows XP and Windows Server 2003. If you were building something like the .NET Common Language Runtime (CLR), portability would be a goal, particularly if you wanted to run the same test on the regular CLR, the Compact Framework on a handheld, and Rotor running on Mac OS X and FreeBSD. In the case of the CLR, however, you wouldn’t be using a GUI testing tool.
  • Language independence. Oh, how I wish this could be a design goal. Realistically, though, your tests will all have to be written in a single language, either VBA or some form of “vendorscript”, such as WinRunner’s Test Script Language (TSL).

High level design

Our design will be loosely based on NUnit, as follows:

  • A setup method, called at the start of every test, which cleans up if a previous test failed and then launches the application under test.
  • A teardown method, called at the completion of each test, which determines whether the test passed or failed, writes the test result, and then shuts down the application under test.
  • A number of assert methods that perform our verification and pass or fail the test appropriately.
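
To make this concrete, here is a minimal sketch in Python (for illustration only; real tests would be written in your tool’s language, as discussed above). The helpers app_is_running, launch_app, shut_down_app and write_result are hypothetical stand-ins for whatever your GUI automation tool actually provides.

    # Hypothetical stand-ins for what a real GUI tool would provide.
    def app_is_running(): return False
    def launch_app(): print("launching application under test")
    def shut_down_app(): print("shutting down application under test")
    def write_result(name, outcome): print(f"{name}: {outcome}")

    class TestFailure(Exception):
        """Raised by an assert method to fail the current test."""

    def setup():
        # Clean up if a previous test left the application running,
        # then launch a fresh instance of the application under test.
        if app_is_running():
            shut_down_app()
        launch_app()

    def teardown(test_name, outcome):
        # Write the test result, then shut down the application under test.
        write_result(test_name, outcome)
        shut_down_app()

    def assert_equal(expected, actual, message=""):
        # Fail the test if the verification does not hold.
        if expected != actual:
            raise TestFailure(f"{message}: expected {expected!r}, got {actual!r}")

    def run_test(test_name, test_body):
        setup()
        outcome = "FAIL"      # start in a non-passing state
        try:
            test_body()
            outcome = "PASS"  # only if the body runs to completion
        except TestFailure:
            outcome = "FAIL"
        finally:
            teardown(test_name, outcome)

    run_test("addition", lambda: assert_equal(2, 1 + 1, "simple sum"))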

In a previous post, “Assert me!”, I included a comment about how most test tools mark tests as passed by default.

In that example, the test was not marked as a failure or as not run; it was recorded as a pass. This is a common error that automated test tool developers tend to make. The default result for any automated test should be fail (or some other non-passing state) unless the test is explicitly passed during execution.

We will endeavour to change the default test result behaviour along the way.
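
One way to build that behaviour in is to track the result explicitly. The sketch below (again in Python, with class and method names of my own invention) starts every test in a non-passing state; only an explicit, successful verification can move it to pass, and a recorded failure is sticky.

    NOT_RUN, FAILED, PASSED = "NOT RUN", "FAIL", "PASS"

    class TestResult:
        def __init__(self):
            self.state = NOT_RUN      # default: a non-passing state

        def record_pass(self):
            # Only upgrade to PASS if nothing has failed so far.
            if self.state != FAILED:
                self.state = PASSED

        def record_fail(self):
            # Failure is sticky: a later pass cannot overwrite it.
            self.state = FAILED

    result = TestResult()
    print(result.state)   # NOT RUN -- a test that never asserts cannot pass
    result.record_pass()
    print(result.state)   # PASS only after an explicit pass

With this in place, an empty test, or one that crashes before reaching any verification, is reported as not run or failed rather than quietly passing.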
