
Test Automation

Often when a test automation tool is introduced to a project, the expectations for the return on investment are very high. Project members anticipate that the tool will immediately narrow the scope of the software testing effort, reducing both cost and schedule. However, I have seen several implementations of test automation fail, miserably.


The following very simple factors largely influence the effectiveness of automated testing; if they are not taken into account, the result is usually a lot of lost effort and some very expensive shelfware.

Scope - It is not practical to try to automate every test, nor is the time generally available. Pick very carefully the functions/areas of the application that are to be automated.
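As a rough illustration of how candidates might be ranked, the sketch below scores each function on stability, execution frequency, and criticality. All feature names, weights, and figures here are invented.

    # Hypothetical scoring of automation candidates: stable, frequently
    # run, business-critical functions pay back automation effort fastest.
    candidates = [
        # (feature, stability 1-5, runs per release, criticality 1-5)
        ("login",         5, 30, 5),
        ("report export", 4, 10, 3),
        ("new wizard UI", 1, 15, 4),  # volatile: a poor candidate
    ]

    def automation_score(stability, frequency, criticality):
        # Volatile features score low however often they run, reflecting
        # the maintenance cost of constantly updating their scripts.
        return stability * frequency * criticality

    for name, s, f, c in sorted(candidates,
                                key=lambda x: -automation_score(*x[1:])):
        print(name, automation_score(s, f, c))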

Preparation Timeframe - The preparation time for automated test scripts has to be taken into account. In general, preparing automated test scripts takes two to three times longer than preparing for manual testing. In reality, chances are that initially the tool will actually increase the testing scope, so it is very important to manage expectations. An automated testing tool does not replace manual testing, nor does it replace the test engineer. Initially the test effort will increase, but when test automation is done correctly it will decrease on subsequent releases.

Return on Investment - Because the preparation time for test automation is so long, I have heard it stated that the benefit of test automation only begins to accrue around the third time the tests are run.
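A back-of-the-envelope calculation shows why; the hours below are invented figures, not measurements.

    # Hypothetical effort figures: when does automation pay for itself?
    manual_hours_per_run    = 10   # effort to execute the suite by hand
    automation_prep_hours   = 25   # preparation cost, paid once up front
    automated_hours_per_run = 1    # supervision/analysis per automated run

    runs = 0
    while (automation_prep_hours + runs * automated_hours_per_run
           >= runs * manual_hours_per_run):
        runs += 1
    print("Automation breaks even after run", runs)
    # With these figures: 25 + 3*1 = 28 < 3*10 = 30, i.e. around the third run.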

When is the benefit to be gained? Choose your objectives wisely, and think seriously about when and where the benefit is to be gained. If your application is changing significantly and regularly, forget about test automation: you will spend so much time updating your scripts that you will not reap many benefits. However, if only disparate sections of the application are changing, or the changes are minor, or there is a specific section that is not changing at all, you may still be able to successfully utilise automated tests. Bear in mind that you may only ever be able to do a complete automated test run when your application is almost ready for release, i.e. nearly fully tested! If your application is very buggy, the likelihood is that you will not be able to run a complete suite of automated tests because of the failing functions encountered along the way.

The Degree of Change - The best use of test automation is for regression testing, whereby you use automated tests to ensure that pre-existing functions (e.g. functions from version 1.0, i.e. not the new functions in this release) are unaffected by any changes introduced in version 1.1. Since proper test automation planning requires that the test scripts are designed so that they are not totally invalidated by a simple GUI change (such as renaming or moving a particular control), you need to take into account the time and effort required to update the scripts. For example, if your application is changing significantly, the scripts from version 1.0 may need to be completely re-written for version 1.1, and the effort involved may at best not be accounted for, and at worst be prohibitive! However, if only disparate sections of the application are changing, or the changes are minor, you should be able to successfully utilise automated tests to regress these areas.
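One common way to stop a simple GUI change from invalidating every script is to route all control lookups through a single place, as in the Page Object pattern. The sketch below assumes Selenium WebDriver for Python; the element IDs are invented.

    # Page Object sketch (assumes the selenium package; IDs are invented).
    from selenium.webdriver.common.by import By

    class LoginPage:
        # If a control is renamed or moved in version 1.1, only these
        # locators change; every script that uses LoginPage still runs.
        USERNAME = (By.ID, "username")     # was "user_name" in v1.0
        PASSWORD = (By.ID, "password")
        SUBMIT   = (By.ID, "login-button")

        def __init__(self, driver):
            self.driver = driver

        def login(self, user, password):
            self.driver.find_element(*self.USERNAME).send_keys(user)
            self.driver.find_element(*self.PASSWORD).send_keys(password)
            self.driver.find_element(*self.SUBMIT).click()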

Test Integrity - How do you know (measure) whether a test passed or failed? Just because the tool returns a pass does not necessarily mean that the test itself passed. For example, just because no error message appears does not mean that the next step in the script completed successfully. This needs to be taken into account when specifying pass/fail criteria for test scripts.
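For example, a pass criterion can check the state a step was supposed to produce, not merely the absence of an error. A minimal sketch, with AppUnderTest standing in for the real application:

    # "No error appeared" is not a pass criterion on its own; verify the
    # state the step was meant to produce. AppUnderTest is a stand-in.
    class AppUnderTest:
        def __init__(self):
            self.orders = {}
            self.error_shown = False

        def create_order(self, item, qty):
            order_id = len(self.orders) + 1
            self.orders[order_id] = {"item": item, "qty": qty,
                                     "status": "confirmed"}
            return order_id

    def test_order_is_created():
        app = AppUnderTest()
        order_id = app.create_order(item="widget", qty=3)

        # Weak criterion: passes even if the order silently failed to save.
        assert not app.error_shown

        # Stronger criterion: the order exists with the expected contents.
        order = app.orders[order_id]
        assert order["qty"] == 3 and order["status"] == "confirmed"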

Test Independence - Test independence must be built in, so that a failure in the first test case does not cause a domino effect, either preventing the rest of the test scripts in the suite from running or causing them to fail. In practice, however, this is very difficult to achieve.
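One way to build in that independence is for every test to create its own starting state rather than relying on whatever an earlier test left behind. A minimal sketch using the standard library's unittest:

    # Each test gets a fresh fixture, so a failure in one test cannot
    # knock over the next. (unittest ships with Python.)
    import unittest

    class CustomerTests(unittest.TestCase):
        def setUp(self):
            # Rebuilt before every test; never reuse data created
            # (or corrupted) by a previous test case.
            self.customers = {"id1": {"name": "Alice", "active": True}}

        def test_deactivate_customer(self):
            self.customers["id1"]["active"] = False
            self.assertFalse(self.customers["id1"]["active"])

        def test_customer_lookup(self):
            # Sees a pristine fixture even if the test above failed.
            self.assertEqual(self.customers["id1"]["name"], "Alice")

    if __name__ == "__main__":
        unittest.main()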

Debugging or "testing" of the actual test scripts themselves - time must be allowed for this, and for proving the integrity of the tests.
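One cheap way to prove that integrity is to point a check at input known to be bad and confirm it actually fails; a test that cannot fail proves nothing. A small sketch (runnable under pytest; the balance check is invented):

    # Prove the test can fail: feed the pass/fail check data that is
    # known to be wrong and confirm it is rejected.
    def balance_is_valid(balance):
        # The check the test scripts rely on.
        return balance >= 0

    def test_check_accepts_good_data():
        assert balance_is_valid(100)

    def test_check_rejects_known_bad_data():
        # If this passed for -50, every script built on the check
        # would be worthless.
        assert not balance_is_valid(-50)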

Record & Playback - DO NOT RELY on record & playback as the SOLE means of generating a script. The idea is appealing: you execute the test manually while the test tool sits in the background and remembers what you do, then generates a script that you can run to re-execute the test. It is a great idea - that rarely works (and proves very little).
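The brittleness shows in the kind of script these tools emit. The sketch below imitates typical recorder output; the coordinates, timings, and the gui driver object are all invented.

    # Imitation of raw record & playback output. Every hard-coded detail
    # is a way for the script to break without the application being wrong.
    import time

    def replay(gui):                      # gui: hypothetical playback driver
        gui.click(x=412, y=337)           # breaks if the button moves 5 pixels
        time.sleep(2)                     # breaks if the server takes 3 seconds
        gui.type_text("jsmith")           # breaks if focus landed elsewhere
        gui.click_window("Login - v1.0")  # breaks on the v1.1 title change
        # And no verification at all: if login failed silently, it "passes".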

Maintenance of Scripts - Finally, there is a high maintenance overhead for automated test scripts: they have to be kept continuously up to date, otherwise you will end up abandoning hundreds of hours of work because there have been too many changes to the application to make modifying the test scripts worthwhile. For the same reason, it is important that the documentation of the test scripts is also kept up to date.
