
AUTOMATED TEST TOOLS EVALUATION CRITERIA

Terry Horwath
Version 1.02 (1/18/07)


Table of Contents

1. INTRODUCTION
   1.1 Author’s Background
   1.2 Allocate Reasonable Resources and Talent
   1.3 Establish Reasonable Expectations

2. RECOMMENDED EVALUATION CRITERIA
   2.1 GUI Object Recognition
   2.2 Platform Support
   2.3 Recording Browser Objects
   2.4 Cross-browser Playback
   2.5 Recording Java Objects
   2.6 Java Playback
   2.7 Visual Testcase Recording
   2.8 Scripting Language
   2.9 Recovery System
   2.10 Custom Objects
   2.11 Technical Support
   2.12 Internationalization Support
   2.13 Reports
   2.14 Training & Hiring Issues
   2.15 Multiple Test Suite Execution
   2.16 Testcase Management
   2.17 Debugging Support
   2.18 User Audience


1. INTRODUCTION

This document provides a list of evaluation criteria that has proven useful to me when evaluating automated test tools such as Mercury Interactive’s QuickTest Professional and WinRunner, and Segue’s Silk, over the last several years for a variety of clients. I hope some readers will find this information useful and that it reduces their evaluation effort.

The specific criteria used for each project differ based on the client’s:

• testing environment, and
• test engineers’ programming backgrounds and skill sets, and
• type of software being tested [especially the software development tool, such as Visual Basic, PowerBuilder, Java, browser-based applications, etc.], and
• application(s) testing requirements.

The remainder of this chapter provides a variety of miscellaneous thoughts I have on automating the testing process, while Chapter 2 contains my list of potential evaluation criteria. Note that some of the Chapter 2 evaluation criteria are Java and web application testing oriented. Substitute your application development tool—for example, Visual Basic or PowerBuilder—in the Java-related evaluation criteria items.

1.1 Author’s Background

I have designed custom frameworks, as well as hundreds of test cases, using Silk/QaPartner from 1994 (version 1.0) through 2004 (version 5.5), with WinRunner (version 5) and Test Director in 1999 and 2000, and with QuickTest Professional since 2006 (versions 8 and 9).

1.2 Allocate Reasonable Resources and Talent

Most software testing projects do not fail because of the selected test tools—virtually all of the top automated testing tools on the market can be used to do an adequate job, even when the test tool is not well matched with the software development environment. Rather, I believe that most failures are due to a combination of the following reasons:

1. Test engineers fail to treat the effort to develop a large number of complex test cases and test suites as a large software development project—it is crucial to apply good software development methodology to produce a test product, which includes defining requirements, developing a schedule, implementing each test suite using a shared custom framework of well-known libraries and guidelines, and using a software version control system. (A minimal sketch of such a shared library appears after this list.)

2. Sufficient manpower and time are not allocated early enough in the application development cycle. Along with incomplete testing, this also leads to the phenomenon of test automation targeted for use with Release N actually being delivered and used with Release N+1.

3. Test technicians with improper skills are assigned to use these automated test tools. Users of these tools must have strong test mentalities, and in all but a few situations they must also possess solid programming skills with the automation tool’s scripting language.
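To make point 1 concrete: a shared custom framework often begins as nothing more than a library of well-tested helpers that every suite imports, versioned alongside the testcases themselves. Below is a minimal sketch in Python; the app object and the logical object names are hypothetical stand-ins, not the API of any tool discussed here.

    # framework/session.py -- shared helpers imported by every test suite,
    # kept under version control together with the testcases themselves.
    import logging

    log = logging.getLogger("framework")

    def login(app, user: str, password: str) -> None:
        """Standard login sequence, written once and reused by all suites."""
        app.set_text("login.user_field", user)        # hypothetical harness API
        app.set_text("login.password_field", password)
        app.click("login.ok_button")
        log.info("logged in as %s", user)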


1.3 Establish Reasonable Expectations

Through their promotional literature, automated test tool vendors often establish unrealistic expectations in one or more of the following areas:

• What application features and functions can truly be tested with the tool.
• The skill level required to effectively use the tool.
• How useful the tool’s automatic recording capabilities are.
• How quickly effective testcases can be produced.

This is unfortunate because, in the hands of test engineers possessing the proper skill set, all of the top automated test tools can be used to test significant portions of virtually any GUI-centric application. Use the following assumptions when reviewing this document and planning your evaluation effort:

1. Even when a test tool is well matched with a software development tool, the test tool will still only be able to recognize a subset of the application’s objects—windows, buttons, controls, etc.—without taking special programming actions. This subset will be large when the development engineers create window objects using the development tool’s standard class libraries. The related issue of cross-browser playback also rears its head when testing web applications.

2. If test engineers want to unleash the full power of the test tool, they will need to have, or develop, solid programming skills with the tool’s scripting language.

3. With few exceptions, recording utilities—those tools which capture user interaction and insert validation functions—are only effective in roughing out a testcase. Thereafter, captured sequences will most often need to be cleaned up and/or generalized using the scripting language.

4. If an application has functionality which can’t be tested through the GUI, you will need to: (a) use the tool’s ability to interface to DLLs—for Windows-based applications; (b) use its SDK (software developer kit) or API, if it supports one of these mechanisms; (c) use optional tools—at an additional cost—offered by the test tool vendor; or (d) use other 3rd-party non-GUI test tools more appropriate to the testing task. (A sketch of option (a) appears after this list.)

5. If you are currently manually testing the application to be automated, you will need to initially increase the size of the test team by a minimum of 1 or 2 test engineers who possess good programming backgrounds. After a significant portion of the testcases have been written and debugged, you can start removing some of the manual test engineers. Payback comes at the end of the automation effort, not during the initial implementation.

6. If the test team does not contain at least one member previously involved with automating the test process, coming up to speed is no small task—no matter which tool is selected. Budget dollars and time for training classes and consulting offered by the tool vendor to get your test team up and running.

7. Budget 80 hours of time to do a detailed evaluation of each vendor’s automated test tool against your selected evaluation criteria, using one of your applications. While you might initially recoil from this significant investment in time, keep in mind that the selected tool will likely be part of your department’s testing effort for many years—selecting the wrong tool will cost far more than 80 hours in lost productivity.
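As an illustration of option 4(a) above, a scripting language that can call into DLLs lets you exercise application logic the GUI never exposes. The sketch below uses Python’s ctypes module; the DLL name, the exported function, and the expected result are all hypothetical.

    import ctypes

    # Load a (hypothetical) application DLL and call an exported function
    # directly, bypassing the GUI entirely. Windows-only (WinDLL).
    engine = ctypes.WinDLL("pricing_engine.dll")
    engine.ComputeDiscount.argtypes = [ctypes.c_double, ctypes.c_int]
    engine.ComputeDiscount.restype = ctypes.c_double

    # Validate the computation against the (hypothetical) spec.
    result = engine.ComputeDiscount(100.0, 3)   # list price, quantity
    assert abs(result - 90.0) < 1e-9, "unexpected discount result"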


2. RECOMMENDED EVALUATION CRITERIA

2.1 GUI Object Recognition

Does the tool:

(a) Provide the ability to record each object in a window—or on a browser page—such that a logical object identifier, used in the script, is definable independently of the operating system dependent property [or properties] used by the tool to access that object at runtime.

(b) (1) Provide the ability to associate (i.e. map) the logical object identifier with more than one operating system dependent property. And, (2) does the tool offer some property definition technique which supports internationalization [if language localization is a testing requirement]? (A sketch of such a mapping appears after this list.)

(c) Provide the ability to record—and deal effectively with—dynamically generated objects [often encountered when testing web applications].
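To make (a) and (b) concrete, the sketch below shows one way a logical-to-physical object map might look, including a per-locale locator override for internationalized labels. This is a minimal illustration in Python, not any vendor’s actual GUI map or object repository format.

    # Hypothetical logical-to-physical object map. Testcases refer only to
    # logical names; the physical locators can change, or vary by locale,
    # without touching testcase code.
    OBJECT_MAP = {
        "login.user_field": {"by": "id", "value": "username"},
        "login.ok_button": {
            "by": "xpath",
            "value": "//button[text()='OK']",                  # default locale
            "locale_overrides": {"fr": "//button[text()='Valider']"},
        },
    }

    def resolve(logical_name: str, locale: str = "en") -> tuple[str, str]:
        """Translate a logical object name into a (by, value) locator pair."""
        entry = OBJECT_MAP[logical_name]
        value = entry.get("locale_overrides", {}).get(locale, entry["value"])
        return entry["by"], value

    print(resolve("login.ok_button", "fr"))   # ('xpath', "//button[text()='Valider']")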

2.2 Platform Support

Are all of the required platforms [e.g. NT 4.0, Windows XP, Windows Vista, etc.] supported for:

(a) testcase playback?
(b) testcase recording?
(c) testcase development [programming without recording support]?

2.3 Recording Browser Objects

Does the tool provide the ability to record against web applications under test, correctly recognizing all browser page HTML objects, using the following browsers:

(a) IE7?
(b) IE6?
(c) Firefox?

2.4 Cross-browser Playback

Does the tool provide the ability to reliably and repeatedly play back test scripts against browsers which were not used during object capture and testcase creation, with little or no:

(a) Changes to the GUI map (WinRunner), GUI declarations (Silk) or the equivalent in other tools?

(b) Changes to testcase code?

(c) Does the tool provide some type of generic capability [without using sleep-like commands in the code] to deal with “browser not ready” conditions and correctly synchronize code execution—such as when accessing a web page over a slow internet connection? (A synchronization sketch appears after this list.)
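As one concrete illustration of (c), modern browser drivers expose explicit waits that poll for a condition rather than sleeping for a fixed interval. The sketch below uses Selenium’s Python bindings as an assumed stand-in; the tools named in this document each offer their own equivalent synchronization commands.

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Firefox()
    driver.get("https://example.com/slow-page")   # hypothetical URL

    # Poll for up to 30 seconds until the results table appears, instead of
    # hard-coding a sleep that is either too short or wastefully long.
    table = WebDriverWait(driver, 30).until(
        EC.presence_of_element_located((By.ID, "results"))
    )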


2.5 Recording Java Objects

Does the tool:

(a) Provide the ability to record objects against, and see, all standard Swing, AWT and JFC 1.1.8 and 1.2 objects when running the Java application under test?

(b) Provide the ability to record objects against [and interact with] non-standard Java classes required by the Java application under test (e.g. the KLGroup’s 3rd-party controls, when the application under test uses that 3rd-party toolset)?

(c) Require that the platform’s static classpath environment variable be set with tool-specific classes, or can this be set within the tool on a test-suite-by-test-suite basis?

2.6 Java Playback

Does the tool:

(a) Reliably and repeatedly play back the evaluation testcases?

(b) Provide some type of generic capability [without using sleep-like commands in the code] to deal with “application not ready” conditions and correctly synchronize code execution? [This may or may not be an issue, depending on the application being tested.]
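Independent of any particular tool, the generic capability asked for in (b) reduces to polling a readiness predicate with a timeout, so that testcases never hard-code fixed sleeps. A minimal, tool-agnostic sketch in Python; the harness object in the usage comment is hypothetical.

    import time

    def wait_until(predicate, timeout_s: float = 30.0, poll_s: float = 0.25) -> bool:
        """Poll predicate() until it returns True or timeout_s elapses."""
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            if predicate():
                return True
            time.sleep(poll_s)   # polling interval, hidden inside the helper
        return False

    # Usage in a testcase (hypothetical harness object named app):
    #     assert wait_until(lambda: app.main_window_ready()), "app never ready"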

2.7 Visual Testcase Recording

Does the tool:

(a) Provide the ability to visually record testcases by interacting with the application under test as a real user would?

(b) Provide the ability, while visually recording a testcase, to interactively insert—without resorting to programming—validation statements?

(c) Provide the ability, while interactively inserting a validation statement, to visually/interactively select validation properties (e.g. contents of a text field, focus on a control, control enabled, etc.)?

2.8 Scripting Language

Is the test tool’s underlying scripting language:

(a) Object-oriented?
(b) Proprietary?

2.9 Recovery System

Does the tool support some type of built-in “recovery” system, which the programmer can control/define, that drives the application under test back to a known state—especially in the case where modal dialogs were left open when a testcase failure occurred?
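Where a tool lacks a built-in recovery system, a rough approximation is a cleanup hook that runs after every testcase, dismisses stray modal dialogs, and navigates back to a known base state. A minimal sketch in Python; the app object and its methods are hypothetical.

    def run_testcase(app, testcase):
        """Run one testcase, then force the application back to a known state."""
        try:
            testcase(app)
        finally:
            # Recovery: dismiss any modal dialogs a failed test left open,
            # then return to the application's main window.
            while app.topmost_modal_dialog() is not None:
                app.topmost_modal_dialog().dismiss()
            app.navigate_to_main_window()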


2.10 Custom Objects

What capabilities does the tool provide to deal with unrecognized objects in a window or on a browser page? [Spend a fair amount of time evaluating this capability, as it is quite important.]
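Two workarounds worth probing during the evaluation: mapping an unrecognized widget class onto a standard class the tool already understands, or falling back to coordinate-based actions relative to a recognized parent window. A rough sketch of the coordinate fallback in Python; the window API is hypothetical.

    def click_unrecognized_widget(window, dx: int, dy: int) -> None:
        """Click at a fixed offset inside a recognized parent window.

        Last-resort fallback for widgets the tool cannot identify: fragile,
        since it breaks on layout changes, so prefer class remapping when
        the tool supports it.
        """
        x, y = window.screen_origin()       # top-left corner of the parent
        window.click_at(x + dx, y + dy)     # coordinate-based click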

2.11 Technical Support

What was the quality and timeliness of technical support received during product evaluation? [Remember—it won’t get any better after you purchase the product, but it might get worse.]

2.12 Internationalization Support

Evaluate the support for internationalization [also referred to as language localization] in the following areas [if this is a testing requirement]:

(a) Object recognition.
(b) Object content (such as text fields, text labels, etc.).
(c) Evaluate and highlight any built-in or add-on multi-language support offered by the vendor.

2.13 Reports

What type of reporting and logging capabilities does the tool provide?

2.14 Training & Hiring Issues

(a) What is your [not the vendor’s] estimated learning curve to become competent (i.e. able to write useful test scripts which may need to be rewritten later)?

(b) What is your estimated learning curve to become skilled (i.e. able to write test scripts which rarely need to be rewritten)?

(c) What is your estimated learning curve to become an expert (i.e. able to design frameworks)?

(d) What is your estimate of the availability of potential (i) employees, and (ii) expert consultants, skilled with this tool in your geographic area?

2.15 Multiple Test Suite Execution

(a) Can multiple test suites be driven “completely” from the tool [or from a command line interface], thereby allowing any number of unrelated suites/projects to be executed under a cron-like job or shell (for true unattended operation)?

(b) …including the ability to save the results log, as text, prior to or during termination/exit?

(c) …including the ability to return a reliable pass/fail status on termination/exit?
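Criteria (a) through (c) together describe a driver along these lines: a minimal sketch in Python, assuming the vendor ships a command-line runner (the testtool executable, its subcommand, and its --log flag are all hypothetical) that returns a nonzero exit code on failure.

    import subprocess
    import sys

    SUITES = ["smoke.suite", "regression.suite", "i18n.suite"]

    def main() -> int:
        failed = 0
        for suite in SUITES:
            # Hypothetical vendor CLI: run one suite, saving its log as text
            # before the runner exits -- criterion (b).
            result = subprocess.run(
                ["testtool", "run", suite, "--log", suite + ".log"]
            )
            if result.returncode != 0:   # criterion (c): reliable pass/fail
                failed += 1
        return 1 if failed else 0        # exit status usable by cron/CI

    if __name__ == "__main__":
        sys.exit(main())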


2.16 Testcase Management

Does the tool support some type of test case management facility (either built-in or as an add-on) that allows each test engineer to execute any combination of tests out of the full test suite for a given project? How difficult is it to integrate manual testing results with automated test results?
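If a candidate tool lacks this facility, it helps to know what “any combination of tests” looks like in practice. A minimal tag-based selection scheme in Python (the testcase names and tags are illustrative) shows the idea.

    # Minimal tag-based testcase selection (illustrative only).
    TESTCASES = {
        "test_login":         {"tags": {"smoke", "gui"}},
        "test_report_export": {"tags": {"regression"}},
        "test_french_labels": {"tags": {"regression", "i18n"}},
    }

    def select(requested: set[str]):
        """Yield names of testcases whose tags intersect the requested set."""
        for name, meta in TESTCASES.items():
            if meta["tags"] & requested:
                yield name

    print(list(select({"smoke"})))   # -> ['test_login']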

2.17 Debugging Support

What type of debugging capabilities does the tool support to help isolate scripting and/or runtime errors?

2.18 User Audience

Which of the following groups of users does the tool primarily target?

• Test technicians possess good test mentalities, but often lack much, if any, background in programming or software development methodologies. They are the backbone of many test groups and have often spent years developing and executing manual testcases.

• Test developers possess all of the test technician’s skill set; in addition, they have had some formal training in programming and limited experience working on a software development project and/or automated testcases.

• Test architects possess all of the test developer’s skill set; in addition, they have many years of experience developing and maintaining automated test cases, as well as experience defining and implementing the test framework under which multiple automated test suites are developed. They are recognized experts with at least one automated tool.
