Structured Testcase Design


S. Vasant, Member of Technical Staff, Cadence Design Systems
Abhishek Datta, Member of Technical Staff, Cadence Design Systems

ABSTRACT

A structured approach is required in practically all aspects of Software Testing. A methodical approach to testcase design involves a degree of planning and considerations of ease of execution and maintainability. This paper focuses on the implementation and structure of the overall test framework and on how testcases should be designed. It also attempts to share the experiences of the authors and what they consider to be the "best practices" that have been identified in this domain. It addresses issues such as scalability, modularity, version control, robustness, portability and documentation of testcases.

1 Introduction

Software today is getting more and more complex. The traditional ad hoc approach to testing is rapidly being seen as neither efficient nor effective enough to handle this increasing complexity.

Scope

This paper does not discuss what the intent of a testcase should be. A high-level picture of the Test Case Design Process is shown as a flowchart in Fig. 2 (see appendix). We will not elaborate on the process details and will instead concentrate on the structural aspects of testcase design. Note, however, that though the paper is not process-centric, the process is integral to the testcase design activity and familiarity with it is assumed.


2 Test Case Design Considerations

This section discusses design aspects that should help in better structuring of testcases.

2.1 Modularity

It should be understood while designing tests that the Application Under Test (AUT) will undergo a series of radical changes and mysterious transformations in the course of its life cycle. Even if that does not appear to be the case at present, it is a fair assumption to make about the future. Changes in the AUT, the test environment, the platform, et al. place considerable stress on a testcase unit to evolve and change correspondingly. Deciding on a modular design can help reduce the angst of test maintenance.

So what is a modular testcase design? It is usually possible to break up a testcase into component parts. One could do this by asking questions of the kind:
- What is my test harness, viz. the test execution framework?
- What is my "golden" data?
- What are my test sources?
- What are the setup scripts, if any?

The answers to these questions should allow the testcase operation to be broken up into functionally distinct parts. The components are then analyzed for the following properties.

2.1.1 Invariance

Does a component of your test change across:
- other tests in the testsuite?
- other platforms?
- revisions in the functionality of the AUT?


The invariant parts will typically include those components that make up the test harness, e.g. tool setup and result verification tool(s).

2.1.2 Parameterization

To what degree is it possible to parameterize certain attributes of these components?

The benefits of doing the above analysis are manifold:
- Once it is known that certain components of a testcase are invariant across the testsuite, these can be collapsed into a common set and placed at a central location. Those elements that have been deemed parameters can be set at the same central location. This greatly facilitates test maintenance.
- The parameterization of testcase elements and the abstraction of a common set of test features 'force' a consistent testcase design. Testcases are typically added in increments corresponding to the delta changes in the AUT over a period of time, possibly by different individuals, so a consistent design eases the understanding of the testcase structure and the debugging of test failures in regression testing. It is all very well to advocate a standard testcase design in Process Guideline Documents, but it is generally seen that when the overall test framework dictates a set of parameters for a testcase to plug in and execute correctly, testcases are created in a much more consistent manner.
- Characterizing the testcase in terms of discrete functional units helps in classifying which portions of the testcase need to be under version control and which do not. This is described in the subsequent section on Version Control.
- Designing in terms of parameterized modules increases the overall scalability of the test framework. Once the interfaces between the test harness and the tests are clear, it is easier to add new tests, and to change the test harness itself without affecting the functioning of the existing tests.
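As a minimal sketch of this idea, a central parameter file can hold everything that is invariant across the testsuite, so that a leaf testcase reduces to a small parameter binding. The file layout and the names TOOL, GOLDEN_SUFFIX and run_test are our own illustrations, not the authors' actual framework:

```shell
#!/bin/sh
# Central, invariant part of the test harness (imagine this living in a
# shared file such as etc/common.sh). TOOL, GOLDEN_SUFFIX and run_test
# are hypothetical names used only for illustration.
TOOL="simulate"              # command under test, shared by every testcase
GOLDEN_SUFFIX=".golden"      # naming convention for expected results

# Generic runner: a leaf testcase only plugs in its own parameters.
# (The command is echoed rather than executed, to keep the sketch
# self-contained.)
run_test() {
    name=$1; src=$2
    echo "$TOOL $src > $name.log; diff $name.log $name$GOLDEN_SUFFIX"
}

# A leaf testcase now reduces to a one-line parameter binding.
run_test "adder" "adder.v"
```

A change to the tool invocation or to the golden-data naming convention then becomes a one-point change in the central file, which is exactly the maintenance property argued for above.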


However, there is a design decision that needs to be taken when extracting the common set from multiple testcases. It is advantageous at times to have no external dependencies in a testcase. This makes for a more independent and flexible testcase but suffers on all the counts listed above. It is suggested that all external dependencies should be centralized and well documented to minimize the trade-off.

2.2 Version Control

Any software product with a realistic life span undergoes a number of revisions. The test-suites associated with the software are required to do the same. If one had to support the past versions of the software, as most of us have to, this would require maintaining multiple versions of the test-suites as well. However, the overhead of maintaining multiple versions of testware can be minimized by placing the test sources under version control.

Which components are to be placed under version control? The snapshot of a generic testcase during execution will typically have the following components:
- The test sources (design files, source files, input vectors)
- The golden data (expected run results, output vectors)
- The test environment setup/execution scripts, if any
- Intermediate files/entities created during the test run

Of the above elements, the first two must be under version control, as these directly characterize the evolution of the AUT over release cycles. The test environment setup scripts should be placed under version control as well; this allows one to recreate the test environment for a particular release. The entities created or modified during a test run should not be checked in. What are the benefits? All the benefits that are traditionally associated with placing design sources under version control can be obtained for testware as well. To enumerate a couple:


- Recreating the state of the test-suites corresponding to a particular version of the AUT becomes trivial.
- Most version control software provides support for 'branch-and-merge' development. This means that one could maintain a regression branch, consisting of tests that are considered stable, and a development branch, where tests for new features of the AUT are created. The development branch would be merged into the regression branch at the point where the AUT and, correspondingly, the new tests are considered stable.

However, the key to placing testcases under version control is the clean partitioning of the test sources from the intermediate files. This is where a modular approach to test design helps in the long term.
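One crude way to encode this partitioning is a predicate that the harness applies before check-in. The file patterns below are illustrative examples, not the authors' actual rules:

```shell
#!/bin/sh
# Sketch: partition testcase files into "check in" vs "do not check in".
# Sources, golden data and setup scripts are versioned; run logs and
# other intermediates are not. The patterns are illustrative only.
should_checkin() {
    case $1 in
        *.log|*.tmp) echo "no"  ;;  # intermediates created by a test run
        *)           echo "yes" ;;  # sources, golden data, setup scripts
    esac
}

should_checkin "adder.v"       # test source      -> yes
should_checkin "adder.golden"  # expected results -> yes
should_checkin "run.log"       # run log          -> no
```

In practice the same patterns can drive both the check-in filter and the cleanup of the test directory.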

2.3 Maintainability

Maintainability is the ease with which testcases can be maintained throughout the lifecycle of the product, across multiple platforms. Almost every software product evolves continuously from its conception to its end of life, so the testcases must also undergo changes to mirror the changes in the AUT. The structure of a test greatly determines its maintainability: a testsuite built along the lines listed in the modularity and version-control sections would qualify as well maintainable.

There are certain guidelines that should be adhered to during the test creation phase to make testcases easily maintainable.

2.3.1 Documentation

Maintaining an undocumented testcase is a nightmare even for the creator of the testcase. The habit of documenting every component of both the test harness and the individual testcases should be inculcated. At the very least, there should be a "README" file explaining the intent of the testcase.


This file should have references to the Test Plan document, as applicable. It is highly recommended to maintain a revision history for every change that is made to the components of the testcase(s). This would help in tracking problems to their sources.

2.3.2 Conventions

Test Engineers involved in test case creation should follow common conventions. These could be:
a. File naming conventions
b. Automation schemes
c. Version control schemes
d. Test characterization schemes
e. Parameterization

2.3.3 Version Control Scheme

Quality Assurance processes require that regression testing be performed before any software product is released to customers. This would be possible only if separate test case hierarchies are maintained for every version of the software. Maintaining multiple copies of the test case hierarchy is inefficient. A version control system that provides support for multiple branches simplifies testcase development and maintenance. Each branch could be labeled on the basis of the version of the software it tests. New tests added to address new functionality/features in subsequent releases of the software need to be checked in to the corresponding branches only.

2.3.4 Configuration Items for Version Control

Any file that is created during the test run should not be checked in to the version control system. Typical examples are run logs and intermediate files. These intermediate files are specific to that execution and should be removed after completion of the test. The presence of these intermediate components in the versioning system would hamper regression testing (intermediate file formats may change while the end result of the test is invariant). It also increases the redundancy in the test repository. We therefore recommend that cleaning up the test case directory be part of the test run, preferably both before and after the test execution.
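A sketch of that recommendation, with the cleanup step bracketing the run itself (the *.log / *.tmp patterns are again only examples):

```shell
#!/bin/sh
# Sketch: make cleanup part of the test run itself, executed both before
# and after the test so intermediates never linger in the checked-in
# test directory. The *.log / *.tmp patterns are illustrative.
clean_testdir() {
    rm -f "$1"/*.log "$1"/*.tmp
}

dir=$(mktemp -d)
: > "$dir/run.log"       # pretend a previous run left a log behind
clean_testdir "$dir"     # clean BEFORE the run ...
# ... the test itself would execute here ...
clean_testdir "$dir"     # ... and clean AFTER it as well
```

Cleaning before the run protects against stale results from an aborted earlier execution; cleaning after keeps the repository free of redundant files.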


2.3.5 External Dependencies

There should be minimal external dependencies for any testcase. By external dependencies we mean any component of the testcase or the test harness that lies outside the testcase repository, e.g. references to data on a temporary storage location or to remote-mounted resources.

2.4 Portability

Portability is the ease with which testcases can be ported to a new platform. Usually the test creation effort is focused on one platform; later the testcases are ported to work on the other "supported" platforms. During the planning stage, frequently, only the known platforms are considered. The real challenge comes when support for a new Operating System (OS) arrives in a subsequent release of the software. If a modular approach is followed while creating tests, porting to a new platform becomes that much easier.

The modular testcase design approach needs to be applied to partition the platform specific and the platform independent components. This raises the following questions.

What are platform specific components?

The components of the testcase that vary across different platforms are what we term platform specific components. Any component that interacts directly with the OS falls under the platform specific list. The following are typically platform specific:
- Setup scripts used for setting up the test environment
- Automation utilities
- Utilities employed for post-processing and results verification
- Intermediate files created during the test execution

What are platform independent components?


The components of the testcases that do not vary across different platforms are what we term platform independent components. It is primarily the data used or produced by the AUT that is expected to be consistent across all supported platforms. This could be:
a. Source files
b. Input vectors to the AUT
c. "Golden data", i.e. expected results and output vectors

What is the benefit of this partitioning?

The benefits of partitioning out the platform specific testcase components are the following.

Redundancy Removal

By partitioning out the platform specific components from the testcase, we get a subset of components that can be used commonly across various platforms. This avoids the redundancy of maintaining a separate testcase hierarchy for each platform.

Scalability

We create placeholders for any platform specific data; any platform specific data identified in the partitioning process goes into these placeholders. In the process, we make the testcases scalable for execution on any new platform that a future version of the software may require.

Most portability issues can be avoided if the following guidelines are adhered to during the testcase creation phase:
1. File naming conventions should be consistent across all platforms. Any platform dependent file should be identifiable from its name, i.e. file names could carry a suffix that reflects the platform on which it is dependent. Certain platforms, like Windows NT, associate special meaning with files based on their extensions, so care needs to be taken when choosing a convention for naming the platform independent files.
2. Testcases should depend only on scripts and automation utilities that are generally available on all platforms.


3. Environment settings should be parameterized and should require a "one point" change while porting.
4. Testcases should be self-contained entities, with minimal external path dependencies. Hard-coded paths to external dependencies should be avoided; relative paths should be used if hard coding is unavoidable.

Portability of the testcases across various platforms can be enhanced if the AUT supports some inbuilt testability features. For example, if the AUT has an in-built command language interface, this can be used to automate the testcases without any platform specific dependencies.
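Guideline 1 can be sketched as a small suffix lookup that selects the platform specific file at run time. The suffix table below is our own example; the paper only requires that the suffix identify the platform:

```shell
#!/bin/sh
# Sketch of guideline 1: derive a platform suffix from uname and use it
# to pick the platform specific Makefile. The suffix strings (lnx, sol,
# hpux) are illustrative, not the authors' actual convention.
platform_suffix() {
    case $1 in
        Linux) echo "lnx"  ;;
        SunOS) echo "sol"  ;;
        HP-UX) echo "hpux" ;;
        *)     echo "generic" ;;
    esac
}

# On a Linux host this selects Makefile.lnx, leaving the platform
# independent files (sources, golden data) untouched.
echo "Makefile.$(platform_suffix "$(uname -s)")"
```

Adding support for a new OS then reduces to adding one entry to the table and one platform specific file, with no change to the tests themselves.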

The availability of the version control software on the various platforms also affects portability. Non-availability of the version control system on a platform is a potential threat to the maintenance of the test hierarchy on that platform. In such cases, a snapshot of the test hierarchy could be made available outside the version control system and used for the testing on that platform. The snapshot test hierarchy then needs to be maintained manually to keep the testsuites synchronized. This approach is semi-automatic and prone to errors.
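Such a snapshot can be produced by archiving the hierarchy while filtering out run intermediates. This sketch uses tar as a stand-in (the authors' environment used a version control system's own export facilities, so this is only an approximation):

```shell
#!/bin/sh
# Sketch: export a snapshot of the test hierarchy for a platform that
# has no version control client, excluding run intermediates. In a real
# setup, version control metadata would be excluded the same way.
src=$(mktemp -d)
mkdir -p "$src/tests/adder"
: > "$src/tests/adder/adder.v"   # versioned test source: ship it
: > "$src/tests/adder/run.log"   # intermediate: must not be shipped

snapshot=$(mktemp -d)/tests_snapshot.tar
( cd "$src" && tar -cf "$snapshot" $(find tests -type f ! -name '*.log') )
tar -tf "$snapshot"
```

The snapshot contains only the versioned sources; keeping it synchronized with the repository remains a manual step, which is the weakness noted above.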

3 Our Experience

The elements of Structured Testcase Design can be explained with an example. The modular approach followed by the authors during the creation of testcases for the 'NC-Verilog Simulator' is presented in this section.


The skeletal structure shown above (figure 1) depicts a broad picture of the NC-Verilog (AUT) test hierarchy. Makefiles are used for automating the testcases.

3.1 Modularity

The common elements from the leaf-level testcases are abstracted out and placed at the top level. The header files, which are common to every testcase, are placed in the "include" directory. Makefiles in the leaf-level testcases are parameterized to a great extent. The parameters used in these Makefiles are defined in the "Makefile_root" file in the top-level "etc/" directory.

3.2 Version Control

The "tests/" directory has the leaf-level testcases; the "etc/" and "include/" directories have common test data that needs to be maintained for the entire life cycle of the AUT. Apart from the above-mentioned common test data, the "README" file and the "exe/" directory also need to be under version control.
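Tying the directories named in this section together, a skeleton of the hierarchy can be sketched as follows (an approximation only; the actual layout in figure 1 may differ in detail):

```shell
#!/bin/sh
# Sketch of the test hierarchy described in the text: leaf tests under
# tests/, shared data in etc/ and include/, platform specific pieces in
# exe/, and helper scripts in scripts/.
root=$(mktemp -d)
mkdir -p "$root/tests" "$root/etc" "$root/include" "$root/exe" "$root/scripts"
: > "$root/README"               # summary of the hierarchy + env setup
: > "$root/etc/Makefile_root"    # central parameter definitions
ls "$root"
```

Everything except "scripts/" would then be placed under version control, as described in the surrounding sections.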

3.3 Maintainability

The "README" file at the top has the summary of the test hierarchy. It also discusses how to set the environment for executing the testcases. The "README" files in leaf-level test directories are used to describe the intent of the testcases. Revision history of the changes made to any of the test components is maintained to make tracking easier. The whole test hierarchy excluding the "scripts/" directory is maintained under Clearcase Version Control System. The tests are available on multiple branches with a unique branch corresponding to every AUT release. The Branch and Merge features of Clearcase help us in maintaining these testcases across multiple releases on four different platforms. Since the test components are highly parameterized, any change that affects all the testcases could be implemented by making the necessary change in the top level "Makefile_root" file.

3.4 Portability

When these testcases were created, NC-Verilog was only supported on Solaris, HP and Windows platforms. We identified the platform specific data and moved that into the "exe/" directory. There are platform specific Makefiles (suffixed with a unique platform identifier), which would be used for executing the platform specific part of the tests. When we were required to port the same testcases to Linux, it was easily done by adding a Linux specific Makefile.

4 Conclusions

Our experience with this approach to Structured Testcase Design has been very positive. This approach facilitates test development, execution and maintenance. It allows the Test Engineer to create new tests much faster. The Test Engineer can focus more on what to test rather than worry about implementation details. Uniformity in test structure allows automation of tests. It also helps when testcases designed by one person have to be maintained by another. These factors contribute to reduced cycle times in test development and maintenance, thereby increasing the productivity of the test team overall.


APPENDIX

