Black Box Testing


Software Testing: Black-box testing

SE 110, Spring 2013

Black-box testing
- Done without knowledge of the internals of the system under test.
- Done from the customer's viewpoint; it involves looking at the specifications only.
- Requires functional knowledge of the product being tested.
- Convenient to administer: the tests use the complete, finished product and require no knowledge of its construction.

Characteristics of black-box testing
- Based on requirements.
- Addresses (or should address) stated as well as implied requirements.
- Encompasses the end-user perspective.
- Checks for valid and invalid conditions/inputs.
- The tester may or may not know the technology behind the product.

Typical errors found in black-box testing
- Incorrect or missing functionality
- Interface errors
- Errors in data structures or database access
- Behavior errors
- Performance errors
- Initialization and termination errors


Black-box vs. white-box testing

Black-box testing:
- Has no access to program code; requires an external perspective.
- A set of techniques applicable to all phases of testing.

White-box testing:
- Has access to program code; requires knowledge of the program code.
- Typically applies only to unit testing, where code is involved.


Black-box testing techniques
- Requirements-based testing
- Positive and negative testing
- Boundary value analysis
- Decision tables
- Equivalence partitioning
- State-based testing
- Compatibility testing
- User documentation testing
- Domain testing (leads to ad hoc testing)

General format for discussion of techniques

- Present some reasoning where applicable.
- List one or two examples.
- Walk through the examples.
- Summarize the process for using the technique.



Requirements-based testing
- Done to ensure that all requirements in the SRS are tested.
- Differentiates between implicit and explicit requirements.
- Review requirements first to ensure they are consistent, correct, complete, and testable.
- Review enables translation of (some of) the implied requirements into stated requirements.
- A reviewed SRS tabulates requirements along with a requirement ID and a priority; this is the genesis of a Requirements Traceability Matrix (RTM).

RTM: Example

Req. ID | Description | Priority | Test conditions | Test case IDs                | Phase of testing
BR-01   |             | High     |                 | Test_001                     | Unit, Component
BR-02   |             | High     |                 | Test_002                     | Unit, Component
BR-03   |             | Medium   |                 | Test_003, Test_004, Test_005 | Integration

- Test condition: different ways of testing a requirement (different types of mapping).
- Test case: different conditions/scenarios for a given requirement.
- The phase of testing helps in scheduling.
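The RTM above can be sketched as a small data structure; a minimal Python sketch (the `untested_requirements` helper and field names are illustrative, not part of the original material):

```python
# A minimal sketch of a Requirements Traceability Matrix (RTM);
# the requirement IDs, priorities, and phases follow the example table above.
rtm = [
    {"req_id": "BR-01", "priority": "High",
     "test_cases": ["Test_001"], "phases": ["Unit", "Component"]},
    {"req_id": "BR-02", "priority": "High",
     "test_cases": ["Test_002"], "phases": ["Unit", "Component"]},
    {"req_id": "BR-03", "priority": "Medium",
     "test_cases": ["Test_003", "Test_004", "Test_005"],
     "phases": ["Integration"]},
]

def untested_requirements(rtm):
    """Return requirement IDs that have no test cases mapped yet."""
    return [row["req_id"] for row in rtm if not row["test_cases"]]

print(untested_requirements(rtm))  # → []
```

A check like this is one way an RTM supports coverage tracking: any requirement with an empty test-case list is a traceability gap.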


Positive and negative testing

Positive testing checks that the product does what it is supposed to do:
- Behaves correctly when given valid inputs.
- Maps to a specific requirement.
- Coverage is better defined.

Negative testing tries to show that the product does not fail when given unexpected inputs:
- Tries to break the system.
- Has no direct mapping to a specific requirement.
- Coverage is more challenging.
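The distinction can be illustrated with a short sketch; `parse_age` and its range limits are hypothetical, invented only to show the two styles of test:

```python
# Hypothetical input validator used to contrast positive and negative tests.
def parse_age(text):
    """Parse an age (0-150) from a string; raise ValueError otherwise."""
    value = int(text)          # raises ValueError on non-numeric input
    if value < 0 or value > 150:
        raise ValueError("age out of range")
    return value

# Positive tests: correct behaviour on valid inputs (maps to a requirement).
assert parse_age("42") == 42
assert parse_age("0") == 0

# Negative tests: the product must reject unexpected inputs gracefully.
for bad in ["-1", "abc", "151", ""]:
    try:
        parse_age(bad)
        raise AssertionError(f"expected rejection of {bad!r}")
    except ValueError:
        pass  # rejected as expected
```

Note how the positive tests each correspond to a stated behaviour, while the negative tests enumerate ways a user might break the system.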


Boundary Value Analysis (BVA)

Most defects show up near boundaries.

Reasons from a white-box perspective:
- Programmers' tentativeness in choosing the right relational operator (< or <=?)
- Multiple ways of implementing loops
- Confusing array subscripts

Reasons from a black-box perspective:
- Unclear requirements
- An ambiguous or "it depends" mindset!

BVA: Example
- A database starts with a pre-allocated number of buffers for caching.
- Buffers are filled as needed.
- When all buffers are full, buffers are freed on a FIFO basis.


BVA: Examples
- Look for any kind of gradation or discontinuity in data values that affects computation; the discontinuities are the boundary values, requiring thorough testing.
- Look for internal limits, such as limits on resources (like the buffer example above). The behavior of the product at these limits should also be the subject of boundary value testing.
- Also include documented limits on hardware resources in the list of boundary values. For example, if the product is documented to run with a minimum of 4 MB of RAM, make sure to include test cases for that minimum (i.e., 4 MB).
- The examples above discuss boundary conditions for input data; the same analysis must be done for output variables as well.
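The buffer example lends itself to a small sketch: a bounded FIFO cache (this `FifoCache` class is an illustrative stand-in for the database's buffer pool, not code from the course), with boundary tests just below, at, and just above capacity:

```python
from collections import OrderedDict

# Hypothetical bounded buffer pool modelled on the slide's example:
# a fixed number of buffers, with the oldest freed first (FIFO) when full.
class FifoCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()

    def put(self, key, value):
        if key not in self._store and len(self._store) == self.capacity:
            self._store.popitem(last=False)   # evict oldest entry (FIFO)
        self._store[key] = value

    def __contains__(self, key):
        return key in self._store

    def __len__(self):
        return len(self._store)

# Boundary value tests around capacity = 3.
cache = FifoCache(3)
for k in range(3):
    cache.put(k, k)
assert len(cache) == 3    # exactly at the boundary: full, nothing evicted
cache.put(3, 3)           # one past the boundary
assert len(cache) == 3    # size stays capped at capacity
assert 0 not in cache     # the oldest buffer was freed first
```

The interesting inputs are precisely the discontinuities: the `capacity`-th insertion behaves differently from the ones before it, so that is where a defect (e.g. `<` vs `<=` in the eviction check) would hide.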


Decision tables
- A program's behavior is characterized by several decision variables.
- Input and output data can be expressed as Boolean conditions (TRUE, FALSE, DON'T CARE).
- Each decision variable specifies a Boolean condition.
- The distinct combinations of these decision variables lead to different scenarios.
- Each scenario occupies a row in the decision table, and the row also records the expected result.
- One representative data point from each scenario needs to be tested.

Decision tables: Example

Taxpayers have a choice of either taking a standard deduction (SD) or itemizing their deductions. Various factors determine the SD:
- Single: $4750
- Married and filing a joint return: $9500
- Married and filing a separate return: $7000
- If the filer or spouse is 65 years or older, an additional SD of $1000 is allowed.
- If the filer or spouse is blind, an additional SD of $1000 is allowed.

Decision tables: Example

Status                          | Status of spouse | Age >= 65 | Spouse >= 65 | Blind | Spouse blind | SD amount
Single                          | ---              | No        | ---          | No    | ---          | $4750
Married, filing separate return | Claimed SD       | No        | ---          | No    | ---          | $7000
Married, filing joint return    | ---              | Yes       | No           | No    | No           | $10,500


Decision tables: Process
1. Identify the decision variables.
2. Identify the possible values of each decision variable.
3. Enumerate the combinations of the allowed values of each variable.
4. Identify the cases in which the values assumed by a variable (or by sets of variables) are immaterial for a given combination of the other input variables. Represent such variables with the Don't Care symbol.
5. For each combination of values of the decision variables (appropriately minimized with the Don't Care scenarios), list the action or expected result.
6. Form a table, listing one decision variable in each column but the last. In the last column, list the action item for the combination of variables in that row (including Don't Cares, as appropriate).
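The standard-deduction example can be sketched as code, with one representative test per decision-table row. The amounts come from the slides; the assumption that each qualifying condition (age, blindness, for filer or spouse) adds $1000 independently is mine, chosen to match the $10,500 row:

```python
# Sketch of the standard-deduction (SD) decision logic from the example.
# Base amounts are from the slides; the additive $1000-per-condition rule
# is an assumption consistent with the $10,500 joint-return row.
def standard_deduction(status, age_65_plus, blind,
                       spouse_65_plus=False, spouse_blind=False):
    base = {"single": 4750,
            "married_joint": 9500,
            "married_separate": 7000}[status]
    extras = 1000 * sum([age_65_plus, blind, spouse_65_plus, spouse_blind])
    return base + extras

# One representative data point per decision-table row.
assert standard_deduction("single", False, False) == 4750
assert standard_deduction("married_separate", False, False) == 7000
assert standard_deduction("married_joint", True, False) == 10500
```

Each assertion corresponds to one row of the table: a single scenario stands in for every concrete input that falls in that row.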



Equivalence partitioning
- A generalization of BVA and decision tables.
- Divide the (potentially infinite) set of values into a set of equivalence classes, or partitions.
- One element of a class can act as a representative for the entire class.
- The result of the test for that one element is extrapolated to all the other elements.

Equivalence partitioning: Basic hypothesis
- Suppose an input domain is divided into equivalence-class partitions P1, P2, ..., PN.
- The behavior of the system is IDENTICAL for all inputs belonging to the same partition Pi.
- From a testing perspective, ONE input from each partition is therefore sufficient.

Equivalence Classes (EqC)
- Single-domain equivalence classes: formed by analyzing only one input domain (e.g., age).
- Multi-domain equivalence classes: multiple domains (e.g., age and gender) are used together. The total number of equivalence classes is the size of the Cartesian product of the individual equivalence classes.
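The Cartesian-product construction for multi-domain classes can be shown in a few lines; the particular class labels below are illustrative, not from the course material:

```python
from itertools import product

# Multi-domain equivalence classes: the combined classes are the
# Cartesian product of the per-domain classes (labels are illustrative).
age_classes = ["minor", "adult", "senior"]
gender_classes = ["male", "female"]

combined = list(product(age_classes, gender_classes))
print(len(combined))   # → 6  (3 age classes x 2 gender classes)
print(combined[0])     # → ('minor', 'male')
```

In practice each combined class needs one representative test input, so the test count grows multiplicatively with the number of domains considered together.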

Single-Domain EqC: Example

Credit-hour requirements for students are given as follows:
- 60 credit hours for M.Tech.
- 60 credit hours for MS by Research
- 72 credit hours for Ph.D.

What are the equivalence classes here?
- 0 to 60 credit hours
- 60 to 72 credit hours (M.Tech., MS by Research)
- More than 72 credit hours (Ph.D.)
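These partitions can be sketched as code with one representative value per class. The `degree_options` function and the treatment of the exact thresholds (whether 60 and 72 fall in the higher class) are my assumptions for illustration:

```python
# Sketch of the single-domain equivalence classes for the credit-hour example.
# Assumption: the thresholds 60 and 72 are inclusive ("at least N hours").
def degree_options(credit_hours):
    """Degrees a student is eligible for, per the example's thresholds."""
    if credit_hours < 60:
        return []
    if credit_hours < 72:
        return ["M.Tech.", "MS by Research"]
    return ["M.Tech.", "MS by Research", "Ph.D."]

# One representative per partition, plus the boundary values themselves.
assert degree_options(30) == []                                      # 0-60 class
assert degree_options(65) == ["M.Tech.", "MS by Research"]           # 60-72 class
assert degree_options(90) == ["M.Tech.", "MS by Research", "Ph.D."]  # >72 class
assert degree_options(60) == ["M.Tech.", "MS by Research"]           # boundary
assert degree_options(72) == ["M.Tech.", "MS by Research", "Ph.D."]  # boundary
```

Note how equivalence partitioning picks one interior point per class, while BVA adds the boundary values 60 and 72; the two techniques complement each other.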


Multi-Domain EqC: Example

processMessage ( messageBuffer )

processMessage is the central component of a message-processing software. The processor is r