ISTQB Advanced Level Exam


    ISTQB Advanced CTAL Exam Study Guide (Part 1)

    Q. 1: What is Configuration management?

    Software configuration management encompasses the disciplines and techniques of initiating,

evaluating, and controlling change to software products during and after the development process. It emphasizes the importance of configuration control in managing software production.

    Configuration management is an integral part of the software development process across all phases of

    the life cycle. It functions as a controlling discipline, enabling changes to be made to existing

    documentation and products in such a way as not to destroy the integrity of the software. Since

    configuration management extends over the life of the product, and since tools, techniques, and

standards exist that are aimed solely at its proper execution, configuration management can stand alone as a

    module within a graduate curriculum.

    >

    Q. 2: What are the Requirements for the Success of Configuration Management?

    The key requirement for success of configuration management is the commitment of all levels of

    management to enforcing its use throughout the project lifetime. Configuration management, like other

    parts of software engineering perceived as being tedious, may require some coercion for success. A

    further requirement is the availability of a clearly stated configuration management plan.

    >

    Q. 3: How can we say that Configuration Management is a Cost Saving Tool?

    By helping to maintain product integrity, configuration management reduces overall software

    development costs. Cost savings during a particular phase of the life cycle depend on the depth of

    application of configuration management. For instance, controlling individual source code modules costs

    more than only controlling the fully integrated product, but should result in overall savings due to

    reduction in side effects from individual changes. At this time, however, there are no quantitative

    measures sufficiently well developed to document the cost savings. This is largely because the losses

    due to lack of configuration management do not occur, and thus cannot be measured.

    >

    Q. 4: What are the Requirements for the Success of Configuration Management?

    The key requirement for success of configuration management is the commitment of all levels of

    management to enforcing its use throughout the project lifetime. Configuration management, like other

    parts of software engineering perceived as being tedious, may require some coercion for success. A

    further requirement is the availability of a clearly stated configuration management plan.


    >

    Q. 5: What are the Configuration Items?

    A configuration item is a document or artifact explicitly placed under configuration control. The

    minimum number of controlled items in a software project is whatever may be needed to effectively

    maintain and enhance the product. These may include requirements, specification, and design

    documents, source code, test plans, user and maintenance manuals, interface control documents,

    memory maps, and others such as procedural or policy documents. The actual items under control vary

    with the needs of the project, and certain items may be waived at specific points in the life cycle.

    Remember that there are time and cost tradeoffs associated with the number and level of items under

    control.

    >

    Q. 6: How many kinds of discrepancies can be identified in discrepancy reports?

    1) Requirements Errors: This type of discrepancy is an error in the requirements. Either the customer or

    marketing did not fully or clearly express the requirements, or incorrect information was given.

2) Development Errors: Another type of discrepancy is an error made during development. This means

    that a correct requirement was improperly implemented. Development errors occur between the time

    the requirements are baselined and the time the product is turned over to the customer or to

    marketing.

    3) Violations of Standards: Yet another type of discrepancy is a violation of development standards,

    either the company standard or a customer standard in effect due to contract.

    >

Q. 7: Describe the types of changes that can be requested?

    Change requests are treated largely like discrepancy reports. There are three kinds of changes that may

    be requested.

    1) Unimplementable Requirements: One reason for a change request is that a requirement turns out to

    be unimplementable through resource constraints identified by the requester. Another reason is that a

    "bad" implementation makes meeting all requirements impossible.

    2) Enhancements: Enhancements are change requests that involve additional requirements.

    3) Improvements: Improvements are change requests that will improve the product, though not in

    terms of functionality or performance. An example would be a request to rewrite a block of code to

increase its understandability.


    >

    Q. 8: What is the most serious problem encountered during configuration management?

    One of the most serious configuration management problems is that of simultaneous update, when two

    or more programmers are modifying the same portion of code. There is a distinct possibility that one

person's changes will cancel or distort another person's, thus causing a software failure. Checking out

    code and other documents for modification must be handled by mutual exclusion, either manually or

    automatically (using version control software).

    >

    Q. 9: What is the relationship between quality assurance and the Software Life-cycle?

    The function of Software Quality Assurance interacts to some degree with each phase of every software

development process. Planning should occur in the initial phases of a software project and should address the methods and techniques to be used in each phase. A description of every product resulting

    from a phase and the attributes desired of each product should be defined in order to provide a basis for

    objectively identifying satisfactory completion of the phase.

    >

    Q. 10: What are the factors having large impact on the software quality assurance program?

    # Schedule requirements

# Available budget

# Technical complexity of the software product

    # Anticipated size of the software product

    # Relative experience of the labor pool

    # Available resources

    # Contract requirements


    ISTQB Advanced CTAL Exam Study Guide (Part 2)

    Q. 11: What are Configuration audits?

    Final acceptance of a software product is frequently based on completing a set of configuration audits.

    These audits ensure that the product has satisfactorily met all of its applicable requirements.

1) Functional Configuration Audit: The primary purpose of the Functional Configuration Audit is to

    ensure that the product that was tested to demonstrate compliance with contract requirements is

    essentially the same as the product that will be delivered. Conducting software tests frequently takes

    months or even years, during which time the software item being tested may undergo revisions and

    modifications. The Functional Configuration Audit should ensure that none of these revisions adversely

    affects the results of previous tests.

2) Physical Configuration Audit: The primary purpose of the Physical Configuration Audit is to ensure

    that all of the requirements of the contract have been satisfied, with special emphasis on the

    documentation and data delivery requirements. This audit usually is performed after the Functional

    Configuration Audit has demonstrated that the item functions properly.

    >

    Q. 12: What are the activities covered under software Requirements Phase?

    The activities and products of the software requirements phase should be examined throughout the

    conduct of this phase. This examination should evaluate the following:

# Software development plan

# Software standards and procedures manual

    # Software configuration management plan

    # Software quality program plan

    # Software requirements specification

    # Interface requirements specification

    # Operational concept document

    >

    Q. 13: What are the activities covered under software Preliminary Design phase?

    The activities and products of the software preliminary design phase should be examined throughout

    the conduct of this phase. This examination should consist of the following evaluations:

    # All revised program plans

    # Software top level design document

    # Software test plan


# Operator's manual

# User's manual

    # Diagnostic manual

    # Computer resources integrated support document

    >

    Q. 14: What are the activities covered under software Detailed Design phase?

    The activities and products of the software detailed design phase should be examined throughout the

    conduct of this phase. This examination should consist of the following evaluations:

    # All revised program plans

    # Software detailed design document

    # Interface design document

    # Database design document

    # Software development files

    # Unit test cases

    # Integration test cases

    # Software test description

# Software programmer's manual

    # Firmware support manual

    # All revised manuals

    # Computer resources integrated support document

    >

    Q. 15: What are the activities covered under the software Coding & Unit Testing phase?

    The activities and products of the software coding & unit testing phase should be examined throughout

    the conduct of this phase. This examination should consist of the following evaluations:

    # All revised program plans

    # Source code

    # Object code

    # Software development folders

# Unit test procedures

# Unit test results

    # All revised description documents

    # Integration test procedures

    # Software test procedure

    # All revised manuals


    >

    Q. 16: What are the activities covered under the Integration and Testing phase?

The activities and products of the integration & testing phase should be examined throughout the conduct of this phase. This examination should consist of the following evaluations:

    # All revised program plans

    # Integration test results

    # All revised description documents

    # Revised source code

    # Revised object code

    # Revised software development files

    # Software test procedures

    # All revised manuals

    >

    Q. 17: What are the activities covered under the System Testing phase?

    The activities and products of the system testing phase should be examined throughout the conduct of

    this phase. This examination should consist of the following evaluations:

    # All revised program plans

    # System test report

# All revised description documents

# Revised source code

    # Revised object code

    # Revised software development files

    # Software product specification

    # Version description document

    # All manuals

    >

    Q. 18: What are the various types of evaluations for software product?

    The following types of evaluations apply to almost every software product.

    The software quality assurance program plan specifies which products are evaluated, and which

    evaluations are performed on those products.

    # Adherence to required format and documentation standards


    # Compliance with contractual requirements

    # Internal consistency

    # Understandability

    # Traceability to indicated documents

    # Consistency with indicated documents

    # Appropriate requirements analysis, design, coding techniques used to prepare item

    # Appropriate allocation of sizing, timing resources

    # Adequate test coverage of requirements

    # Testability of requirements

    # Consistency between data definition and use

    # Adequacy of test cases, test procedures

    # Completeness of testing

    # Completeness of regression testing

    >

    Q. 19: What are the constituents of an effective error reporting system?

    No matter what software engineering techniques are used, errors seem to be a fact of life. Maintaining

an effective error reporting system, however, will help minimize the potential impact of software errors.

    Every software development project should establish an error reporting system even if it consists of

notes scribbled on the back of a napkin. It takes valuable resources to detect each error, and those

resources are wasted if they must be spent again locating an error that had previously been detected.

    An effective error reporting system is the one that addresses the following areas.

    1. Identification of Defect

    2. Analysis of Defect

    3. Correction of Defect

    4. Implementation of Correction

    5. Regression Testing

    6. Categorization of Defect

    7. Relationship to Development Phases

    >

    Q. 20: How do we categorize defects?

    Errors can frequently be grouped into categories, which will allow future data analysis of errors

    encountered. The most efficient time to categorize them is usually as they are resolved while the

    information is still fresh. Possible classifications for error categorization are


    # Error type: Requirements, design, code, test, etc.

    # Error priority: No work around available, work around available, cosmetic.

    # Error frequency: Recurring, non-recurring.

    ISTQB Advanced CTAL Exam Study Guide (Part 3)

    Q. 21: What is the underlying objective of Regression Testing?

    Retesting the affected function is necessary after the change is incorporated since as many as 20

    percent of all corrections result in additional errors. Frequently, additional functions will need to be

tested to ensure that no latent defects were induced by the correction. If such defects are found, one method of resolution would

be to treat them as new errors and initiate a new error report. The description of regression testing

    should include:

    # A list of test paragraphs/objectives retested

# The version of software used to perform regression test

# Indication of successful/unsuccessful accomplishment of test.

    >

    Q. 22: What is the utility of an Error Frequency Report?

    Error frequency charts report the quantity of errors per unit of software product. The unit used may be

    a section of a requirements specification, a test procedure paragraph, a source code program unit, or

any other objectively identifiable component of the software. The utility of an error frequency report is

based on the Pareto Principle of non-homogeneous error distribution. If errors are non-homogeneously

distributed in the product to be examined, then units with high detected error frequencies will probably also have a larger-than-normal number of latent errors.

    >

    Q. 23: What is the Adequacy Criteria for testing?

Verification of software, by testing or other means, is quite indirect. It is necessary to judiciously

    constrain the verification process. Conditions that are required to be satisfied during testing are called

    adequacy criteria. For example, testing may be considered inadequate if the test data do not include

    boundary cases specified by the requirements, do not cause execution of every line of code, or do not

    cause the software to deal with error-prone situations.

    The intent in establishing these criteria is to improve the quality of the testing. As such, adequacy

    criteria serve a purpose somewhat akin to software development standards. Adequacy criteria act as

    both specifier and judge: as specifier by indicating the constraints that must be satisfied by the testing,

    and as judge by indicating deficiencies in a particular test. Adequacy criteria for testing are generally

    expressed by stating some required coverage the test data should achieve. Desirable coverages include

    the required features, the software structure, or the potential errors that might occur in the life cycle.


    >

    Q. 24: What are the limitations of testing?

Some problems cannot be solved on a computer because they are either intractable or undecidable. An

    undecidable problem is one for which no algorithmic solution is possible. In general, programs cannot be

    exhaustively tested (tested for each input). Huang shows that to test exhaustively a program that reads

    two 32-bit integers would take on the order of 50 billion years. Even if the input space is smaller, on the

    very first input it may be the case that the program does not halt within a reasonable time. It may even

    be the case that it is obvious the correct output will be produced if the program ever does halt.

    Exhaustive testing can only be completed, therefore, if all non-halting cases can be detected and

    eliminated.
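
A quick back-of-the-envelope check of the size of that figure (the assumed rate of ten tests per second is our own illustrative number, not one given in the guide):

# Rough arithmetic behind the exhaustive-testing estimate for two 32-bit inputs.
inputs = 2 ** 64                        # every pair of two 32-bit integers
seconds = inputs / 10                   # at an assumed 10 tests per second
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.1e} years")             # roughly 5.8e10, i.e. tens of billions of years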

    Another limitation on the power of testing is its reliance on an oracle. An oracle is a mechanism that

    judges whether or not a given output is correct for a given input. In some cases, no oracle may be

available, e.g., when the program is written to compute an answer that cannot, in practice, be computed

by hand. Imperfect oracles may be available, but their use is risky. The absence of an oracle, or the presence of an imperfect oracle, significantly weakens any conclusions drawn from testing.

    >

    Q. 25: What is Fault Seeding?

    Fault seeding is a statistical method used to assess the number and nature of the faults remaining in a

    program. First, faults are seeded into a program. Then the program is tested, and the number of faults

    discovered is used to estimate the number of faults yet undiscovered. A difficulty with this technique is

    that the faults seeded must be representative of the yet-undiscovered faults in the program.
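
A commonly cited estimator associated with fault seeding, shown here with made-up numbers purely for illustration (neither the formula nor the figures come from the guide itself):

# Illustrative fault-seeding (error seeding) estimate.
seeded_total = 20        # faults deliberately seeded into the program
seeded_found = 15        # seeded faults rediscovered during testing
indigenous_found = 30    # genuine (non-seeded) faults found during testing

# Assuming seeded and indigenous faults are found with equal ease:
estimated_indigenous_total = indigenous_found * seeded_total / seeded_found
estimated_remaining = estimated_indigenous_total - indigenous_found
print(estimated_indigenous_total, estimated_remaining)   # 40.0 estimated in total, 10.0 still latent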

    >

    Q. 26: What is Mutation Analysis?

    Mutation analysis uses fault seeding to investigate properties of test data. Programs with seeded faults

    are called mutants. Mutants are executed to determine whether or not they behave differently from the

    original program. Mutants that behave differently are said to have been killed by the test. The product

    of mutation analysis is a measure of how well test data kill mutants. Mutants are produced by applying a

    mutation operator. Such an operator changes a single expression in the program to another expression,

    selected from a finite class of expressions. For example, a constant might be incremented by one,

decremented by one, or replaced by zero, yielding one of three mutants. Applying the mutation operators at each point in a program where they are applicable forms a finite, albeit large, set of

    mutants.
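
A toy illustration of a single mutation operator and of test data that kill the resulting mutant (the function and inputs are our own example):

# Original code and one mutant produced by replacing the constant 1 with 0.
def original(x):
    return x + 1

def mutant(x):
    return x + 0          # mutation operator: constant replaced by zero

# The test data kill this mutant because the two versions behave differently on it.
test_inputs = [0, 5]
killed = any(original(x) != mutant(x) for x in test_inputs)
print(killed)             # True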

    >

    Q. 27: What are the conditions necessary for a fault to cause a program failure?

    Three conditions necessary and sufficient for a fault to cause a program failure are execution, infection,


    and propagation. The fault location must be executed, the resulting data state must be infected with an

    erroneous value, and the succeeding computation must propagate the infection through erroneous data

    states, producing a failure.

    Sensitivity analysis investigates the three conditions required for failure, with particular focus on

    infection and propagation of errors.

Infection analysis employs mutation analysis to determine the probability of a data state's being

    infected after a potentially faulty statement is executed.

    Propagation analysis mutates the data state to determine the probability that an infected data state will

    cause a program failure.

    >

    Q. 28: What is Symbolic Analysis?

Symbolic analysis seeks to describe the function computed by a program in a more general way. A symbolic execution system accepts three inputs: a program to be interpreted, symbolic input for the

    program, and the path to follow. It produces two outputs: the symbolic output that describes the

    computation of the selected path, and the path condition for that path. The specification of the path can

    be either interactive or pre-selected. The symbolic output can be used to prove the program correct

    with respect to its specification, and the path condition can be used for generating test data to exercise

    the desired path. Structured data types cause difficulties, however, since it is sometimes impossible to

    deduce what component is being modified.
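
A small hand-worked illustration of symbolically executing one path (the program, path, and test datum are our own example, not drawn from the guide):

# Toy program whose "true" path we execute symbolically by hand.
def f(x, y):
    if x > y:
        return x - y
    return y - x

# For symbolic inputs X and Y, following the true branch gives:
#   path condition:  X > Y
#   symbolic output: X - Y
# Any concrete input satisfying the path condition exercises that path:
assert f(5, 3) == 5 - 3   # (X, Y) = (5, 3) satisfies X > Y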

    >

    Q. 29: What is Specification Oriented Testing?

    Program testing is specification-oriented when test data are developed from documents and

understandings intended to specify a module's behavior. Sources include, but are not limited to, the

    actual written specification and the high- and low-level designs of the code to be tested. The goal is to

    test for the presence of each (required) software feature, including the input domains, the output

    domains, categories of inputs that should receive equivalent processing, and the processing functions

    themselves.

    Specification-oriented testing seeks to show that every requirement is addressed by the software. An

    unimplemented requirement may be reflected in a missing path or missing code in the software.

Specification-oriented testing assumes a functional view of the software and sometimes is called functional or black-box testing.

    >


    Q. 30: What is Input Domain Testing?

    It is the testing based on the interface of the module. In extremal testing, test data are chosen to cover

    the extremes of the input domain. Similarly, midrange testing selects data from the interiors of domains.

    For structured input domains, combinations of extremal points for each component are chosen. This

    procedure can generate a large quantity of data.

    ISTQB Advanced CTAL Exam Study Guide (Part 4)

    Q. 31: What is Equivalence partitioning?

    Specifications frequently partition the set of all possible inputs into classes that receive equivalent

    treatment. Such partitioning is called equivalence partitioning. A result of equivalence partitioning is the

    identification of a finite set of functions and their associated input and output domains.

    Input constraints and error conditions can also result from this partitioning. Once these partitions have

    been developed, both extremal and midrange testing are applicable to the resulting input domains.

    Equivalence partitioning can be compared with random testing on the basis of statistical confidence in

    the probability of failure after testing is complete.
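
A short illustration of equivalence partitioning combined with extremal and midrange test data (the pricing rule and the chosen values are our own hypothetical example):

# Hypothetical ticket-pricing rule used only to illustrate the partitions.
def ticket_price(age):
    if age < 0:
        raise ValueError("invalid age")   # error-condition partition
    if age < 18:
        return 5                          # partition: child
    if age < 65:
        return 10                         # partition: adult
    return 7                              # partition: senior

midrange = [9, 40, 80]                    # one interior value per partition
extremal = [0, 17, 18, 64, 65]            # boundary values of the partitions
for age in midrange + extremal:
    print(age, ticket_price(age))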

    >

    Q. 32: What is Syntax checking?

    Every robust program must parse its input and handle incorrectly formatted data. Verifying this feature

    is called syntax checking. One means of accomplishing this is to execute the program using a broad

spectrum of test data. By describing the data with a BNF grammar, instances of the input language can be generated using algorithms from automata theory, and systems have been described that provide limited control

over the data to be generated.
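
A minimal sketch of generating syntactically valid inputs from a grammar (the toy grammar and generator are our own illustration, not a tool described in the guide):

import random

# Toy BNF-style grammar for simple arithmetic expressions.
grammar = {
    "<expr>": [["<num>"], ["<expr>", "+", "<num>"], ["<expr>", "*", "<num>"]],
    "<num>":  [["0"], ["7"], ["42"]],
}

def generate(symbol="<expr>", depth=0):
    """Expand a nonterminal into a random sentence of the input language."""
    if symbol not in grammar:
        return symbol                                   # terminal symbol
    # Force the shortest production once expansion gets deep, so it terminates.
    options = grammar[symbol][:1] if depth > 3 else grammar[symbol]
    return " ".join(generate(s, depth + 1) for s in random.choice(options))

for _ in range(5):
    print(generate())   # candidate inputs for exercising a parser's syntax checking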

    >

    Q. 33: What is Special value testing?

    Selecting test data on the basis of features of the function to be computed is called special value testing.

    This procedure is particularly applicable to mathematical computations. Properties of the function to be

    computed can aid in selecting points that will indicate the accuracy of the computed solution.

For example, the periodicity of the sine function suggests use of test data values that differ by multiples of 2 Pi. Such characteristics are not unique to mathematical computations.
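
A small special-value test along these lines (the sample points and tolerance are our own choices):

import math

# Special value testing of a sine routine using its known periodicity.
for x in [0.0, 1.0, math.pi / 3, 2.5]:
    assert abs(math.sin(x) - math.sin(x + 2 * math.pi)) < 1e-9

# Other known properties make similarly good special values, e.g. sin(pi/2) == 1.
assert abs(math.sin(math.pi / 2) - 1.0) < 1e-12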

    >

    Q. 34: What are the Decision Tables?

    Decision tables are a concise method of representing an equivalence partitioning. The rows of a decision


    table specify all the conditions that the input may satisfy. The columns specify different sets of actions

    that may occur. Entries in the table indicate whether the actions should be performed if a condition is

satisfied. Typical entries are "Yes," "No," or "Don't Care." Each row of the table suggests significant test

    data. Cause-effect graphs provide a systematic means of translating English specifications into decision

    tables, from which test data can be generated.
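
A compact sketch of a decision table written as data, together with the test cases it suggests (the login rules are our own hypothetical example, not one from the guide):

# Hypothetical decision table for a login feature.
conditions = ["valid user id", "valid password", "account locked"]
rules = [
    # (entry per condition: True / False / None for "Don't Care", resulting action)
    ((True,  True,  False), "grant access"),
    ((True,  True,  True),  "reject: account locked"),
    ((True,  False, None),  "reject: bad password"),
    ((False, None,  None),  "reject: unknown user"),
]

# Each rule suggests at least one significant test case.
for entries, action in rules:
    print(dict(zip(conditions, entries)), "->", action)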

    >

    Q. 35: What is Implementation-oriented testing?

    In implementation-oriented program testing, test data selection is guided by information derived from

    the implementation [Howden75]. The goal is to ensure that various computational characteristics of the

    software are adequately covered. It is hoped that test data that satisfy these criteria have higher

    probability of discovering faults. Each execution of a program executes a particular path. Hence,

    implementation oriented testing focuses on the following questions: What computational characteristics

    are desirable to achieve? What paths for this program achieve these characteristics? What test data will

    execute those paths? What are the computational characteristics of the set of paths executed by a given

    test set?

    Implementation oriented testing addresses the fact that only the program text reveals the detailed

    decisions of the programmer. For the sake of efficiency, a programmer might choose to implement a

    special case that appears nowhere in the specification. The corresponding code will be tested only by

    chance using specification-oriented testing, whereas use of a structural coverage measure such as

    statement coverage should indicate the need for test data for this case.

    Implementation-oriented testing schemes may be classified according to two orthogonal axes: 1) Error

    orientation and 2) Program view.

A testing scheme's error orientation is the aspect of fault discovery that is emphasized: execution, infection, or propagation. A testing scheme's program view is the program abstraction source that is

    used to determine desirable computational characteristics: control flow, data flow, or computation flow.

    Program view emphasizes how a particular strategy works; error orientation emphasizes the motivation

    behind the strategy and helps one to better evaluate claims made about the strategy.

    >

    Q. 36: What is Structure-oriented testing?

    A testing technique is structure-oriented if it seeks test data that cause various structural aspects of the

program to be exercised. Assessing the coverage achieved may involve instrumenting the code to keep track of which parts of the program are actually exercised during testing. The low cost of such

    instrumentation has been a prime motivation for adopting structure-oriented techniques. Further

    motivation comes from consideration of the consequences of releasing a product without having

    executed all its parts and having the customer discover faults in untested code.

The three essential components covered in structure-oriented testing are:

computations, branches, and data.


    >

    Q. 37: What is Statement testing?

    Statement testing requires that every statement in the program be executed. While it is obvious that

    achieving 100% statement coverage does not ensure a correct program, it is equally obvious that

    anything less means that there is code in the program that has never been executed.

    >

    Q. 38: What is Branch testing?

    Achieving 100% statement coverage does not ensure that each branch in the program flow graph has

    been executed. For example, executing an if... then statement (no else) when the tested condition is

    true, tests only one of two branches in the flow graph. Branch testing seeks to ensure that every branch

has been executed. Branch coverage can be checked by probes inserted at points in the program that represent arcs from branch points in the flow graph. This instrumentation suffices for statement

    coverage as well.
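
A small illustration of the difference between statement and branch coverage (the function and test values are our own example):

def classify(x):
    result = "non-negative"
    if x < 0:                 # if...then with no else: two branches in the flow graph
        result = "negative"
    return result

# One test executes every statement, giving 100% statement coverage:
assert classify(-1) == "negative"
# ...but the false branch of the if has never been taken; branch coverage needs a second test:
assert classify(0) == "non-negative"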

    >

    Q. 39: What is Data coverage testing?

    In some programs, a portion of the flow control is determined by the data, rather than by the code.

    Knowledge-based applications, some AI applications, and table-driven code are all examples of this

    phenomenon. Data coverage testing seeks to ensure that various components of the data are

    "executed," i.e., they are referenced or modified by the interpreter as it executes. Paralleling statementtesting, one can ensure that each data location is accessed. Furthermore, in the area of knowledge

    bases, data items can be accessed in different orders, so it is important to cover each of these access

    orders. These access sequences are analogous to branch testing.

    >

    Q. 40: What is Infection-oriented testing?

    A testing technique is considered Infection-oriented if it seeks to establish conditions suitable for

    infections to arise at locations of potential faults. Following are the testing techniques that require test

    data to force infections if faults exist.

    1) Conditional testing

    2) Expression testing

    3) Domain testing

    4) Perturbation testing

    5) Fault sensitivity testing


    Path coverage does not imply condition coverage or expression coverage, since an expression may

    appear on multiple paths but some subexpressions may never assume more than one value.

    >

    Q. 48: What is Compiler-based testing?

In compiler-based testing, input-output pairs are encoded as a comment in a procedure, as a partial

    specification of the function to be computed by that procedure. The procedure is then executed for

    each of the input values and checked for the output values. The test is considered adequate only if each

    computational or logical expression in the procedure is determined by the test; i.e., no expression can

    be replaced by a simpler expression and still pass the test.

    Simpler is defined in a way that allows only finitely many substitutions. Thus, as the procedure is

    executed, each possible substitution is evaluated on the data state presented to the expression. Those

    that do not evaluate the same as the original expression are rejected. Substitutions that evaluate the

    same, but ultimately produce failures, are likewise rejected.

    >

    Q. 49: What is Data Flow Testing?

    Data flow analysis can form the basis for testing exploiting the relationship between points where

    variables are defined and points where they are used. By insisting on the coverage of various definition-

    use pairs, data flow testing establishes some of the conditions necessary for infection and partial

    propagation. The motivation behind data flow testing is that test data are inadequate if they do not

exercise these various definition-use combinations. It is clear that an incorrect definition that is never

    used during a test will not be caught by that test. Similarly, if a given location incorrectly uses a

    particular definition, but that combination is never tried during a test, the fault will not be detected.

    Data flow connections may be determined statically or dynamically. Some connections may be infeasible

    due to the presence of infeasible subpaths. Heuristics may be developed for generating test data based

    on data flow information.
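
A brief illustration of definition-use pairs and test data that exercise them (the code and inputs are our own example):

def absolute_difference(a, b):
    d = a - b       # definition of d
    if d < 0:       # use of d in a predicate
        d = -d      # redefinition of d
    return d        # use of d in a computation

# Data flow testing asks for test data that cover each feasible definition-use pair:
assert absolute_difference(7, 2) == 5   # covers (d = a - b, return d)
assert absolute_difference(2, 7) == 5   # covers (d = a - b, d < 0) and (d = -d, return d)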

    >

    Q. 50: What is Mutation testing?

    Mutation testing uses mutation analysis to judge the adequacy of test data. The test data are judged

    adequate only if each mutant is either functionally equivalent to the original program or computes

    output different from the original program on the test data. Inadequacy of the test data implies that

    certain faults can be introduced into the code and go undetected by the test data.


    Q. 53: What is Fault-based testing?

    Fault-based testing aims at demonstrating that certain prescribed faults are not in the code. Fault-based

    testing methods differ in both extent and breadth. One with local extent demonstrates that a fault has a

    local effect on computation; it is possible that this local effect will not produce a program failure. A

    method with global extent demonstrates that a fault will cause a program failure. Breadth is determined

    by whether the technique handles a finite or an infinite class of faults. Extent and breadth are

    orthogonal. Infection- and propagation-oriented techniques could be classified as fault-based if they are

    interpreted as seeking to demonstrate the absence of particular faults. Infection-oriented techniques

    are of local extent.

    Morell has defined a fault-based method based on symbolic execution that permits elimination of

    infinitely many faults through evidence of global failures. Symbolic faults are inserted into the code,

    which is then executed on real or symbolic data. Program output is then an expression in terms of the

symbolic faults. It thus reflects how a fault at a given location will impact the program's output. This

    expression can be used to determine actual faults that could not have been substituted for the symbolic

    fault and remain undetected by the test.

    >

    Q. 54: What is a software life cycle model?

A software life cycle model is either a descriptive or prescriptive characterization of software evolution.

    Typically, it is easier to articulate a prescriptive life cycle model for how software systems should be

    developed. This is possible since most such models are intuitive. This means that many software

development details can be ignored, glossed over, or generalized. This, of course, should raise concern for the relative validity and robustness of such life cycle models when developing different kinds of

    application systems in different kinds of development settings. Descriptive life cycle models, on the

    other hand, characterize how software systems are actually developed. As such, they are less common

    and more difficult to articulate for an obvious reason: one must observe or collect data throughout the

    development of a software system, a period of elapsed time usually measured in years. Also, descriptive

    models are specific to the systems observed, and only generalizable through systematic analysis.

    Therefore, this suggests the prescriptive software life cycle models will dominate attention until a

    sufficient base of observational data is available to articulate empirically grounded descriptive life cycle

    models.

    >


    Q. 55: How can we use software life cycle models?

    Some of the ways these models can be used include:

    1) To organize, plan, staff, budget, schedule and manage software project work over organizational

    time, space, and computing environments.

    2) As prescriptive outlines for what documents to produce for delivery to client.

    3) As a basis for determining what software engineering tools and methodologies will be most

    appropriate to support different life cycle activities.

    4) As frameworks for analyzing or estimating patterns of resource allocation and consumption during

    the software life cycle.

    5) As comparative descriptive or prescriptive accounts for how software systems come to be the way

    they are.

    6) As a basis for conducting empirical studies to determine what affects software productivity, cost, and

    overall quality.

    >

    Q. 56: What is a software process model?

    A software process model often represents a networked sequence of activities, objects,

    transformations, and events that embody strategies for accomplishing software evolution. Such models

can be used to develop more precise and formalized descriptions of software life cycle activities. Their power emerges from their utilization of a sufficiently rich notation, syntax, or semantics, often suitable

    for computational processing.

    Software process networks can be viewed as representing methodical task chains. Task chains structure

the transformation of computational entities through a sequence of actions that denote each

    process activity. Task chains are idealized plans of what actions should be accomplished, and in what

    order.

    >

    Q. 57: What is the difference between Evolutionistic and Evolutionary Models?

Every model of software evolution makes certain assumptions about what evolution means.

    In one such analysis of these assumptions, two distinct views are apparent:

    Evolutionistic models focus attention to the direction of change in terms of progress through a series of

    stages eventually leading to some final stage; evolutionary models on the other hand focus attention to


    the mechanisms and processes that change systems. Evolutionistic models are often intuitive and useful

    as organizing frameworks for managing and tooling software development efforts. But they are poor

    predictors of why certain changes are made to a system, and why systems evolve in similar or different

    ways.

Evolutionary models are concerned less with the stage of development and more with the technological mechanisms and organizational processes that guide the emergence of a system over space and time. As

such, it should become apparent that the traditional models are evolutionistic, while most of the

    alternative models are evolutionary.

    >

    Q. 58: What is Classic Software life cycle?

    The classic software life cycle is often represented as a simple waterfall software phase model, where

    software evolution proceeds through an orderly sequence of transitions from one phase to the next in

    linear order. Such models resemble finite state machine descriptions of software evolution. However,

    such

    models have been perhaps most useful in helping to structure and manage large software development

    projects in organizational settings.

    >

    Q. 59: What is the Spiral Model or Non-Operational Process Model?

The spiral model of software development and evolution represents a risk-driven approach to software process analysis and structuring. The approach incorporates elements of specification-driven and

    prototype driven process methods. It does so by representing iterative development cycles in a spiral

    manner, with inner cycles denoting early analysis and prototyping, and outer cycles denoting the classic

    system life cycle.

    The radial dimension denotes cumulative development costs, and the angular dimension denotes

    progress made in accomplishing each development spiral. Risk analysis, which seeks to identify

situations that might cause a development effort to fail or go over budget/schedule, occurs during

    each spiral cycle. In each cycle, it represents roughly the same amount of angular displacement, while

the displaced sweep volume denotes increasing levels of effort required for risk analysis. System development in this model therefore spirals out only so far as needed according to the risk that must be

    managed.

    >


    Q. 62: What are the types of errors targeted by regression testing?

1) Data corruption errors: These errors are side effects due to shared data.

2) Inappropriate control sequencing errors: These errors are side effects due to changes in execution

sequences. An example of this type of error is the attempt to remove an item from a queue before it is placed into the queue.

3) Resource contention: Examples of these types of errors are potential bottlenecks and deadlocks.

4) Performance deficiencies: These include timing and storage utilization errors.

    >

    Q. 63: What are the types of errors targeted by integration testing?

1) Import/export range errors: This type of error occurs when the source of input parameters falls

outside of the range of their destination. For example, assume module A calls module B with table

pointer X. If A assumes a maximum table size of 10 and B assumes a maximum table size of 8, an

import/export range error occurs. The detection of this type of error requires careful boundary-value

testing of parameters (see the sketch after this list).

2) Import/export type compatibility errors: This type of error is attributed to a mismatch of user-defined

types. These errors are normally detected by compilers or code inspections.

3) Import/export representation errors: This type of error occurs when parameters are of the same type, but the meaning of the parameters is different in the calling and called modules. For example,

    assume module A passes a parameter Elapsed_Time, of type real, to module B. Module A might pass the

value as seconds, while module B is assuming the value is passed as milliseconds. These types of errors

    are difficult to detect, although range checks and inspections provide some assistance.

4) Parameter utilization errors: Dangerous assumptions are often made concerning whether a module

    called will alter the information passed to it. Although support for detecting such errors is provided by

    some compilers, careful testing and/or inspections may be necessary to insure that values have not

    been unexpectedly corrupted.

5) Integration time domain/computation errors: A domain error occurs when a specific input follows

    the wrong path due to an error in the control flow. A computation error exists when a specific input

    follows the correct path, but an error in some assignment statement causes the wrong function to be

    computed. Although domain and computation errors are normally addressed during module testing, the

    concepts apply across module boundaries. In fact, some domain and computation errors in the


    integrated program might be masked during integration testing if the module being integrated is

    assumed to be correct and is treated as a black box.
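
A minimal sketch of the table-size mismatch from item 1 above (the module names and sizes follow that example; the code itself is our own illustration):

# Module B's view of the interface: it accepts tables of at most 8 entries.
MAX_TABLE_SIZE_B = 8

def module_b(table):
    assert len(table) <= MAX_TABLE_SIZE_B, "import/export range error"
    return sum(table)

# Module A's view: it believes tables of up to 10 entries are allowed.
MAX_TABLE_SIZE_A = 10

def module_a():
    table = list(range(MAX_TABLE_SIZE_A))   # 10 entries: legal for A, out of range for B
    return module_b(table)

# Boundary-value testing at the integration boundary (sizes 8, 9, 10) exposes the mismatch:
print(module_b(list(range(8))))   # size 8 passes
# module_a()                      # size 10 would trip the assertion in module B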

    >

    Q. 64: What are the different strategies for integration testing?

    Several strategies for integration testing exist. These strategies may be used independently or in

    combination. The primary techniques are:

    1) Top-down integration: Top-down integration attempts to combine incrementally the components of

    the program, starting with the topmost element and simulating lower level elements with stubs. Each

    stub is then replaced with an actual program component as the integration process proceeds in a top-

    down fashion. Top-down integration is useful for those components of the program with complicated

    control structures. It also provides visibility into the integration process by demonstrating a potentially

    useful product early.

    2) Bottom-up integration: Bottom-up integration attempts to combine incrementally components of

    the program starting with those components that do not invoke other components. Test drivers must be

    constructed to invoke these components. As bottom-up integration proceeds, test drivers are replaced

    with the actual program components that perform the invocation, and new test drivers are constructed

    until the "top" of the program is reached. Bottom-up integration is consistent with the notion of

    developing software as a series of building blocks. Bottom-up integration should proceed wherever the

    driving control structure is not too complicated.

    3) Big-bang integration: Big-bang integration is not an incremental strategy and involves combining and

    testing all modules at once. Except for small programs, big-bang integration is not a cost-effective

    technique because of difficulty of isolating integration testing failures.

    4) Threaded integration: Threaded integration is an incremental technique that identifies major

    processing functions that the product is to perform and maps these functions to modules implementing

    them. Each processing function is called a thread. A collection of related threads is often called a build.

    Builds may serve as a basis for test management. To test a thread, the group of modules corresponding

    to the thread is combined. For those modules in the thread with interfaces to other modules not

    supporting the thread, stubs are used. The initial threads to be tested normally correspond to the

    "backbone" or "skeleton" of the product under test. The addition of new threads for the product

    undergoing integration proceeds incrementally in a planned fashion.

    >


    Q. 65: What are the guidelines for selecting paths in a transaction Flow?

1) Test every link/decision in the transaction flow graph.

2) Test each loop with a single, double, typical, maximum, and maximum-less-one number of iterations.

    3) Test combinations of paths within and between transaction flows.

    4) Test that the system does not do things that it is not supposed to do, by watching for unexpected

    sequences of paths within and between transaction flows.

Once the transaction flows have been identified, black-box testing techniques can be utilized to generate

    test data for selected paths through the transaction flow diagram.

    >

    Q. 66: What is Failure Analysis?

Failure analysis is the examination of the product's reaction to failures of hardware or software. The

product's specifications must be examined to determine precisely which types of failures must be

analyzed and what the product's reaction must be. Failure analysis is sometimes referred to as "recovery

    testing".

Failure analysis must be performed during each of the product's V&V activities. It is essential during

requirement and specification V&V activities that a clear statement of the product's response to various

types of failures be addressed in terms that allow analysis. The design must also be analyzed to show

that the product's reaction to failures satisfies its specifications. The failure analysis of implementations often occurs during system testing. This testing may take the form of simulating hardware or software

    errors or actual introduction of these types of errors.

    Failure analysis is essential to detecting product recovery errors. These errors can lead to lost files, lost

    data, duplicate transactions, etc. Failure analysis techniques can also be combined with other

approaches during V&V activities to insure that the product's specifications for such attributes as

    performance, security, safety, usability, etc., are met.

    >

    Q. 67: What is Concurrency Analysis?

    Concurrency analysis examines the interaction of tasks being executed simultaneously within the

    product to insure that the overall specifications are being met. Concurrent tasks may be executed in

    parallel or have their execution interleaved. Concurrency analysis is sometimes referred to as

    "background testing".


    For products with tasks that may execute in parallel, concurrency analysis must be performed during

each of the product's V&V activities. During design, concurrency analysis should be performed to

    identify such issues as potential contention for resources, deadlock, and priorities. A concurrency

    analysis for implementations normally takes place during system testing. Tests must be designed,

    executed, and analyzed to exploit the parallelism in the system and insure that the specifications are

    met.

    >

    Q. 68: What is Performance Analysis?

    The goal of performance analysis is to insure that the product meets its specified performance

    objectives. These objectives must be stated in measurable terms, so far as possible. Typical performance

    objectives relate to response time and system throughput.

A performance analysis should be applied during each of the product's V&V activities. During requirement and specification V&V activities, performance objectives must be analyzed to insure

    completeness, feasibility, and testability. Prototyping, simulation, or other modeling approaches may be

    used to insure feasibility. For designs, the performance requirements must be allocated to individual

    components.

    These components can then be analyzed to determine if the performance requirements can be met.

    Prototyping, simulation and other modeling approaches again are techniques applicable to this task. For

    implementations a performance analysis can take place during each level of testing. Test data must be

    carefully constructed to correspond to the scenarios for which the performance requirements were

    specified.

    >

    Q. 69: What is Proof of Correctness?

    Proof of correctness is a collection of techniques that apply the formality and rigor of mathematics to

    the task of proving the consistency between an algorithmic solution and a rigorous, complete

    specification of the intent of the solution. This technique is also often referred to as "formal

    verification."

    Proof of correctness techniques are normally represented in the context of verifying an implementation

    against a specification. The techniques are also applicable in verifying the correctness of other products,

    as long as they possess a formal representation.

    >


    Q. 70: What are the different types of evaluations for assuring software quality?

    Different evaluation types are:

    1) Internal consistency of product

    2) Understandability of product

    3) Traceability to indicated documents

    4) Consistency with indicated documents

    5) Appropriate allocation of sizing, timing resources

    6) Adequate test coverage of requirements

    7) Consistency between data definitions and use

    8) Adequacy of test cases and test procedures

    9) Completeness of testing

    10) Completeness of regression testing

    ISTQB Advanced CTAL Exam Study Guide (Part 8)

Q. 71: What type of information is to be collected and documented for problem tracking as part of the

V&V plan?

    1) When the problem had occurred

    2) Where the problem had occurred

    3) State of the system before occurrence

    4) Evidence of the problem

    5) Actions or inputs that appear to have led to occurrence

6) Description of how the system should work; reference to relevant requirements

7) Priority for solving problem

    8) Technical contact for additional information

    >

    Q. 72: What type of data needs to be collected and documented for tracking of test activities as a part

of the V&V plan?

    1) Number of tests executed

    2) Number of tests remaining

    3) Time used

    4) Resources used

    5) Number of problems found and the time spent finding them

    This data can then be used to track actual test progress against scheduled progress. The tracking

    information is also important for future test scheduling.


    >

Q. 73: Explain Shewhart's Plan-Act-Check-Do paradigm?

Shewhart's paradigm, applied to both the software product and process, generally consists of the

    following activities:

    1) Plan: The SEI capability maturity model is a general framework or plan for developing five increasingly

    improved levels (initial, repeatable, defined, managed, and optimizing) of software process maturity.

    Because the CMM is designed to be generic, each organization must customize its process improvement

    plan for its own application(s), environment, and company organization. The five levels are designed as a

    logical progression, so each level must be achieved, in order, from one to five. It is not possible to skip

    levels.

    2) Act: Because software is not produced by a manufacturing process, software designers must both

strive to meet the user's functional requirements for the product and design for correct implementation

    and easy maintainability.

    3) Check: Software inspections and peer reviews are the major product control mechanism used.

Quantifiable inspection results such as change requests provide the foundation for measurable process

control and improvement. Audits are the most usual process verification mechanism.

Auditors need to examine not only whether the standards, procedures, and tools are adequate,

but also how well the project is following the prescribed process plans.

    4) Do: Software quality control is often specified both by the customer acceptance criteria in the

    contract or requirements specification and by whether the software product meets written standards.

    Software measures are used to measure product quality in a quantifiable way.

    The most common process quality control approach is tracking actual against expected performance.

    Causes for significant deviation from the plan are found and corrected.

    >

    Q. 74: What are the uses of the CMM?

    The capability maturity model, developed by the Software Engineering Institute, is designed to help both

    development organizations and customers (government organizations or companies who acquire

    software). Software organizations need to understand the quality of their software process and how to

improve it. Organizations contracting for software need ways to evaluate a potential contractor's

    capability to carry out the work.


    The CMM has four intended uses to help organizations improve their software process capabilities:

    1) Identify improvements

    2) Identify risks in selecting contractors

    3) Implement a process improvement program

    4) Guide definition and development of the software process

    >

    Q. 75: What is SPA "Software Process Assessment" method for applying CMM?

    A software process assessment is an in-house determination, primarily of the weaknesses of the

    software process in an organization as a whole. It is an internal tool that an organization can choose as a

    part of an overall program for improving its ability to produce high-quality products on time and within

    budget.

    The objectives of the SPA method are as under:

    (1) Identify strengths, weaknesses, and existing improvement activities on which to base an

    organization-wide improvement effort and

(2) Get organizational buy-in to that effort.

    The method is used to help an organization identify key areas for improvement, begin to baseline its

    software process, and initiate improvements.

    >

    Q. 76: What is SCE "Software Capability Evaluation" method for applying CMM?

A software capability evaluation is an independent evaluation of an organization's software process as it

    relates to a particular acquisition. It is a tool that helps an external group (an "acquirer") determine the

organization's ability to produce a particular product having high quality and to produce it on time and

    within budget.

    The objective of the SCE method is to identify strengths, weaknesses, and existing improvement

activities in a supplier's software process that best indicate the risk associated with using that supplier

    for a particular software acquisition. The method is used to identify software risks, help in mitigating

    these risks, and motivate initiation of improvement programs.


    achieving the goals of the KPA. Goals can be used to resolve whether an organization or project has

    adequately implemented a key process area. Goals signify the scope, boundaries, and intent of each key

    process area.

Key process areas are building blocks: fundamental activities for organizations trying to improve their

    software process. Other process areas exist, but these were selected as particularly effective in

    improving process capability. Each key process area is unique to a single maturity level.

    >

    Q. 79: What are the Key Practices in CMM?

    Key practices are the lowest level, specific details of the CMM. Key practices define each key process

area by specifying policies, procedures, and activities that contribute to satisfying its goal. They are a

    working definition of the key process area.

    Key practices provide a link between the CMM and the maturity questionnaire. Specific questions relate

    to specific key practices. Industry experience and empirical studies were used to identify the key

    practices chosen by the SEI. Each key practice describes, but does not mandate, how that practice

    should be performed.

    >

    Q. 80: What are the Maturity Questionnaire in CMM?

The maturity questionnaire consists of questions about the software process that sample the practices in each key process area.

The maturity questionnaire is a springboard for an assessment or evaluation team's visit. The CMM

provides a hierarchical structure that guides the team in investigating an organization's software

    process. Answers to the questions identify process strengths and weaknesses in terms of key process

    areas. Questions in the maturity questionnaire are designed to determine the presence or absence of

    the various key practices. Questions are not open-ended, but are intended to obtain a quantified result

from the following answers: yes, no, don't know, and not applicable.

The SCE team identifies strengths, weaknesses, and improvement activities that they consider to be most relevant to performance on the acquisition contract. A group in the acquisition agency then

    transforms the findings into acquisition risks and/or technical ratings, which, along with other criteria,

    the agency can use to select a source or monitor a contract.

    The SPA team also analyzes the questionnaire data to determine the current software process maturity

    level, identify key findings (that is, determine what will impede capability to produce quality software),


    and note strengths the organization can build upon. The team presents the results to senior

    management and, often, to the entire organization that was assessed. The team often enlists the aid of

    others within the organization to make recommendations for process improvement actions. An action

    planning group (often a software engineering process group, under the guidance of a management

    steering committee) develops the strategies for accomplishing long-term process improvement and

    determines what improvements are achievable within a specific time frame. They work with many

    others in the organization to create an action plan and implement it.